Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Blake Dunlap
On Mon, Jan 10, 2011 at 18:18, Phil Stracchino  wrote:

> On 01/10/11 16:49, Mike Ruskai wrote:
> > So simply having the catalog backup be a different priority ensures that
> > no other job can run at the same time, provided mixed priorities are
> > disallowed (that would allow the higher-priority backup jobs to start
> > while the catalog backup is under way).  Which is just as well, since I
> > don't like the idea of relying on database or table locks.
>
> Yes, disabling mixed priorities would prevent any higher-priority job
> from starting while the catalog update was running.
>
>
> --
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>

What you may wish to request (I can't imagine it would be difficult) is a
maximum mixed-priority level option, where any job above that number
ignores the fact that mixed priorities are allowed and waits until
everything else has finished. I would certainly vote for such a feature.
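
For reference, a minimal sketch of the directives being discussed as they
exist today; the job name and numbers below are made up for illustration,
not taken from any poster's config:

Job {
  Name = "nightly-data"          # hypothetical job
  Priority = 10                  # lower number = scheduled earlier
  Allow Mixed Priority = yes     # lets this job run alongside jobs of other priorities
  ...
}

The suggested option would, in effect, make any job above a chosen priority
number behave as if "Allow Mixed Priority" were turned off.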

I will look into doing it myself, but I can't guarantee anything: my boss
would first have to agree that we have an internal business need for such an
additional feature, and I'm not sure we do at the moment, though I certainly
see the appeal of the capability.


[Bacula-users] Failing to connect to storage device - still the nkotb

2011-01-10 Thread lance raymond
My eyes are close to bleeding from all the reading, sites, etc., but still no
luck (darn close, though).  There are so many docs that even the main one is
not very helpful for just getting things installed, so feel free to help
someone before blindness takes over!  I am close, and rather than just
'getting it working', I am also trying to understand, so I can go from
backing up a local directory to adding SD devices, other servers, etc.

I have read a LOT these past few days and now have the following: a CentOS
5.5 server, with Bacula built from the EPEL repo via yum.  I have webmin
installed with the bacula module (trying to make things easier), all 3
services (director, sd and fd) on the same box, and I want to test with a
local backup of a folder.  The webmin module has a nice simple home page
showing director, client and file-storage status; when I select 'show
status' for each, I get the following:

Director status shows the test backups as failed.

Client status shows the local 127.0.0.1 client I set up first; clicking test
now I get:
Status from servername : bacula-fd Version: 2.0.3 (06 March 2007)
i686-redhat-linux-gnu redhat Enterprise release
Running Backup Jobs
No backup jobs are currently running.

Last is the storage.  When I select that, I get the following:
Failed to fetch status from File : Failed to connect to Storage daemon
File.

So I am wondering where the problem is.  As you can see, I don't have a tape
drive; I'm trying to back up to a local storage area, /data.  I verified the
director password against the one used in bacula-fd, and they're an exact
match.  The /etc/bacula/bacula-sd.conf file has the following:

Storage { # definition of myself
  Name = bacula-sd
  SDport = 9103
  WorkingDirectory = /var/spool/bacula
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 20
}

Director {
  Name = bacula-dir
  Password = "pass-here"
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /data
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

I have read as much as I could, and things mostly work: a backup does start,
but it fails when it tries to attach to the storage device.  I'm sure it's
something simple, or something the examples I followed didn't cover, since a
lot of them were for tape drives.
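
For comparison, this is the general shape of the Director-side Storage
resource that has to line up with the bacula-sd.conf above.  This is a
minimal sketch only; the Name, Address and password here are assumptions
based on the error message, not taken from the original post:

Storage {
  Name = File                 # the name the "Failed to connect to Storage daemon File" error refers to
  Address = 127.0.0.1         # must be an address/hostname the clients can actually reach
  SDPort = 9103
  Password = "pass-here"      # must match Password in the Director {} resource of bacula-sd.conf
  Device = FileStorage        # must match the Device Name in bacula-sd.conf
  Media Type = File           # must match the Device Media Type in bacula-sd.conf
}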

Thanks for the time of reading and helping out ...


Re: [Bacula-users] Can't connect to Remote. - Update

2011-01-10 Thread Wayne Spivak
Martin,

Believe it or not, a typo... (hence I'm an idiot).

I can ping (and telnet) from Emma to the public boxes and back again without
problems.

I can always change it to an IP address ...

Thx

Wayne

-Original Message-
From: Martin Simmons [mailto:mar...@lispworks.com] 
Sent: Monday, January 10, 2011 1:39 PM
To: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Can't connect to Remote. - Update

> On Sun, 9 Jan 2011 21:56:55 -0500, Wayne Spivak said:
> 
> After tweaking (and upgrading to 5.0.3) the Bacula-Dir.Conf file a bit (and
> finding that I'm an idiot) I solved my problems with all the clients within
> (on the inside) my firewall.  In fact, I added another client (windows)
> without problems.

What fixed it in the end?


> But the two clients that are outside (Fedora 13 and Fedora 8 boxes), which
> worked under Bacula 5.02 under Fedora 11, don't work and I'm still baffled.
> 
> The error is that a connection from the Remote can't connect to the Storage
> Daemon.  All the files for bacula-fd are the same (except for the fd name).
> 
> I can telnet both ways on 9102 and remote to director/storage server and
> from the remote to the director/storage server on 9101 and 9103.
> 
> Any further tips would be appreciated and on finding the error.

Are you using an FQDN in the director storage config, and can the remote
clients resolve it?

__Martin




Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Phil Stracchino
On 01/10/11 16:49, Mike Ruskai wrote:
> So simply having the catalog backup be a different priority ensures that 
> no other job can run at the same time, provided mixed priorities are 
> disallowed (that would allow the higher-priority backup jobs to start 
> while the catalog backup is under way).  Which is just as well, since I 
> don't like the idea of relying on database or table locks.

Yes, disabling mixed priorities would prevent any higher-priority job
from starting while the catalog update was running.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Ignoring Job Errors

2011-01-10 Thread Dan Langille
On 1/10/2011 6:03 PM, rauch.hol...@googlemail.com wrote:
> On Mon, 10 Jan 2011, Dan Langille wrote:
>
>> On 1/10/2011 5:03 AM, rauch.hol...@googlemail.com wrote:
>>> Hi,
>>>
>>> even when looking through
>>>
>>> http://www.bacula.org/5.0.x-manuals/en/main/main/
>>>
>>> I haven't found a way to ignore errors in backup jobs (I only want to be
>>> notified about the errors, but I don't want the entire backup job to be
>>> canceled).
>>>
>>> What's the right option to achieve this behavior and in which configuration
>>> resource (Client, Job, etc.) can it be used?
>>>
>>> (Apologies in case I overlooked something).
>>>
>>> Thanks in advance&   kind regards,
>>
>> What errors?

 > Hi Dan,
 >
 > I was referring to any kind of error (i.e. the generalization was intended)
 > regardless of the source (Bacula FD, SD, Director). So, I would like a
 > backup job to continue to the farthest extent possible. Thus, my question is
 > still valid, I think.
 >
 > Thanks in advance & kind regards,
 >
 > Holger
 >

If you reply at the bottom of the email, it is easier to read the entire 
story.

Well, most errors are fatal errors; warnings are just warnings.  Bacula will
only fail the job if it must.

If you can be more specific as to the errors you are encountering, we might
be able to be of more help.  :)


-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Ignoring Job Errors

2011-01-10 Thread rauch . holger
Hi Dan,

I was referring to any kind of error (i.e. the generalization was intended)
regardless of the source (Bacula FD, SD, Director). So, I would like a
backup job to continue to the farthest extent possible. Thus, my question is
still valid, I think.

Thanks in advance & kind regards,

   Holger

On Mon, 10 Jan 2011, Dan Langille wrote:

> On 1/10/2011 5:03 AM, rauch.hol...@googlemail.com wrote:
> >Hi,
> >
> >even when looking through
> >
> >http://www.bacula.org/5.0.x-manuals/en/main/main/
> >
> >I haven't found a way to ignore errors in backup jobs (I only want to be
> >notified about the errors, but I don't want the entire backup job to be
> >canceled).
> >
> >What's the right option to achieve this behavior and in which configuration
> >resource (Client, Job, etc.) can it be used?
> >
> >(Apologies in case I overlooked something).
> >
> >Thanks in advance&  kind regards,
> 
> What errors?
> 
> -- 
> Dan Langille - http://langille.org/




Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread James Harper
> 
> So simply having the catalog backup be a different priority ensures that
> no other job can run at the same time, provided mixed priorities are
> disallowed (that would allow the higher-priority backup jobs to start
> while the catalog backup is under way).  Which is just as well, since I
> don't like the idea of relying on database or table locks.
> 

That's what I do. My normal jobs are priority 50, my catalog job is
priority 99, and my tape eject job is priority 100.
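
In bacula-dir.conf that arrangement looks roughly like the sketch below;
the job names are hypothetical, and only the Priority values come from the
description above:

Job {
  Name = "client1-backup"   # hypothetical normal backup job
  Priority = 50
  ...
}
Job {
  Name = "BackupCatalog"    # hypothetical catalog job
  Priority = 99             # waits until all priority-50 jobs have finished
  ...
}
Job {
  Name = "EjectTape"        # hypothetical admin job
  Priority = 100            # runs last of all
  ...
}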

James



Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Rodrigo Renie Braga
Well, thank you very much for your time.  As for the FullPool directive, I
have already changed my new fresh install to do this in the Job section; as
soon as I get the results, I'll post them here...

Thanks again!

2011/1/10 Phil Stracchino 

> On 01/10/11 14:21, Rodrigo Renie Braga wrote:
> > Yes, it does (see the following configuration). One more thing... I was
> > using the latest 3.0 version of Bacula and I updated it to 5.0.3
> > (Clients and Director). Could this be the problem? Just to make sure,
> > I'm creating a fresh install of Bacula 5.0.3 just to see if this problem
> > remains...
> >
> > Pool {
> > Name = pool.tpa.full
> > Pool Type = Backup
> > Storage = st.tpa
> > Volume Use Duration = 28d
> > Volume Retention = 90d
> > Maximum Volumes = 21
> > Recycle = yes
> > AutoPrune = yes
> > Scratch Pool = scratch.tpa
> > RecyclePool = scratch.tpa
> > Cleaning Prefix = "CLN"
> > }
> >
> > Pool {
> > Name = pool.tpb.diff
> > Pool Type = Backup
> > Storage = st.tpb
> > Volume Use Duration = 6d
> > Volume Retention = 30d
> > Maximum Volumes = 4
> > Recycle = yes
> > AutoPrune = yes
> > Scratch Pool = scratch.tpb
> > RecyclePool = scratch.tpb
> > Cleaning Prefix = "CLN"
> > }
>
> That all looks good to me.
>
> One comment here:
>
> > Schedule {
> > Name = sch.tpa
> > Run = Level=Full Pool=pool.tpa.full 1st sun at 01:00
> > Run = Level=Differential FullPool=pool.tpa.full Pool=pool.tpb.diff
> > 2nd-5th sun at 01:00
> > Run = Level=Incremental FullPool=pool.tpa.full Pool=pool.tpb.inc
> > mon-sat at 01:00
> > }
>
> Schedule-level Pool overrides have been deprecated because they could
> not be made to work reliably in all cases.  There were just too many
> cases in which it would do the wrong thing (for instance, when promoting
> jobs).  You should be using the Full Pool, Differential Pool,
> Incremental Pool directives in your Job or JobDefs resources.
>
> This won't affect the problem you're seeing, though.
>
>
> --
>   Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>
>


Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Mike Ruskai
On 1/10/2011 4:20 PM, Phil Stracchino wrote:
> On 01/10/11 15:48, Mike Ruskai wrote:
>
>> I know how to backup the database.  That's not my question.  My question
>> is how to create a schedule, in an environment with multiple concurrent
>> jobs, that guarantees that the catalog backup runs after all other
>> scheduled jobs, and by itself.
>>
>> So if I have three machines currently running a backup job
>> simultaneously, the catalog backup has to wait until they finish, even
>> if the maximum number of concurrent jobs is four.  Once they do finish,
>> and the catalog backup starts, no other jobs can start until the catalog
>> backup is complete.
>>
>> Can this be done in Bacula, or do I need to do custom synchronization in
>> run-before scripts?
>>  
> If you set the Catalog job to a lesser priority (say, 15, where the
> highest priority is 1, and most normal jobs are priority 10), you
> guarantee that the Catalog job will not be started while any
> higher-priority job is running.
>
> Guaranteeing that no higher-priority job can start while the Catalog job
> is running is a more difficult problem, but one that usually isn't an
> issue unless you typically have jobs running around the clock and being
> started at varying intervals.  However, since the DB dump or snapshot
> that typically starts most DB backup schemes should be an atomic
> operation that occurs with the tables locked, if another job starts
> while your catalog job is running it really shouldn't matter.  It's not
> going to be able to modify the catalog while the catalog is being dumped.
>
>
I should have read more carefully about concurrent jobs.  It seems that 
enabling them basically kills your ability to prioritize jobs, since only 
jobs of the same priority run at the same time (and the mixed-priority 
option only minimally changes that).  So if the limit is X concurrent jobs 
and X jobs are currently running, it's not possible to queue up another ten 
jobs and have them consume the available backup slots in a specific order 
based on priority; they will presumably run in the order they were started, 
which means the scheduler has to take over as the priority manager.

So simply having the catalog backup be a different priority ensures that 
no other job can run at the same time, provided mixed priorities are 
disallowed (that would allow the higher-priority backup jobs to start 
while the catalog backup is under way).  Which is just as well, since I 
don't like the idea of relying on database or table locks.





Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Steve Ellis
On 1/10/2011 7:29 AM, Guy wrote:
> Indeed it was, and for me that is the right thing. It's all in subversion,
> which is itself backed up.
>
> ---Guy
> (via iPhone)
>
> On 10 Jan 2011, at 15:18, Dan Langille  wrote:
>
However, if you have people planning on making commits to subversion, 
you've now excluded their (uncommitted) changes as well.  If that is what 
you want (which it could be if all your subversion checkouts are really 
'read-only', i.e. for reference), then you're all set.  Otherwise, if you 
want to exclude just the contents of the .svn directories (which hold 
unmodified copies of everything in the parent directory), you might want to 
key the exclusion on one of the file or directory names found inside the 
.svn directory (there are several; unfortunately, none is quite as unlikely 
to appear elsewhere as .svn).
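
An alternative way to drop only the .svn metadata directories is a wild-card
exclude Options block.  This is a minimal sketch; the FileSet name, path and
pattern are assumptions and may need adjusting for your tree.  Bacula applies
Options blocks in order, so the excluding block comes first:

FileSet {
  Name = "WorkAreas-NoSvn"      # hypothetical
  Include {
    Options {
      wilddir = "*/.svn"        # match .svn directories at any depth
      exclude = yes             # drop them (and everything below them) from the backup
    }
    Options {
      signature = MD5
    }
    File = /home/projects       # hypothetical path to the checkouts
  }
}

Working files in the parent directories, including uncommitted edits, are
still backed up with this approach.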

-se



Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Phil Stracchino
On 01/10/11 15:48, Mike Ruskai wrote:
> I know how to backup the database.  That's not my question.  My question 
> is how to create a schedule, in an environment with multiple concurrent 
> jobs, that guarantees that the catalog backup runs after all other 
> scheduled jobs, and by itself.
> 
> So if I have three machines currently running a backup job 
> simultaneously, the catalog backup has to wait until they finish, even 
> if the maximum number of concurrent jobs is four.  Once they do finish, 
> and the catalog backup starts, no other jobs can start until the catalog 
> backup is complete.
> 
> Can this be done in Bacula, or do I need to do custom synchronization in 
> run-before scripts?

If you set the Catalog job to a lesser priority (say, 15, where the
highest priority is 1, and most normal jobs are priority 10), you
guarantee that the Catalog job will not be started while any
higher-priority job is running.

Guaranteeing that no higher-priority job can start while the Catalog job
is running is a more difficult problem, but one that usually isn't an
issue unless you typically have jobs running around the clock and being
started at varying intervals.  However, since the DB dump or snapshot
that typically starts most DB backup schemes should be an atomic
operation that occurs with the tables locked, if another job starts
while your catalog job is running it really shouldn't matter.  It's not
going to be able to modify the catalog while the catalog is being dumped.
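
For reference, a catalog job along those lines, loosely modelled on the
sample BackupCatalog job shipped with Bacula 5.x; the script paths, schedule,
fileset and JobDefs names vary by install and are assumptions here:

Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"                   # hypothetical defaults supplying Type, Client, Storage, Pool, Messages
  Level = Full
  FileSet = "Catalog"                      # a FileSet containing only the dump file
  Schedule = "WeeklyCycleAfterBackup"
  Priority = 11                            # lower priority than the 10 used by the data jobs, so it waits for them
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  RunAfterJob  = "/etc/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/var/spool/bacula/%n.bsr"
}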



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Mike Ruskai
On 1/8/2011 2:07 PM, Dan Langille wrote:
> On 1/7/2011 3:03 PM, Mike Ruskai wrote:
>> Right now, I'm backing up to a drive array with one concurrent job.  I'd
>> like to increase that, and really need to for a new backup environment
>> I'm creating soon.
>>
>> With the single concurrent job, getting a consistent catalog backup is
>> trivial - I just schedule it to run one minute after the last data
>> backup job, and give it the lowest priority, so it always runs after
>> everything else.
>>
>> How do I accomplish the same general thing with multiple concurrent
>> jobs?  That means the catalog backup needs to run by itself, so the
>> database is consistent, and needs to run after all other jobs have
>> completed.
>
> This is a FAQ (or should be listed there).
>
> Short answer: You do the same thing you are doing now.
>
> Long answer: run before script: run the dump program for your database 
> and create text file.  Backup that text file.  Some people delete said 
> file in the run after script.  I don't. I keep it.
>

I know how to backup the database.  That's not my question.  My question 
is how to create a schedule, in an environment with multiple concurrent 
jobs, that guarantees that the catalog backup runs after all other 
scheduled jobs, and by itself.

So if I have three machines currently running a backup job 
simultaneously, the catalog backup has to wait until they finish, even 
if the maximum number of concurrent jobs is four.  Once they do finish, 
and the catalog backup starts, no other jobs can start until the catalog 
backup is complete.

Can this be done in Bacula, or do I need to do custom synchronization in 
run-before scripts?




Re: [Bacula-users] Restoring to a folder?

2011-01-10 Thread Romer Ventura


Thanks

Romer Ventura


-Original Message-
From: Dan Langille [mailto:d...@langille.org] 
Sent: Monday, January 10, 2011 11:32 AM
To: Romer Ventura
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Restoring to a folder?

On 1/10/2011 10:54 AM, Romer Ventura wrote:
> -Original Message-
> From: Dan Langille [mailto:d...@langille.org]
> Sent: Monday, January 10, 2011 10:02 AM
> To: Romer Ventura
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Restoring to a folder?
>
> On 1/10/2011 10:21 AM, Romer Ventura wrote:
>> Hello,
>>
>> I was trying to do a restore, and after I marked all the folders/files
>> I needed, the process started and successfully finished.  However, when
>> I go to the folder where the files are supposed to be, they are not there.
>> The folder is empty.  The client from which Bacula takes this data has
>> not changed, so I know it didn't restore to the remote client.
>>
>> Any ideas?
>>
>> Here is some info:
>>
>> Job {
>>
>> Name = "RestoreFiles"
>>
>> Type = Restore
>>
>> Client=housigma25-fd
>>
>> FileSet="Full Set"
>>
>> Storage = SLDLTv4
>>
>> Pool = Default
>>
>> Messages = Standard
>>
>> Where = /srv/staging/restores
>>
>> }
>>
>> FileSet {
>>
>> Name = "Full Set"
>>
>> Include {
>>
>> Options {
>>
>> signature = MD5
>>
>> }
>>
>> File = /usr/sbin
>>
>> }
>>
>> Exclude {
>>
>> File = /var/lib/bacula
>>
>> File = /nonexistant/path/to/file/archive/dir
>>
>> File = /proc
>>
>> File = /tmp
>>
>> File = /.journal
>>
>> File = /.fsck
>>
>> }
>>
>> }
>>
>> LOG FILE:
>>
>> 08-Jan 17:01 housigma25-dir JobId 177: Start Restore Job
>> RestoreFiles.2011-01-08_17.01.42_05
>>
>> 08-Jan 17:01 housigma25-dir JobId 177: Using Device "DLTv4"
>>
>> 08-Jan 17:01 housigma25-sd JobId 177: 3307 Issuing autochanger 
>> "unload slot 7, drive 0" command.
>>
>> 08-Jan 17:02 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
>> slot 6, drive 0" command.
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: 3305 Autochanger "load slot 6, 
>> drive 0", status is OK.
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: Ready to read from volume "CNH913"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: Forward spacing Volume "CNH913"
>> to file:block 90:0.
>>
>> 08-Jan 17:32 housigma25-sd JobId 177: End of Volume at file 107 on 
>> device "DLTv4" (/dev/nst0), Volume "CNH913"
>>
>> 08-Jan 17:34 housigma25-sd JobId 177: 3307 Issuing autochanger 
>> "unload slot 6, drive 0" command.
>>
>> 08-Jan 17:35 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
>> slot 5, drive 0" command.
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: 3305 Autochanger "load slot 5, 
>> drive 0", status is OK.
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: Ready to read from volume "CNH914"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: Forward spacing Volume "CNH914"
>> to file:block 121:0.
>>
>> 08-Jan 17:40 housigma25-sd JobId 177: End of Volume at file 122 on 
>> device "DLTv4" (/dev/nst0), Volume "CNH914"
>>
>> 08-Jan 17:41 housigma25-sd JobId 177: 3307 Issuing autochanger 
>> "unload slot 5, drive 0" command.
>>
>> 08-Jan 17:42 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
>> slot 7, drive 0" command.
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: 3305 Autochanger "load slot 7, 
>> drive 0", status is OK.
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: Ready to read from volume "CNH909"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: Forward spacing Volume "CNH909"
>> to file:block 52:0.
>>
>> 08-Jan 17:48 housigma25-sd JobId 177: End of Volume at file 52 on 
>> device "DLTv4" (/dev/nst0), Volume "CNH909"
>>
>> 08-Jan 17:48 housigma25-sd JobId 177: End of all volumes.
>>
>> 08-Jan 17:48 housigma25-dir JobId 177: Bacula housigma25-dir 5.0.2
>> (28Apr10): 08-Jan-2011 17:48:03
>>
>> Build OS: i486-pc-linux-gnu debian 5.0.4
>>
>> JobId: 177
>>
>> Job: RestoreFiles.2011-01-08_17.01.42_05
>>
>> Restore Client: housigma34-fd
>>
>> Start time: 08-Jan-2011 17:01:44
>>
>> End time: 08-Jan-2011 17:48:03
>>
>> Files Expected: 47,545
>>
>> Files Restored: 47,545
>>
>> Bytes Restored: 13,386,304,625
>>
>> Rate: 4817.0 KB/s
>>
>> FD Errors: 0
>>
>> FD termination status: OK
>>
>> SD termination status: OK
>>
>> Termination: Restore OK
>>
>> I know that it didn't restore it to the remote client because the 
>> folder on the server is 12.5GB and has 45,547 files, which is less
>> than the supposed restore.
>>
>> Any ideas?
>
> Have you looked in /srv/staging/restores on housigma34-fd?

If you top post, it makes it very difficult to follow the story... :)

 > Housigma34-fd is a Windows server so there is no /srv/staging/restores

Well, that is where the job is trying to restore.  Look at the WHERE field
listed above.

 >
 > In addition to that, the restore job has: "client= housigma25-fd", which is
 > a debian server and also the localhost.

Look at the Restore Client in the output above.  It is not restoring where
you think it is.

Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Phil Stracchino
On 01/10/11 14:21, Rodrigo Renie Braga wrote:
> Yes, it does (see the following configuration). One more thing... I was
> using the latest 3.0 version of Bacula and I updated it to 5.0.3
> (Clients and Director). Could this be the problem? Just to make sure,
> I'm creating a fresh install of Bacula 5.0.3 just to see if this problem
> remains...
> 
> Pool {
> Name = pool.tpa.full
> Pool Type = Backup
> Storage = st.tpa
> Volume Use Duration = 28d
> Volume Retention = 90d
> Maximum Volumes = 21
> Recycle = yes
> AutoPrune = yes
> Scratch Pool = scratch.tpa
> RecyclePool = scratch.tpa
> Cleaning Prefix = "CLN"
> }
> 
> Pool {
> Name = pool.tpb.diff
> Pool Type = Backup
> Storage = st.tpb
> Volume Use Duration = 6d
> Volume Retention = 30d
> Maximum Volumes = 4
> Recycle = yes
> AutoPrune = yes
> Scratch Pool = scratch.tpb
> RecyclePool = scratch.tpb
> Cleaning Prefix = "CLN"
> }

That all looks good to me.

One comment here:

> Schedule {
> Name = sch.tpa
> Run = Level=Full Pool=pool.tpa.full 1st sun at 01:00
> Run = Level=Differential FullPool=pool.tpa.full Pool=pool.tpb.diff
> 2nd-5th sun at 01:00
> Run = Level=Incremental FullPool=pool.tpa.full Pool=pool.tpb.inc
> mon-sat at 01:00
> }

Schedule-level Pool overrides have been deprecated because they could
not be made to work reliably in all cases.  There were just too many
cases in which it would do the wrong thing (for instance, when promoting
jobs).  You should be using the Full Pool, Differential Pool,
Incremental Pool directives in your Job or JobDefs resources.

This won't affect the problem you're seeing, though.
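
One way to restructure the example above along those lines; this is a sketch
only, reusing the pool names already shown, and the job name is hypothetical:

Job {
  Name = "client1-backup"
  Pool = pool.tpb.inc                       # default pool
  Full Backup Pool = pool.tpa.full
  Differential Backup Pool = pool.tpb.diff
  Incremental Backup Pool = pool.tpb.inc
  Schedule = sch.tpa
  ...
}

Schedule {
  Name = sch.tpa
  Run = Level=Full 1st sun at 01:00         # no Pool= overrides needed here any more
  Run = Level=Differential 2nd-5th sun at 01:00
  Run = Level=Incremental mon-sat at 01:00
}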


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] compiling static

2011-01-10 Thread Guy
Hi all,

I've been trying to make this work for the last two days..

I have a Thus NAS system which is running Linux (sort of).  I would like to 
compile bacula-sd and vchanger statically so that I can push those two 
binaries onto it and turn it into a dedicated virtual tape store.

I'm using CentOS 5 (fully updated with yum).  I've downloaded bacula-5.0.3 
and am issuing this command:

./configure --without-openssl --disable-build-dird --enable-static-fd 
--enable-static-sd --disable-libtool --with-sqlite3

The configure step seems to run just fine, so I move on to "make"; this works 
away for a time and then fails with this:

Linking bacula-sd ...
/usr/bin/g++   -L../lib -o bacula-sd stored.o ansi_label.o vtape.o 
autochanger.o acquire.o append.o askdir.o authenticate.o block.o butil.o dev.o 
device.o dircmd.o dvd.o ebcdic.o fd_cmds.o job.o label.o lock.o mac.o 
match_bsr.o mount.o parse_bsr.o pythonsd.o read.o read_record.o record.o 
reserve.o scan.o sd_plugins.o spool.o status.o stored_conf.o vol_mgr.o wait.o 
-lacl  \
   -lbacpy -lbaccfg -lbac -lm   -lpthread -ldl   \
  -lcap
../lib/libbac.a(bsys.o): In function `Zinflate(char*, int, char*, int&)':
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:735: undefined reference to 
`inflateInit_'
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:748: undefined reference to 
`inflateEnd'
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:745: undefined reference to 
`inflate'
../lib/libbac.a(bsys.o): In function `Zdeflate(char*, int, char*, int&)':
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:696: undefined reference to 
`deflateInit_'
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:711: undefined reference to 
`deflateEnd'
/home/guy/Downloads/bacula-5.0.3/src/lib/bsys.c:708: undefined reference to 
`deflate'
collect2: ld returned 1 exit status
make[1]: *** [bacula-sd] Error 1
make[1]: Leaving directory `/home/guy/Downloads/bacula-5.0.3/src/stored'


  == Error in /home/guy/Downloads/bacula-5.0.3/src/stored ==


Does anyone have any insight into how I can compile bacula-sd and vchanger so 
I can get them running on my Thus NAS5200Pro?

Cheers,
--Guy


Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Rodrigo Renie Braga
Yes, it does (see the following configuration). One more thing... I was
using the latest 3.0 version of Bacula and I updated it to 5.0.3 (Clients
and Director). Could this be the problem? Just to make sure, I'm creating a
fresh install of Bacula 5.0.3 just to see if this problem remains...

Pool {
Name = pool.tpa.full
Pool Type = Backup
Storage = st.tpa
Volume Use Duration = 28d
Volume Retention = 90d
Maximum Volumes = 21
Recycle = yes
AutoPrune = yes
Scratch Pool = scratch.tpa
RecyclePool = scratch.tpa
Cleaning Prefix = "CLN"
}

Pool {
Name = pool.tpb.diff
Pool Type = Backup
Storage = st.tpb
Volume Use Duration = 6d
Volume Retention = 30d
Maximum Volumes = 4
Recycle = yes
AutoPrune = yes
Scratch Pool = scratch.tpb
RecyclePool = scratch.tpb
Cleaning Prefix = "CLN"
}

Schedule {
Name = sch.tpa
Run = Level=Full Pool=pool.tpa.full 1st sun at 01:00
Run = Level=Differential FullPool=pool.tpa.full Pool=pool.tpb.diff
2nd-5th sun at 01:00
Run = Level=Incremental FullPool=pool.tpa.full Pool=pool.tpb.inc mon-sat
at 01:00
}


2011/1/10 Phil Stracchino 

> On 01/10/11 13:06, Rodrigo Renie Braga wrote:
> > Actually, that's correct, I have one Pool for Full Backups (using
> > storage TPA) and another Pool for Diff Backups (using storage TPB).
> >
> > That's not a correct configuration? Because I have a specific necessity
> > of Volume Retention and Volume Duration for each one of the Differential
> > and Full backups...
>
> That sounds correct.  Do the Pool resources specify the correct Storage
> devices?
>
>
>
> --
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>
>


Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Phil Stracchino
On 01/10/11 13:06, Rodrigo Renie Braga wrote:
> Actually, that's correct, I have one Pool for Full Backups (using
> storage TPA) and another Pool for Diff Backups (using storage TPB).
> 
> That's not a correct configuration? Because I have a specific necessity
> of Volume Retention and Volume Duration for each one of the Differential
> and Full backups...

That sounds correct.  Do the Pool resources specify the correct Storage
devices?



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Can't connect to Remote. - Update

2011-01-10 Thread Martin Simmons
> On Sun, 9 Jan 2011 21:56:55 -0500, Wayne Spivak said:
> 
> After tweaking (and upgrading to 5.0.3) the Bacula-Dir.Conf file a bit (and
> finding that I'm an idiot) I solved my problems with all the clients within
> (on the inside) my firewall.  In fact, I added another client (windows)
> without problems.

What fixed it in the end?


> But the two clients that are outside (Fedora 13 and Fedora 8 boxes), which
> worked under Bacula 5.02 under Fedora 11, don't work and I'm still baffled.
> 
> The error is that a connection from the Remote can't connect to the Storage
> Daemon.  All the files for bacula-fd are the same (except for the fd name).
> 
> I can telnet both ways on 9102 and remote to director/storage server and
> from the remote to the director/storage server on 9101 and 9103.
> 
> Any further tips would be appreciated and on finding the error.

Are you using an FQDN in the director storage config, and can the remote
clients resolve it?
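
The point of the question: during a backup the FD opens its data connection
to the SD using whatever Address the Director hands it from its Storage
resource, so for clients outside the firewall that address must be publicly
resolvable and reachable on 9103.  A minimal sketch, with a hypothetical
name, FQDN and password:

Storage {
  Name = st.main
  Address = backup.example.com     # FQDN the remote FDs must be able to resolve and reach
  SDPort = 9103
  Password = "sd-password"         # must match the Director {} resource in bacula-sd.conf
  Device = FileStorage
  Media Type = File
}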

__Martin



Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Rodrigo Renie Braga
Actually, that's correct: I have one Pool for Full Backups (using storage
TPA) and another Pool for Diff Backups (using storage TPB).

Is that not a correct configuration?  I have specific Volume Retention and
Volume Use Duration requirements for each of the Differential and Full
backups...

BTW, I'm using Bacula 5.0.3...


2011/1/10 Phil Stracchino 

> On 01/10/11 12:21, Rodrigo Renie Braga wrote:
> > Hello List
> >
> > I've been trying to get help from the Bacula IRC Channel, but no success.
> >
> > I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
> > Backup which saves the data on TPA, and every subsequent Differential
> > backup uses the TPB tapes.
> >
> > My problem is when making a Full restore from a Differential backup
> > (i.e., Bacula will join the Differential and Full backup to restore the
> > most updated version of the backup). The Restore Job starts OK getting
> > the tapes from TPB, but when it goes for the tapes of the Full Backup on
> > TPA, it stops with a "wrong volume mounted" error. That's because Bacula
> > tries to get a TPA Volume on the TPB Storage, seems like Bacula doesn't
> > know, at Restore time, that the tapes of the Full Backup are on another
> > Storage.
> >
> > I've "worked around it" by restoring only the Full Backup first and then
> > the subsequent Differential Backups later, on top of the Full Restore,
> > but I don't know if this is the best solution.
> >
> > Any ideas?
>
> Sounds to me like you need your TPA and TPB volumes to be in different
> Pools assigned to different storage devices.
>
>
> --
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>
>
> --
> Gaining the trust of online customers is vital for the success of any
> company
> that requires sensitive data to be transmitted over the Web.   Learn how to
> best implement a security strategy that keeps consumers' information secure
> and instills the confidence they need to proceed with transactions.
> http://p.sf.net/sfu/oracle-sfdevnl
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl ___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Phil Stracchino
On 01/10/11 12:21, Rodrigo Renie Braga wrote:
> Hello List
> 
> I've been trying to get help from the Bacula IRC Channel, but no success.
> 
> I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
> Backup which saves the data on TPA, and every subsequent Differential
> backup uses the TPB tapes.
> 
> My problem is when making a Full restore from a Differential backup
> (i.e., Bacula will join the Differential and Full backup to restore the
> most updated version of the backup). The Restore Job starts OK getting
> the tapes from TPB, but when it goes for the tapes of the Full Backup on
> TPA, it stops with a "wrong volume mounted" error. That's because Bacula
> tries to get a TPA Volume on the TPB Storage, seems like Bacula doesn't
> know, at Restore time, that the tapes of the Full Backup are on another
> Storage.
> 
> I've "worked around it" by restoring only the Full Backup first and then
> the subsequent Differential Backups later, on top of the Full Restore,
> but I don't know if this is the best solution.
> 
> Any ideas?

Sounds to me like you need your TPA and TPB volumes to be in different
Pools assigned to different storage devices.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Restoring to a folder?

2011-01-10 Thread Dan Langille
On 1/10/2011 10:54 AM, Romer Ventura wrote:
> -Original Message-
> From: Dan Langille [mailto:d...@langille.org]
> Sent: Monday, January 10, 2011 10:02 AM
> To: Romer Ventura
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Restoring to a folder?
>
> On 1/10/2011 10:21 AM, Romer Ventura wrote:
>> Hello,
>>
>> I was trying to do a restore, and after I marked all the folders/files I
>> needed, the process started and successfully finished.  However, when I
>> go to the folder where the files are supposed to be, they are not there.
>> The folder is empty.  The client from which Bacula takes this data has
>> not changed, so I know it didn't restore to the remote client.
>>
>> Any ideas?
>>
>> Here is some info:
>>
>> Job {
>>
>> Name = "RestoreFiles"
>>
>> Type = Restore
>>
>> Client=housigma25-fd
>>
>> FileSet="Full Set"
>>
>> Storage = SLDLTv4
>>
>> Pool = Default
>>
>> Messages = Standard
>>
>> Where = /srv/staging/restores
>>
>> }
>>
>> FileSet {
>>
>> Name = "Full Set"
>>
>> Include {
>>
>> Options {
>>
>> signature = MD5
>>
>> }
>>
>> File = /usr/sbin
>>
>> }
>>
>> Exclude {
>>
>> File = /var/lib/bacula
>>
>> File = /nonexistant/path/to/file/archive/dir
>>
>> File = /proc
>>
>> File = /tmp
>>
>> File = /.journal
>>
>> File = /.fsck
>>
>> }
>>
>> }
>>
>> LOG FILE:
>>
>> 08-Jan 17:01 housigma25-dir JobId 177: Start Restore Job
>> RestoreFiles.2011-01-08_17.01.42_05
>>
>> 08-Jan 17:01 housigma25-dir JobId 177: Using Device "DLTv4"
>>
>> 08-Jan 17:01 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
>> slot 7, drive 0" command.
>>
>> 08-Jan 17:02 housigma25-sd JobId 177: 3304 Issuing autochanger "load
>> slot 6, drive 0" command.
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: 3305 Autochanger "load slot 6,
>> drive 0", status is OK.
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: Ready to read from volume "CNH913"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:05 housigma25-sd JobId 177: Forward spacing Volume "CNH913"
>> to file:block 90:0.
>>
>> 08-Jan 17:32 housigma25-sd JobId 177: End of Volume at file 107 on
>> device "DLTv4" (/dev/nst0), Volume "CNH913"
>>
>> 08-Jan 17:34 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
>> slot 6, drive 0" command.
>>
>> 08-Jan 17:35 housigma25-sd JobId 177: 3304 Issuing autochanger "load
>> slot 5, drive 0" command.
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: 3305 Autochanger "load slot 5,
>> drive 0", status is OK.
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: Ready to read from volume "CNH914"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:37 housigma25-sd JobId 177: Forward spacing Volume "CNH914"
>> to file:block 121:0.
>>
>> 08-Jan 17:40 housigma25-sd JobId 177: End of Volume at file 122 on
>> device "DLTv4" (/dev/nst0), Volume "CNH914"
>>
>> 08-Jan 17:41 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
>> slot 5, drive 0" command.
>>
>> 08-Jan 17:42 housigma25-sd JobId 177: 3304 Issuing autochanger "load
>> slot 7, drive 0" command.
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: 3305 Autochanger "load slot 7,
>> drive 0", status is OK.
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: Ready to read from volume "CNH909"
>> on device "DLTv4" (/dev/nst0).
>>
>> 08-Jan 17:45 housigma25-sd JobId 177: Forward spacing Volume "CNH909"
>> to file:block 52:0.
>>
>> 08-Jan 17:48 housigma25-sd JobId 177: End of Volume at file 52 on
>> device "DLTv4" (/dev/nst0), Volume "CNH909"
>>
>> 08-Jan 17:48 housigma25-sd JobId 177: End of all volumes.
>>
>> 08-Jan 17:48 housigma25-dir JobId 177: Bacula housigma25-dir 5.0.2
>> (28Apr10): 08-Jan-2011 17:48:03
>>
>> Build OS: i486-pc-linux-gnu debian 5.0.4
>>
>> JobId: 177
>>
>> Job: RestoreFiles.2011-01-08_17.01.42_05
>>
>> Restore Client: housigma34-fd
>>
>> Start time: 08-Jan-2011 17:01:44
>>
>> End time: 08-Jan-2011 17:48:03
>>
>> Files Expected: 47,545
>>
>> Files Restored: 47,545
>>
>> Bytes Restored: 13,386,304,625
>>
>> Rate: 4817.0 KB/s
>>
>> FD Errors: 0
>>
>> FD termination status: OK
>>
>> SD termination status: OK
>>
>> Termination: Restore OK
>>
>> I know that it didn't restore it to the remote client because the
>> folder on the server is 12.5GB and has 45,547 files, which is less
>> than the supposed restore.
>>
>> Any ideas?
>
> Have you looked in /srv/staging/restores on housigma34-fd?

If you top post, it makes it very difficult to follow the story... :)

 > Housigma34-fd is a Windows server so there is no /srv/staging/restores

Well, that is where the job is trying to restore.  Look at the WHERE 
field listed above.

 >
 > In addition to that, the restore job has: "client= housigma25-fd", which is
 > a debian server and also the localhost.

Look at the Restore Client in the output above.  It is not restoring where 
you think it is.

 > Also, the available space on both servers has not changed, so if the restore
 > happened somewhere, I would notice 13GB less than reported.

When you are running the restore job, you probably want to mod the job 
(as opposed to y/n when prompt

[Bacula-users] Restore single Job from two different Storages

2011-01-10 Thread Rodrigo Renie Braga
Hello List

I've been trying to get help from the Bacula IRC Channel, but no success.

I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
Backup which saves the data on TPA, and every subsequent Differential backup
uses the TPB tapes.

My problem is when making a full restore from a Differential backup (i.e.,
Bacula will join the Differential and Full backups to restore the most
up-to-date version of the data).  The Restore Job starts OK, getting the tapes
from TPB, but when it goes for the tapes of the Full Backup on TPA, it stops
with a "wrong volume mounted" error.  That's because Bacula tries to get a
TPA Volume on the TPB Storage; it seems Bacula doesn't know, at Restore
time, that the tapes of the Full Backup are on another Storage.

I've "worked around it" by restoring only the Full Backup first and then the
subsequent Differential Backups later, on top of the Full Restore, but I
don't know if this is the best solution.

Any ideas?


Re: [Bacula-users] FileSet Question...

2011-01-10 Thread Martin Simmons
> On Sun, 9 Jan 2011 23:28:31 +0100, Kianusch Sayah Karadji said:
> 
> I need to backup (actually to archive) lots of data.  24 filesystems - each
> filesystem containing 2Mio+ files / 700GB.
> 
> I need to save the data only once - since they do not change after they are
> written.
> 
> To have better control over what to backup and not to have a single backup
> run for several days, I thought I'd use File="\|/myScript.sh" in my fileset
> definition - and myScript.sh will return the next directory to backup - the
> first time it should return /Directory001, the next time /Directory002, ...
> (This script works already)
> 
> The question is - will this idea work with bacula? - Will this setup always
> perform Full Backups - (since the Fileset changes on each run) - or will
> Incremental also work?

Fileset changes are calculated from the config file, not the list of files
returned by the "\|" script, so it will do Incremental too.  Don't use the
"accurate" option though, because it will get confused.


> ... or ... are there other/better solutions for this kind of backup?

You could use a separate fileset and job for each directory.

__Martin



Re: [Bacula-users] Restoring to a folder?

2011-01-10 Thread Romer Ventura
Housigma34-fd is a Windows server so there is no /srv/staging/restores

In addition to that, the restore job has: "client= housigma25-fd", which is a
debian server and also the localhost.

Also, the available space on both servers has not changed, so if the restore
happened somewhere, I would notice 13GB less than reported.

Thanks

Romer Ventura


-Original Message-
From: Dan Langille [mailto:d...@langille.org] 
Sent: Monday, January 10, 2011 10:02 AM
To: Romer Ventura
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Restoring to a folder?

On 1/10/2011 10:21 AM, Romer Ventura wrote:
> Hello,
>
> I was trying to do a restore, and after I marked all the folders/files I
> needed, the process started and successfully finished.  However, when I
> go to the folder where the files are supposed to be, they are not there.
> The folder is empty.  The client from which Bacula takes this data has
> not changed, so I know it didn't restore to the remote client.
>
> Any ideas?
>
> Here is some info:
>
> Job {
>
> Name = "RestoreFiles"
>
> Type = Restore
>
> Client=housigma25-fd
>
> FileSet="Full Set"
>
> Storage = SLDLTv4
>
> Pool = Default
>
> Messages = Standard
>
> Where = /srv/staging/restores
>
> }
>
> FileSet {
>
> Name = "Full Set"
>
> Include {
>
> Options {
>
> signature = MD5
>
> }
>
> File = /usr/sbin
>
> }
>
> Exclude {
>
> File = /var/lib/bacula
>
> File = /nonexistant/path/to/file/archive/dir
>
> File = /proc
>
> File = /tmp
>
> File = /.journal
>
> File = /.fsck
>
> }
>
> }
>
> LOG FILE:
>
> 08-Jan 17:01 housigma25-dir JobId 177: Start Restore Job
> RestoreFiles.2011-01-08_17.01.42_05
>
> 08-Jan 17:01 housigma25-dir JobId 177: Using Device "DLTv4"
>
> 08-Jan 17:01 housigma25-sd JobId 177: 3307 Issuing autochanger "unload 
> slot 7, drive 0" command.
>
> 08-Jan 17:02 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
> slot 6, drive 0" command.
>
> 08-Jan 17:05 housigma25-sd JobId 177: 3305 Autochanger "load slot 6, 
> drive 0", status is OK.
>
> 08-Jan 17:05 housigma25-sd JobId 177: Ready to read from volume "CNH913"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:05 housigma25-sd JobId 177: Forward spacing Volume "CNH913" 
> to file:block 90:0.
>
> 08-Jan 17:32 housigma25-sd JobId 177: End of Volume at file 107 on 
> device "DLTv4" (/dev/nst0), Volume "CNH913"
>
> 08-Jan 17:34 housigma25-sd JobId 177: 3307 Issuing autochanger "unload 
> slot 6, drive 0" command.
>
> 08-Jan 17:35 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
> slot 5, drive 0" command.
>
> 08-Jan 17:37 housigma25-sd JobId 177: 3305 Autochanger "load slot 5, 
> drive 0", status is OK.
>
> 08-Jan 17:37 housigma25-sd JobId 177: Ready to read from volume "CNH914"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:37 housigma25-sd JobId 177: Forward spacing Volume "CNH914" 
> to file:block 121:0.
>
> 08-Jan 17:40 housigma25-sd JobId 177: End of Volume at file 122 on 
> device "DLTv4" (/dev/nst0), Volume "CNH914"
>
> 08-Jan 17:41 housigma25-sd JobId 177: 3307 Issuing autochanger "unload 
> slot 5, drive 0" command.
>
> 08-Jan 17:42 housigma25-sd JobId 177: 3304 Issuing autochanger "load 
> slot 7, drive 0" command.
>
> 08-Jan 17:45 housigma25-sd JobId 177: 3305 Autochanger "load slot 7, 
> drive 0", status is OK.
>
> 08-Jan 17:45 housigma25-sd JobId 177: Ready to read from volume "CNH909"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:45 housigma25-sd JobId 177: Forward spacing Volume "CNH909" 
> to file:block 52:0.
>
> 08-Jan 17:48 housigma25-sd JobId 177: End of Volume at file 52 on 
> device "DLTv4" (/dev/nst0), Volume "CNH909"
>
> 08-Jan 17:48 housigma25-sd JobId 177: End of all volumes.
>
> 08-Jan 17:48 housigma25-dir JobId 177: Bacula housigma25-dir 5.0.2
> (28Apr10): 08-Jan-2011 17:48:03
>
> Build OS: i486-pc-linux-gnu debian 5.0.4
>
> JobId: 177
>
> Job: RestoreFiles.2011-01-08_17.01.42_05
>
> Restore Client: housigma34-fd
>
> Start time: 08-Jan-2011 17:01:44
>
> End time: 08-Jan-2011 17:48:03
>
> Files Expected: 47,545
>
> Files Restored: 47,545
>
> Bytes Restored: 13,386,304,625
>
> Rate: 4817.0 KB/s
>
> FD Errors: 0
>
> FD termination status: OK
>
> SD termination status: OK
>
> Termination: Restore OK
>
> I know that it didn't restore it to the remote client because the 
> folder on the server is 12.5GB and has 45,547 files, which is less 
> that the supposed restore.
>
> Any ideas?

Have you looked in /srv/staging/restores on housigma34-fd?

--
Dan Langille - http://langille.org/



Re: [Bacula-users] Ignoring Job Errors

2011-01-10 Thread Dan Langille
On 1/10/2011 5:03 AM, rauch.hol...@googlemail.com wrote:
> Hi,
>
> even when looking through
>
> http://www.bacula.org/5.0.x-manuals/en/main/main/
>
> I haven't found a way to ignore errors in backup jobs (I only want to be
> notified about the errors, but I don't want the entire backup job to be
> canceled).
>
> What's the right option to achieve this behavior and in which configuration
> resource (Client, Job, etc.) can it be used?
>
> (Apologies in case I overlooked something).
>
> Thanks in advance&  kind regards,

What errors?

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Restoring to a folder?

2011-01-10 Thread Dan Langille
On 1/10/2011 10:21 AM, Romer Ventura wrote:
> Hello,
>
> I was trying to do a restore and after I marked all the folder/files I
> needed the process started and successfully finished, however, when I go
> to the folder which where the files are suppoused to be, they are not.
> The folder is empty. The client from where Bacula takes this data has
> not changed so I know it didn’t restore it to the remote client.
>
> Any ideas?
>
> Here is some info:
>
> Job {
>
> Name = "RestoreFiles"
>
> Type = Restore
>
> Client=housigma25-fd
>
> FileSet="Full Set"
>
> Storage = SLDLTv4
>
> Pool = Default
>
> Messages = Standard
>
> Where = /srv/staging/restores
>
> }
>
> FileSet {
>
> Name = "Full Set"
>
> Include {
>
> Options {
>
> signature = MD5
>
> }
>
> File = /usr/sbin
>
> }
>
> Exclude {
>
> File = /var/lib/bacula
>
> File = /nonexistant/path/to/file/archive/dir
>
> File = /proc
>
> File = /tmp
>
> File = /.journal
>
> File = /.fsck
>
> }
>
> }
>
> LOG FILE:
>
> 08-Jan 17:01 housigma25-dir JobId 177: Start Restore Job
> RestoreFiles.2011-01-08_17.01.42_05
>
> 08-Jan 17:01 housigma25-dir JobId 177: Using Device "DLTv4"
>
> 08-Jan 17:01 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
> slot 7, drive 0" command.
>
> 08-Jan 17:02 housigma25-sd JobId 177: 3304 Issuing autochanger "load
> slot 6, drive 0" command.
>
> 08-Jan 17:05 housigma25-sd JobId 177: 3305 Autochanger "load slot 6,
> drive 0", status is OK.
>
> 08-Jan 17:05 housigma25-sd JobId 177: Ready to read from volume "CNH913"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:05 housigma25-sd JobId 177: Forward spacing Volume "CNH913" to
> file:block 90:0.
>
> 08-Jan 17:32 housigma25-sd JobId 177: End of Volume at file 107 on
> device "DLTv4" (/dev/nst0), Volume "CNH913"
>
> 08-Jan 17:34 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
> slot 6, drive 0" command.
>
> 08-Jan 17:35 housigma25-sd JobId 177: 3304 Issuing autochanger "load
> slot 5, drive 0" command.
>
> 08-Jan 17:37 housigma25-sd JobId 177: 3305 Autochanger "load slot 5,
> drive 0", status is OK.
>
> 08-Jan 17:37 housigma25-sd JobId 177: Ready to read from volume "CNH914"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:37 housigma25-sd JobId 177: Forward spacing Volume "CNH914" to
> file:block 121:0.
>
> 08-Jan 17:40 housigma25-sd JobId 177: End of Volume at file 122 on
> device "DLTv4" (/dev/nst0), Volume "CNH914"
>
> 08-Jan 17:41 housigma25-sd JobId 177: 3307 Issuing autochanger "unload
> slot 5, drive 0" command.
>
> 08-Jan 17:42 housigma25-sd JobId 177: 3304 Issuing autochanger "load
> slot 7, drive 0" command.
>
> 08-Jan 17:45 housigma25-sd JobId 177: 3305 Autochanger "load slot 7,
> drive 0", status is OK.
>
> 08-Jan 17:45 housigma25-sd JobId 177: Ready to read from volume "CNH909"
> on device "DLTv4" (/dev/nst0).
>
> 08-Jan 17:45 housigma25-sd JobId 177: Forward spacing Volume "CNH909" to
> file:block 52:0.
>
> 08-Jan 17:48 housigma25-sd JobId 177: End of Volume at file 52 on device
> "DLTv4" (/dev/nst0), Volume "CNH909"
>
> 08-Jan 17:48 housigma25-sd JobId 177: End of all volumes.
>
> 08-Jan 17:48 housigma25-dir JobId 177: Bacula housigma25-dir 5.0.2
> (28Apr10): 08-Jan-2011 17:48:03
>
> Build OS: i486-pc-linux-gnu debian 5.0.4
>
> JobId: 177
>
> Job: RestoreFiles.2011-01-08_17.01.42_05
>
> Restore Client: housigma34-fd
>
> Start time: 08-Jan-2011 17:01:44
>
> End time: 08-Jan-2011 17:48:03
>
> Files Expected: 47,545
>
> Files Restored: 47,545
>
> Bytes Restored: 13,386,304,625
>
> Rate: 4817.0 KB/s
>
> FD Errors: 0
>
> FD termination status: OK
>
> SD termination status: OK
>
> Termination: Restore OK
>
> I know that it didn’t restore it to the remote client because the folder
> on the server is 12.5GB and has 45,547 files, which is less that the
> supposed restore.
>
> Any ideas?

Have you looked in /srv/staging/restores on housigma34-fd?

-- 
Dan Langille - http://langille.org/



[Bacula-users] Restoring to a folder?

2011-01-10 Thread Romer Ventura
Hello,

 

I was trying to do a restore, and after I marked all the folders/files I
needed, the process started and finished successfully. However, when I go to
the folder where the files are supposed to be, they are not there. The
folder is empty. The client from which Bacula takes this data has not
changed, so I know it didn't restore them to the remote client.

 

Any ideas?

Here is some info:

Job {

  Name = "RestoreFiles"

  Type = Restore

  Client=housigma25-fd

  FileSet="Full Set"

  Storage = SLDLTv4

  Pool = Default

  Messages = Standard

  Where = /srv/staging/restores

}

FileSet {

  Name = "Full Set"

  Include {

Options {

  signature = MD5

}

File = /usr/sbin

  }

  Exclude {

File = /var/lib/bacula

File = /nonexistant/path/to/file/archive/dir

File = /proc

File = /tmp

File = /.journal

File = /.fsck

  }

}

 

LOG FILE:

08-Jan 17:01 housigma25-dir JobId 177: Start Restore Job
RestoreFiles.2011-01-08_17.01.42_05

08-Jan 17:01 housigma25-dir JobId 177: Using Device "DLTv4"

08-Jan 17:01 housigma25-sd JobId 177: 3307 Issuing autochanger "unload slot
7, drive 0" command.

08-Jan 17:02 housigma25-sd JobId 177: 3304 Issuing autochanger "load slot 6,
drive 0" command.

08-Jan 17:05 housigma25-sd JobId 177: 3305 Autochanger "load slot 6, drive
0", status is OK.

08-Jan 17:05 housigma25-sd JobId 177: Ready to read from volume "CNH913" on
device "DLTv4" (/dev/nst0).

08-Jan 17:05 housigma25-sd JobId 177: Forward spacing Volume "CNH913" to
file:block 90:0.

08-Jan 17:32 housigma25-sd JobId 177: End of Volume at file 107 on device
"DLTv4" (/dev/nst0), Volume "CNH913"

08-Jan 17:34 housigma25-sd JobId 177: 3307 Issuing autochanger "unload slot
6, drive 0" command.

08-Jan 17:35 housigma25-sd JobId 177: 3304 Issuing autochanger "load slot 5,
drive 0" command.

08-Jan 17:37 housigma25-sd JobId 177: 3305 Autochanger "load slot 5, drive
0", status is OK.

08-Jan 17:37 housigma25-sd JobId 177: Ready to read from volume "CNH914" on
device "DLTv4" (/dev/nst0).

08-Jan 17:37 housigma25-sd JobId 177: Forward spacing Volume "CNH914" to
file:block 121:0.

08-Jan 17:40 housigma25-sd JobId 177: End of Volume at file 122 on device
"DLTv4" (/dev/nst0), Volume "CNH914"

08-Jan 17:41 housigma25-sd JobId 177: 3307 Issuing autochanger "unload slot
5, drive 0" command.

08-Jan 17:42 housigma25-sd JobId 177: 3304 Issuing autochanger "load slot 7,
drive 0" command.

08-Jan 17:45 housigma25-sd JobId 177: 3305 Autochanger "load slot 7, drive
0", status is OK.

08-Jan 17:45 housigma25-sd JobId 177: Ready to read from volume "CNH909" on
device "DLTv4" (/dev/nst0).

08-Jan 17:45 housigma25-sd JobId 177: Forward spacing Volume "CNH909" to
file:block 52:0.

08-Jan 17:48 housigma25-sd JobId 177: End of Volume at file 52 on device
"DLTv4" (/dev/nst0), Volume "CNH909"

08-Jan 17:48 housigma25-sd JobId 177: End of all volumes.

08-Jan 17:48 housigma25-dir JobId 177: Bacula housigma25-dir 5.0.2
(28Apr10): 08-Jan-2011 17:48:03

  Build OS:   i486-pc-linux-gnu debian 5.0.4

  JobId:  177

  Job:RestoreFiles.2011-01-08_17.01.42_05

  Restore Client: housigma34-fd

  Start time: 08-Jan-2011 17:01:44

  End time:   08-Jan-2011 17:48:03

  Files Expected: 47,545

  Files Restored: 47,545

  Bytes Restored: 13,386,304,625

  Rate:   4817.0 KB/s

  FD Errors:  0

  FD termination status:  OK

  SD termination status:  OK

  Termination:Restore OK

 

I know that it didn't restore to the remote client because the folder on
the server is 12.5GB and has 45,547 files, which is less than the supposed
restore.

 

Any ideas?

 

Thanks



Romer Ventura

 



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Guy
Indeed it was, and for me that is the right thing. It's all in Subversion,
which is itself backed up.

---Guy
(via iPhone)

On 10 Jan 2011, at 15:18, Dan Langille  wrote:

> On 1/10/2011 10:03 AM, Guy wrote:
>> On 10 Jan 2011, at 14:42, Christian Manal wrote:
>> 
>>> Am 10.01.2011 15:04, schrieb Guy:
 Hi all,
 
 I would like to exclude any folder on a client that is under subversion.  
 All Directories which are maintained by subversion have a ".svn" directory 
 structure under them.
 
 Can any clever people create a FileSet exclude which will skip any 
 directory which contains a .svn folder?
 
 Cheers,
 ---Guy
>>> 
>>> 
>>> Hi,
>>> 
>>> there is a fileset option called "ExcludeDirContaining" (look at the
>>> docs for more info), which basically excludes all directories and their
>>> children that contain a certain file. I don't know if that also works
>>> with directory names, though.
>>> 
>>> But if it doesn't, you could alway run something like
>>> 
>>>   find / -type d -name .svn -exec touch {}/../.excludeme \;
>>> 
>>> as "ClientRunBeforeJob".
> 
> > Hi,
> >
> > The "ExcludeDirContaining" worked!.. the .svn is a dir.  It seems the 
> > documentation implies that it's a filename-string but works when it's a DIR 
> > too :)
> 
> I think you will find that the directory containing .svn is also excluded 
> from backup.
> 
> -- 
> Dan Langille - http://langille.org/



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Phil Stracchino
On 01/10/11 10:18, Dan Langille wrote:
> On 1/10/2011 10:03 AM, Guy wrote:
>  > The "ExcludeDirContaining" worked!.. the .svn is a dir.  It seems the 
> documentation implies that it's a filename-string but works when it's a 
> DIR too :)
> 
> I think you will find that the directory containing .svn is also 
> excluded from backup.

Yes, that's what he wanted:  "Any directory under SVN version control."

-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Dan Langille
On 1/10/2011 10:03 AM, Guy wrote:
> On 10 Jan 2011, at 14:42, Christian Manal wrote:
>
>> Am 10.01.2011 15:04, schrieb Guy:
>>> Hi all,
>>>
>>> I would like to exclude any folder on a client that is under subversion.  
>>> All Directories which are maintained by subversion have a ".svn" directory 
>>> structure under them.
>>>
>>> Can any clever people create a FileSet exclude which will skip any 
>>> directory which contains a .svn folder?
>>>
>>> Cheers,
>>> ---Guy
>>
>>
>> Hi,
>>
>> there is a fileset option called "ExcludeDirContaining" (look at the
>> docs for more info), which basically excludes all directories and their
>> children that contain a certain file. I don't know if that also works
>> with directory names, though.
>>
>> But if it doesn't, you could alway run something like
>>
>>find / -type d -name .svn -exec touch {}/../.excludeme \;
>>
>> as "ClientRunBeforeJob".

 > Hi,
 >
 > The "ExcludeDirContaining" worked!.. the .svn is a dir.  It seems the 
documentation implies that it's a filename-string but works when it's a 
DIR too :)

I think you will find that the directory containing .svn is also 
excluded from backup.

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Jari Fredriksson
On 10.1.2011 17:03, Guy wrote:
> Hi,
> 
> The "ExcludeDirContaining" worked!.. the .svn is a dir.  It seems the 
> documentation implies that it's a filename-string but works when it's a DIR 
> too :)
> 

Nice thing in *nix is that "everything is a file" ;)

-- 

You will gain money by a speculation or lottery.





Re: [Bacula-users] Fwd: Extremly slow backup

2011-01-10 Thread Phil Stracchino
On 01/10/11 09:40, Marcin Krol wrote:
>> But ...  xfs is optimized for long streaming reads and writes.
> 
> The only reliable alternative for me is ext3 and its performance is 
> unacceptable when handling directories with lots of small files.

Have you tried jfs?


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Guy
Hi,

The "ExcludeDirContaining" worked!.. the .svn is a dir.  It seems the 
documentation implies that it's a filename-string but works when it's a DIR too 
:)

Thanks
---Guy

On 10 Jan 2011, at 14:42, Christian Manal wrote:

> Am 10.01.2011 15:04, schrieb Guy:
>> Hi all,
>> 
>> I would like to exclude any folder on a client that is under subversion.  
>> All Directories which are maintained by subversion have a ".svn" directory 
>> structure under them.
>> 
>> Can any clever people create a FileSet exclude which will skip any directory 
>> which contains a .svn folder?
>> 
>> Cheers,
>> ---Guy
> 
> 
> Hi,
> 
> there is a fileset option called "ExcludeDirContaining" (look at the
> docs for more info), which basically excludes all directories and their
> children that contain a certain file. I don't know if that also works
> with directory names, though.
> 
> But if it doesn't, you could alway run something like
> 
>   find / -type d -name .svn -exec touch {}/../.excludeme \;
> 
> as "ClientRunBeforeJob".
> 
> 
> Regards,
> Christian Manal
> 


Re: [Bacula-users] Fwd: Extremly slow backup

2011-01-10 Thread Marcin Krol
> Additionally, if the emailserver is using mdir format there are
> thousands (millions?) of tiny files and there's a fixed overhead in
> opening each file no matter what its size is - that results in slow
> speeds when handling lots of small files.

I just sent a small test job from my mail server: 15 GB in 1 directory
and 5 files. After 30 minutes it had transferred just 2 GB to the spool, with
an average transfer rate of 1.6 MB/s. So the slowness is not caused by dealing
with lots of files.

I'm thinking that maybe the CentOS build of the Bacula client is somehow
broken. I used the .src.rpm downloaded from bacula.org to build my
packages. OTOH I'm using exactly the same packages on two more machines and
they work fine.

M.



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Christian Manal
Am 10.01.2011 15:04, schrieb Guy:
> Hi all,
> 
> I would like to exclude any folder on a client that is under subversion.  All 
> Directories which are maintained by subversion have a ".svn" directory 
> structure under them.
> 
> Can any clever people create a FileSet exclude which will skip any directory 
> which contains a .svn folder?
> 
> Cheers,
> ---Guy


Hi,

there is a fileset option called "ExcludeDirContaining" (look at the
docs for more info), which basically excludes all directories and their
children that contain a certain file. I don't know if that also works
with directory names, though.

But if it doesn't, you could always run something like

   find / -type d -name .svn -exec touch {}/../.excludeme \;

as "ClientRunBeforeJob".


Regards,
Christian Manal
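
A minimal sketch of the ExcludeDirContaining suggestion, assuming the
checkouts live somewhere under /home (the directive goes inside the Options
block of the Include section):

FileSet {
  Name = "no-svn-checkouts"
  Include {
    Options {
      signature = MD5
      # skip any directory (and everything below it) that contains a .svn entry
      Exclude Dir Containing = ".svn"
    }
    File = /home
  }
}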



Re: [Bacula-users] Fwd: Extremly slow backup

2011-01-10 Thread Marcin Krol
> But ...  xfs is optimized for long streaming reads and writes.

The only reliable alternative for me is ext3 and its performance is 
unacceptable when handling directories with lots of small files.

M.



Re: [Bacula-users] Slow backup even with a dedicated line: solved!

2011-01-10 Thread Oliver Hoffmann
> On Mon, 10 Jan 2011, Oliver Hoffmann wrote:
> 
> > I did some tests with different gzip levels and with no compression
> > at all. It makes a difference but not as expected. Without
> > compression I still have a rate of only 11346.1 KB/s. Anything else
> > I should try?
> 
> Are you sure the cross-over connection is operating at 1Gbps?  Are
> you sure that route interface is being used?  It just seems
> coincidental that you're still being capped to almost exactly 100Mbps.
> 
> Gavin
> 

As said before, I did some tests with ftp and scp; those looked reasonable.
Oops, got it. The communication between the FD and the Director was
correct, but the FD-to-SD traffic still went over the slow 100Mbit line.
Now I have a rate of ca. 88 kb/s :-)
Thanks for pointing me in the right direction!

Cheers,

Oliver
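
The detail that mattered here, for anyone hitting the same thing: the FD
connects to whatever Address is given in the Director's Storage resource, so
that address has to point at the fast interface. A sketch with a placeholder
address on the cross-over link:

Storage {
  Name = File
  # IP of the SD on the gigabit/cross-over link, not the 100Mbit management LAN
  Address = 192.168.100.10
  SDPort = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}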






Re: [Bacula-users] Multiple device SD configuration

2011-01-10 Thread Phil Stracchino
On 01/10/11 01:56, Silver Salonen wrote:
> On Monday 10 January 2011 03:14:22 Phil Stracchino wrote:
>> But then there isn't an SD named babylon4-sd2 to connect to.
>>
>> What's the correct way to configure this?
> 
> The name of the Storage resource in Director configuration does not matter to 
> SD, so you can easily use "babylon4-sd2".

Well well, learn something new every day.  "The name of the Storage
resource in the Director doesn't actually have to match the name of the
Storage daemon it's on."
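
In other words, a Director-side Storage resource like the following is fine
even though the storage daemon itself is configured under a different Name
(the address, device, media type and password here are placeholders):

Storage {
  # Director-side label only; it does not have to match the SD's own Name
  Name = babylon4-sd2
  Address = babylon4.example.org
  SDPort = 9103
  Password = "storage-password"
  Device = FileChgr2
  Media Type = File2
}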

And with that, I have VirtualFulls working, although the amount of time
that the Director is unresponsive while starting one is a little
disconcerting.  A little more Pool tuning yet to do...


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] Excluding subversion folders

2011-01-10 Thread Guy
Hi all,

I would like to exclude any folder on a client that is under subversion.  All 
Directories which are maintained by subversion have a ".svn" directory 
structure under them.

Can any clever people create a FileSet exclude which will skip any directory 
which contains a .svn folder?

Cheers,
---Guy


[Bacula-users] Recommended File set template for windows 2003? - and a Base Job Question

2011-01-10 Thread Mister IT Guru
I've just added up the amount of space required by running estimates on
all my Windows jobs. I was a touch shocked when it totaled up to just
over 1TB. This is clearly too much data to synchronise offsite, so
I am seriously considering how to reduce the footprint of all my
servers, because it's pointless to use all my bandwidth for
backups.


Now, using a portable HD would be fine, and I surely would not mind a
weekly round trip to the other side of the planet, so long as I flew
with enough leg room and bandwidth, and could take my GF with me; that would
be great. (So I'm basically saying, it's not fine to use a portable
drive!)


I've come to two conclusions: either I have to tighten up my file sets,
or make use of Base Jobs.


Currently my file sets are

FileSet {
  Name = CDE-Windows
  Include {
    File = C:/
    File = D:/
    File = E:/
    Options {
    }
  }
}


So, I'm aware that I'm, umm, overdoing it at the moment, which is why I
looked at Base Jobs. It would be great to no longer back up the OS,
or have any of the OS files in my backup, but I worry about what will happen
when my base job expires: won't that make all the jobs impossible to
restore? Any clues, hints, and tips are most welcome.
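
On the file-set side, a hedged sketch of one way to tighten the Windows set
before resorting to Base Jobs; the excludes below are the usual suspects from
the stock Windows examples and will need adjusting to the actual servers:

FileSet {
  Name = CDE-Windows-Trimmed
  Enable VSS = yes
  Include {
    Options {
      # anything matching these patterns is skipped
      Exclude = yes
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildFile = "[A-Z]:/hiberfil.sys"
      WildDir  = "[A-Z]:/System Volume Information"
      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/Windows/Temp"
    }
    Options {
      # everything else is checksummed and compressed
      signature = MD5
      compression = GZIP
      IgnoreCase = yes
    }
    File = C:/
    File = D:/
    File = E:/
  }
}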



--
The Solo System Admin
http://solosysad.blogspot.com/
--
Mister IT Guru At Gmx Dot Com



Re: [Bacula-users] Retention per Job or Fileset?

2011-01-10 Thread Thomas Mueller
Am Mon, 10 Jan 2011 12:59:04 +0100 schrieb Michael Patzer:

> Hi all,
> 
> i've two jobs for one client, going to the same pool - and couldn't
> change that.
> 
> the first job has a retention of 22 days, but the second job needs a
> retention of 4 days.
> 
> any ideas how i could setup such a construct without defining the same
> client twice,
> or using a different pool?

why not use 2 pools?



- Thomas
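
A rough sketch of the two-pool idea, treating "retention" here as volume
retention (pool names are placeholders; each job then simply points at its
own pool):

Pool {
  Name = Keep22d
  Pool Type = Backup
  Volume Retention = 22 days
  AutoPrune = yes
  Recycle = yes
}

Pool {
  Name = Keep4d
  Pool Type = Backup
  Volume Retention = 4 days
  AutoPrune = yes
  Recycle = yes
}

# in the two Job resources:
#   Pool = Keep22d     (the 22-day job)
#   Pool = Keep4d      (the 4-day job)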




[Bacula-users] Retention per Job or Fileset?

2011-01-10 Thread Michael Patzer
Hi all,

I have two jobs for one client, going to the same pool, and I can't
change that.

The first job has a retention of 22 days, but the second job needs a
retention of 4 days.

Any ideas how I could set up such a construct without defining the same
client twice or using a different pool?

I was thinking about something like "delete all jobs for this client older
than 4 days" as a RunAfterJob, but I don't know how to build this, because it
seems only possible to delete JobIds.

thx
michael






Re: [Bacula-users] Partition information and boot records

2011-01-10 Thread Martin Simmons
> On Sat, 8 Jan 2011 08:43:51 -0500, John Drescher said:
> 
> > Can bacula store this information? I'm thinking of how to do recoveries,
> > and I'm wondering how i'd restore to bare metal, if i didn't have
> > partition info, and the mbr to hand seperatly from the backup?
> >
> > Any thoughts?
> >
> 
> sfdisk can store the partition info this. dd can create a backup of
> the mbr. You can create a run before job that calls both of these.

Beware that the disk used for the recovery might not be the same size as the
original.

It might be better to recover by reinstalling the OS in the normal way and
then restore the backup.

__Martin
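
For completeness, a hedged sketch of the run-before-job idea quoted above;
the disk name and target paths are assumptions, and the dump files should sit
inside a backed-up FileSet so they end up on the volumes too:

Job {
  Name = "client1-backup"
  # ... the usual Type/Client/FileSet/Storage/Pool/Schedule directives ...
  # wrapped in sh -c so the output redirection works
  ClientRunBeforeJob = "/bin/sh -c 'sfdisk -d /dev/sda > /var/lib/bacula/sda.partitions && dd if=/dev/sda of=/var/lib/bacula/sda.mbr bs=512 count=1'"
}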



Re: [Bacula-users] Fwd: Extremly slow backup

2011-01-10 Thread Phil Stracchino
On 01/10/11 02:49, Marcin Krol wrote:
>> Out of interest, which filesystem are you using on this volume?
> 
> xfs, as always when dealing with lots of small files.

But ...  xfs is optimized for long streaming reads and writes.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-10 Thread Gavin McCullagh
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:

> I did some tests with different gzip levels and with no compression at 
> all. It makes a difference but not as expected. Without compression I 
> still have a rate of only 11346.1 KB/s. Anything else I should try?

Are you sure the cross-over connection is operating at 1Gbps?  Are you sure
that route interface is being used?  It just seems coincidental that you're
still being capped to almost exactly 100Mbps.

Gavin
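
One quick way to rule the link in or out, assuming iperf is installed on both
machines: run it in server mode on the storage host and point the client end
at the cross-over address (addresses are placeholders):

  # on the SD host
  iperf -s

  # on the FD host, using the cross-over interface's address
  iperf -c 192.168.100.10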




[Bacula-users] Ignoring Job Errors

2011-01-10 Thread rauch . holger
Hi,

even when looking through

http://www.bacula.org/5.0.x-manuals/en/main/main/

I haven't found a way to ignore errors in backup jobs (I only want to be
notified about the errors, but I don't want the entire backup job to be
canceled).

What's the right option to achieve this behavior and in which configuration
resource (Client, Job, etc.) can it be used?

(Apologies in case I overlooked something).

Thanks in advance & kind regards,

   Holger




Re: [Bacula-users] Bacula configration for FC LTO-5 tape drive

2011-01-10 Thread Arunav Mandal
Hi,

I am running a sample job and I only get 1.5TB per LTO5 tape, so that means
hardware compression is not working. But when I check
/sys/class/scsi_tape/nst0/default_compression it says 1, with
default_blksize=-1 and default_density=-1. Am I missing something with regard
to compression? Also, I have 5 test tapes labeled LTO5-001 to LTO5-005, and
Bacula picked up LTO5-005 after LTO5-001; how can I force Bacula to use the
tapes in serial order rather than random order? All the tapes
were in VolStatus Append to start with.

Thanks in advance,

Arunav. 



-Original Message-
From: Phil Stracchino [mailto:ala...@metrocast.net] 
Sent: Wednesday, January 05, 2011 11:35 PM
To: Arunav Mandal
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula configration for FC LTO-5 tape drive

On 01/05/11 17:31, Arunav Mandal wrote:
> Is deduplication possible in Bacula.

Not yet.  :)


(Well, OK, not deduplication as such.  Yet.  But look into the Base Jobs
feature.  It *MAY* do what you want.)


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.

