Re: [Bacula-users] prevent bacula to backup recently restored files

2007-10-24 Thread Michael Short
On 10/24/07, Martin Vogt <[EMAIL PROTECTED]> wrote:
> how can I prevent Bacula from re-writing files to tape after I've recently
> restored them to the same location? A similar problem arises when I've moved
> them to a new (logical) volume which is mounted under the same mount point
> afterwards. No files are changed, but the complete tree is backed up again
> when the next backup job runs. I'm pretty sure it's ctime/mtime/atime
> related, but what's a safe way to back up only changed files? I can provide
> full "stat" output for files in the trees in question, but a first look via
> "ls" shows the correct "old" dates.

Just make sure that the files keep an older modification (and change) time
and Bacula will skip them on the next incremental backup.
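One knob worth checking (an assumption to verify against the 1.38 FileSet documentation, not a confirmed fix): Bacula's incremental decision looks at st_mtime and, by default, st_ctime, and a restore or a volume move resets ctime even when mtime is preserved. The FileSet Options appear to support an `mtimeonly` directive that makes Bacula ignore ctime, along these lines:

```
FileSet {
  Name = "restored-data"      # hypothetical FileSet name
  Include {
    Options {
      signature = MD5
      mtimeonly = yes         # compare only st_mtime, ignore st_ctime
    }
    File = /data              # placeholder path
  }
}
```

Use it knowingly: with mtimeonly, a file whose mtime is deliberately set back by an application would be missed on incrementals.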

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] wx-console on Windows won't talk to director on Linux (bug?)

2007-10-24 Thread David Lee Lambert
On Wednesday 24 October 2007 15:40:36 Arno Lehmann wrote:
> 24.10.2007 15:42, David L. Lambert wrote:
> > I’ve been running Bacula [...].  However, when I run the wx-console on
> > Windows and point it at the director running on Ubuntu, I get errors
> > like the following:
> >
> > .helpautodisplay: is an invalid command.
> >
> > #.messages
> >
> > : is an invalid command.
> >
> > #.help: is an invalid command.
>
> Can you use bwx-console from a linux machine to connect to that DIR
> without these issues?

I got the same errors from two different Linux boxes,  with the same 
configuration file.

However, I figured out how to fix it. I had no "CommandACL = " line in 
the "Console" section for that client in the director configuration file. 
After adding "CommandACL = *all*", it works.
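For reference, a minimal sketch of the matching Console resource in bacula-dir.conf (the name and password are placeholders, not taken from the poster's setup):

```
Console {
  Name = "wx-console"
  Password = "console-secret"
  CommandACL = *all*      # or a restricted list of allowed commands
}
```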

Thanks.

-- 
David L. Lambert
  Software Developer,  Precision Motor Transport Group, LLC
  Work phone 517-349-3011 x215
  Cell phone 586-873-8813



Re: [Bacula-users] Job started twice

2007-10-24 Thread Michael Short
> ... and it looks completely sane to me. No fancy schedules, no run=
> directives, nothing uncommon in it. But what is in the secnet-def
> JobDefaults resource?

I won't be able to run the test until later; this is my JobDefs resource
for reference:

JobDefs {
  Name = "secnet-def"
  Type = Backup
  Level = Full
  FileSet = "secnet-def"
  Storage = "File"
  Messages = "Standard"
  Priority = 10
}

The FileSet "secnet-def" is overridden by the client configuration.
Tonight I will leave the director in debugging mode for the night and
see if the jobs are interacting. I scheduled the backup to run on its
own and it seemed to work fine.


Sincerely,
-Michael



Re: [Bacula-users] wx-console on Windows won't talk to director on Linux (bug?)

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 15:42, David L. Lambert wrote:
> 
> 
>  
> 
> I’ve been running Bacula for several months to back up several Linux 
> servers to disk on one of them.  I also installed Bacula on a Windows 
> workstation, and was able to run backup jobs from it (that is, pulling 
> data from the file-daemon there).  However, when I run the wx-console on 
> Windows and point it at the director running on Ubuntu, I get errors 
> like the following:
> 
>  
> 
> Welcome to bacula wx-console 2.0.3 (06 March 2007)!
> 
> Using this configuration file: C:\Documents and Settings\All 
> Users\Application Data\Bacula\wx-console.conf
> 
> Connecting...
> 
> Connected
> 
> 1000 OK: IT-dir Version: 2.0.3 (06 March 2007)

Hmm... no version mismatch here (by the way, an upgrade to 2.2.4 or 
above is strongly recommended...)

Can you use bwx-console from a linux machine to connect to that DIR 
without these issues?

Arno


> .helpautodisplay: is an invalid command.
> 
> #.messages
> 
> : is an invalid command.
> 
> #.help: is an invalid command.
> 
> .messages
> 
> : is an invalid command.
> 
> .messages
> 
> : is an invalid command.
> 
> .messages
> 
> : is an invalid command.
> 
> .messages
> 
> : is an invalid command.
> 
>  
> 
> When I go into the “restore” screen, the drop-down boxes for client, 
> etc., don’t have suitably-populated lists; instead, they say stuff like 
> “.clients?: is an invalid command” where the “?” is actually a little 
> square box.  My guess is that the console is sending lines terminated by 
> “\r\n”, and the director is only stripping the “\n” from each line 
> before parsing it. 
> 
>  
> 
>  
> 
>  
> 
> -- 
> David Lee Lambert
> Software Developer, Precision Motor Transport Group, LLC
> 517-349-3011 x223 (work) … 586-873-8813 (cell)
> 
>  
> 
> 
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Job started twice

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 15:06, Michael Short wrote:
> Here is the configuration for the client, it is the same as every other.

... and it looks completely sane to me. No fancy schedules, no run= 
directives, nothing uncommon in it. But what is in the secnet-def 
JobDefaults resource?

Other than that, I can't see anything here.

Probably time to run with debug output...

Arno

> #DIRECTOR
> Client {
>   Name = "sv27"
>   Address = 10.123.0.25
>   FDPort = 1
>   Catalog = "MyCatalog"
>   Password = ""
>   AutoPrune = no
> }
> Job {
>   Name = "sv27"
>   Client = "sv27"
>   JobDefs = "secnet-def"
>   Write Bootstrap = "/home/bacula/sv27.bsr"
>   Schedule = "sv27"
>   Storage = "sv27"
>   Pool = "sv27"
>   Fileset = "sv27"
>   Enabled = "yes"
> }
> Pool {
>   Name = "sv27"
>   Pool Type = Backup
>   Recycle = no
>   AutoPrune = no
>   LabelFormat = "sv27"
>   UseVolumeOnce = yes
> }
> Storage {
>   Name = "sv27"
>   Address = 10.123.0.1
>   SDPort = 10001
>   Password = ""
>   Media Type = File
>   Device = "sv27"
> }
> Storage {
>   Name = "sv27-onsite"
>   Address = 10.123.0.25
>   SDPort = 10001
>   Password = ""
>   Media Type = File
>   Device = "sv27"
> }
> FileSet {
>   Name = "sv27"
>   Ignore FileSet Changes = yes
>   Enable VSS = yes
>   Include {
> Options {
>   signature = MD5
>   compression = GZIP
>   sparse = yes
> }
> File = "c:/"
> File = "d:/"
>   }
> }
> Schedule {
>   Name = "sv27"
>   Run = Level=Incremental sun-sat at 18:00
> }
> 
> 
> #STORAGE DAEMON
> Device {
>   Name = "sv27"
>   Media Type = File
>   Archive Device = "/home/bacula/storage/d1"
>   LabelMedia = yes; # Automatically label new volumes
>   Random Access = Yes; # Filesystem environment
>   AutomaticMount = yes; # Filesystem is always available
>   RemovableMedia = no; # A filesystem is NOT removable
>   AlwaysOpen = no; # Not important for filesystem usage
> }
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Problems with 'Automatic Volume Labeling'

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 13:15, Rich wrote:
> On 2007.10.23. 22:02, Arno Lehmann wrote:
> ...
>> That requirement, by the way, is not very problematic. The sample 
>> given in the manual should almost work out of the box, and Python is 
>> easier to learn than Bacula's variable substitution language :-)
> 
> i do not agree with that ;)

Feel free to... I'm not a person to try to persuade anyone :-)

>>>> This is described in the manual, for example 
>>>> http://www.bacula.org/dev-manual/Python_Scripting.html#SECTION00356
>>> ...
> 
> i'm now trying to understand at least something in all this...
> 
> 1. the example has a section that is not simply stating noop, but is 
> preceded by it :
> 
>def JobInit(self, job):
>   noop = 1
>   if (job.JobId < 2):
>  startid = job.run("run kernsave")
>  job.JobReport = "Python started new Job: jobid=%d\n" % startid
>   print "name=%s version=%s conf=%s working=%s" % (bacula.Name, 
> bacula.Version, bacula.ConfigFile, bacula.WorkingDir)
> 
> manual says "If you do not want a particular event, simply replace the 
> existing code with a noop = 1."

Well... the manual is imperfect here.

The key thing is that, in Python, each method needs a body, even if 
you don't want it to take any action. So the construct "noop = 1" is a 
more or less standard way of creating a method without any 
functionality.
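In other words (a minimal illustration, not the manual's exact code; the method names mirror its example), an event method you want inert still needs a body, and `noop = 1` — or the idiomatic `pass` — supplies one:

```python
class JobEvents(object):
    """Sketch of do-nothing Bacula event handlers."""

    def JobEnd(self, job):
        # A Python method body cannot be empty; "noop = 1" is a
        # conventional do-nothing statement.
        noop = 1

    def VolumePurged(self, job):
        # Equivalent, and the idiomatic way to spell an empty body.
        pass
```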

> what does this section do then ? is preceding it with noop = 1 also 
> disabling it ?

No. To disable it, you'd have to remove the actual code (which only 
prints a message for JobId=1, by the way).

> 2. the volume label itself.
> i guess i should leave at least parts of "def JobStart", right ?
> it creates JobEvents class, which in turn includes what seems to be the 
> correct section - "def NewVolume".
> 
> i suppose this line should be modified :
> Vol = "TestA-%d" % numvol

In NewVolume, a method of the class JobEvents, yes.
Here the actual volume name is created.

> later two other lines have same string :
> 
> job.JobReport = "Exists=%d TestA-%d" % (job.DoesVolumeExist(Vol), numvol)
> job.VolumeName="TestA-%d" % numvol
> 
> must i replace all of these strings ?

No.

> if yes, can't they just be 
> referenced from the first string, "Vol" ?

Yes.

> now, as for constructing the volume label string... it seems that most 
> variables map to something "job." - like job.Pool, job.Job, job.Level etc.
> 
> how would i use these variables to define the label ?
> 
> i am trying to imitate a volume label like
> "${Pool}_${Job}-${Level}-${Year}.${Month:p/2/0/r}.${Day:p/2/0/r}-${Hour:p/2/0/r}.${Minute:p/2/0/r}"
> 
> which would also require variables like year, month etc. how can i 
> include those ?

By using Python's standard library. The datetime module offers date 
objects, which can be used to obtain the current month, for example. 
The strftime method can be used to get the year, month and day nicely 
formatted as a string.

The pool and job name can be accessed using Bacula's job methods, 
similar to the example.

In a real-world script, you should ensure you create unique names 
using the DoesVolumeExist method, by the way.
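A sketch of how the label from the variable-substitution expression could be rebuilt in Python (the helper name is made up; the `job.Pool` / `job.Job` / `job.Level` attributes are the ones mentioned in this thread and should be verified against your Bacula version):

```python
from datetime import datetime

def make_volume_name(pool, job_name, level, now=None):
    """Imitate ${Pool}_${Job}-${Level}-${Year}.${Month}.${Day}-${Hour}.${Minute}."""
    now = now or datetime.now()
    return "%s_%s-%s-%s" % (pool, job_name, level,
                            now.strftime("%Y.%m.%d-%H.%M"))

# Inside NewVolume one would then, hypothetically, do:
#   Vol = make_volume_name(job.Pool, job.Job, job.Level)
# and append a counter while job.DoesVolumeExist(Vol) is true.
print(make_volume_name("Default", "kernsave", "I",
                       datetime(2007, 10, 24, 18, 0)))
# → Default_kernsave-I-2007.10.24-18.00
```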

> i'm really lost with all this, but this probably is the best time to do 
> this as i have set up a test system before attempting a large upgrade, 
> so postponing the change can come back to me later...
> ...

As always... right.

Unfortunately, as I don't actually use Python events myself (grin), I 
can't provide another working example.

Arno

>> Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



[Bacula-users] bacula-fd 2.2.5 for Mac OS X

2007-10-24 Thread Justin Lott
Some day I will get around to contacting Kern about putting these 
somewhere official.

Until then, .pkg installers for Mac OS X 10.4 Intel and PowerPC are 
available at http://www.pixelchaos.net

- justin



Re: [Bacula-users] FD - SD problem

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 12:33, GDS.Marshall wrote:
> Hello,
> 
>> Hi,
>>
>> 22.10.2007 21:26, GDS.Marshall wrote:
>>> version 2.2.4 patched from sourceforge
>>> Linux kernel 2.6.x
>>>
>>> I am running 10+ FD's, one SD, and one Director.  I am having problems
>>> with one of my FD's, the others are fine.  Not sure if it makes any
>>> difference, but the FD is on the same machine as the Director.
>>> I have no issues with the network, I see no errors on either the
>>> interface
>>> of the FD or the SD.  All FD's are plugged into the same netgear switch.
>>> The SD is plugged into a different netgear switch which is then plugged
>>> into the FD's switch.
>> Are the FD and SD running on the same host (your description says that
>> DIR and problem FD are on the same machine, but not if the DIR and SD
>> are on that same machine, too)?
> No, the SD is on its own machine
> 
> FD+DIR   FD   FD
>   |  | |
>  GSW--- Gig Switch
>   |
>  FSW--- Fast Switch
>   |
>   SD

And the problem connection is between the hosts to the left... ok.

...
>>> 22-Oct 18:56 backupserver-sd: Spooling data ...
>>> 22-Oct 18:56 fileserver-fd: fileserver-backup.2007-10-22_18.54.33 Fatal
>>> error: backup.c:892 Network send error to SD. ERR=Success
>> So the connection breaks shortly after data starts being transferred,
>> right?
> Correct, 2193816 is always written.

Funny. Disk full on the SD, perhaps? Might be worth a look into the 
system log on both machines.

It's a little bit surprising to see an error text of "Success" here... I 
always thought that sort of thing only happened on Windows ;-)
> ROTFL.  The FD, Dir, SD are on linux machines, we have not ventured to the
> Windows FD yet.
> 
>>
>>> I know it says "Network send error", however, I have checked the
>>> network,
>>> and can not find a problem with any of the equipment.
>> Do you have a firewall running on that host?
> No firewalls running on any of the bacula hosts, and the switch is not a
> 3com.

Good enough... regarding network problems, you could try to enable the 
heartbeat function in the FD and / or SD. To find the cause of the 
problem, tcpdump or wireshark might help.

If you see RST packets on the connection between FD and SD, the only 
question is which side generates them...
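The heartbeat is enabled with the Heartbeat Interval directive in the FD and/or SD resources (a sketch; the resource names below are the ones from this thread, and the 60-second value is just an example):

```
# bacula-fd.conf, on the file daemon host
FileDaemon {
  Name = fileserver-fd
  Heartbeat Interval = 60   # send a keepalive every 60 seconds
}

# bacula-sd.conf, on the storage daemon host
Storage {
  Name = backupserver-sd
  Heartbeat Interval = 60
}
```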

...
>> Here it's failed, I think. A higher debug level might reveal more, but
>> this doesn't tell me anything important.
> 
> I am probably going to get flamed for this,

Not by me :-)

> but what value should I use? Currently it is set to 200. I do not want to
> set it too high and swamp the mailing list with data, but neither do I
> want to waste the list's time by making it too low.

Really a difficult question :-)

The best approach might be to run with debug level 400, save the 
resulting logs, and only post the part around the failure first. If 
someone needs more detail, you could post the complete log to a web site.
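For reference, the debug level can also be raised at runtime from bconsole instead of restarting the daemons (a sketch; the exact keyword syntax should be checked against your version, and the daemon names here are the ones from this thread):

```
*setdebug level=400 dir
*setdebug level=400 storage=backupserver-sd
*setdebug level=400 client=fileserver-fd
```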

...
>>> backupserver ~ #
>> With the information from above, I suspect a network problem. Does the
>> run-before job on that client run for a very long time? In such a
>> situation, a firewall/router might close the connection between SD and
>> FD because it seems to be idle.
> The run before job might take half an hour max.  There is no firewall or
> router in the setup.

Hmm... half an hour should not trigger an RST due to idling too long. 
Do your other FDs on the network segment with the DIR have 
long-running scripts, too, or do they transfer data almost immediately 
after the backup jobs are started?

Arno

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Error Handling failed Jobs

2007-10-24 Thread Arno Lehmann
Hi,

your mail arrives here as an attachment, which makes it hard to 
reply... you might want to change that.

> Hello List,
> 
> I have a problem when a backup job fails: the volume selected for it
> gets marked as Error. What I don't understand is that if there is a
> connection problem between the FD and the SD, the volume is never used
> but still gets marked as Error.

The important question is why it's marked as "Error". If the volume 
can't be read correctly, that is the expected behaviour.

> Is it possible to
> manipulate the error handling in case of a connection problem?

You can manually update the status of the volume to "Used" or "Full", 
but should first find out if the volume actually is usable.
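In bconsole that would look something like this (a sketch; the one-line parameter form may differ between versions, the interactive `update` menu works as well, and the volume name is a placeholder):

```
*update volume=Vol0001 volstatus=Used
```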

Arno

> Sincerely


24.10.2007 12:14, Matthias Baake wrote:

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Disk backup recycling

2007-10-24 Thread Radek Hladik
Hi,
option 0 is the best one; however, there are financial drawbacks :-) The
whole situation is like this: I have a Bacula server with two remote
SAN-connected drives. The SAN does mirroring etc., and the SAN drives are
considered stable and safe.
I have a backup rotation scheme with one weekly full backup and six
differential/incremental backups. I need to back up various routers,
servers, important workstations, etc. There are circa 20 clients now.
Total storage to be backed up is around 200-250 GB. The number of
clients should increase over time.
Until now we used a simple ssh+tar solution. I would like to use Bacula to
"tidy up" the whole process and make it more reliable and robust. Disk
space on the SAN is expensive and "precious", and I would like to use it
reasonably. So I have no problem with connecting the clients one after
another, performing a full backup to a new volume and deleting the old
full backup afterwards - this is how it works now with ssh+tar: a client
connects, backs up to a temporary file, and when the backup is complete,
the old backup is deleted and the temporary file is renamed. Clients are
backed up one after another, so I do not need much extra disk space. I
have no problem justifying extra space for one full backup.
I am still considering other options, like spooling to a local SATA drive,
or backing up to a local drive and synchronizing to the SAN drives, but
every solution has some disadvantages...
A full catalog backup will be performed to the SAN drives every day and to
remote servers. I consider it the most valuable data to be backed up :-)

Radek


Marek Simon wrote:
> My opinion on your ideas:
> 0) Leave the schema as I submitted it and buy more disk space for backups. :-)
> 
> 1) It is the best variant, I think. Another advantage is that a full 
> backup of all clients would take much longer than 1/7th full and the 
> rest differential. Now, what to do with the Catalog:
> You can back up the catalog to some removable media (tape, CD/DVD-RW).
> You can push the (zipped and maybe encrypted) catalog to some or all of 
> your clients.
> You can send your (zipped and maybe encrypted) Catalog to some friend of 
> yours (and back up his catalog in return), but that may be a violation 
> of data privacy (even if the Catalog contains only names and sizes).
> You can forward the Bacula messages (completed backups) to some external 
> mail address and then, if needed, reconstruct the job-volume bindings 
> from them.
> The complete catalog is too big to send by e-mail, but you can still run 
> an SQL selection on the catalog after the backup and send the job-volume 
> bindings and other relevant information to the external email address in 
> CSV format.
> You can also (and I strongly recommend it) back up the catalog every 
> time after the daily bunch of jobs and extract it when needed with the 
> other Bacula tools (bextract).
> 
> 2) I thought you were short of disk space, so you couldn't afford to 
> keep the full backup twice plus many differential backups. I do not see 
> the difference between having two full backups on a device for a day or 
> for a few hours; that space is needed anyway. But I think this variant 
> is better used with your original idea: every full backup volume has its 
> own pool, and the Job Schedule is set up to use volume 1 in odd weeks, 
> do the immediate differential (practically zero-sized) backup to volume 
> 2 just after the full one, and vice versa in even weeks. Priorities 
> could help you as well in this case. Maybe some check that the full 
> backup was good would be advisable, but I am not sure if Bacula can do 
> this kind of conditional job run; maybe with some Python hacking or some 
> run-after and run-before scripts.
> You can do the same for differential backups - two volumes in two pools, 
> the first in use and the other cleared - in turns.
> And finally, you can combine this with the previous solution and divide 
> it into sevenths or more parts, but then it would be a real Catalog hell.
> 
> 3) It is the worst solution. If you want to sleep badly every Monday 
> (or whichever day), try it. It is really risky to lose the backup even 
> for a while; an accident can strike at any time.
> 
> Marek
> 
> P.S. I could write this in Czech, but the other readers may be 
> interested too :-)
> 
> Radek Hladik wrote:
>> Hi,
>> thanks for your answer. Your idea sounds good. However, if I understand 
>> it correctly, there will be two full backups for the whole day after 
>> the full backup. This is what I am trying to avoid, as I will be 
>> backing up a lot of clients. So as I see it, I have these possibilities:
>>
>> 1) use your scheme and divide the clients into seven groups. One group 
>> will start its full backup on Monday, the second on Tuesday, etc. So 
>> all week I will have two full backups for 1/7 of the clients. This 
>> really seems like I will need to back up the catalog at least a dozen 
>> times, because no one will be able to deduce which backup is on which volu

Re: [Bacula-users] stored build problems centOS5/postgresql

2007-10-24 Thread Michael Galloway
On Wed, Oct 24, 2007 at 12:37:30PM -0400, Brian A Seklecki (Mobile) wrote:
> 
> Where did you get your pgsql libs/bins/devel-includes?  RH or Pgsql?
> The OpenSSL and Kerberos hooks do not appear to match your system.
>

they came from the centOS updates repo:

[EMAIL PROTECTED] bacula-2.2.5]# rpm -qa | grep postg
postgresql-tcl-8.1.9-1.el5
postgresql-8.1.9-1.el5
postgresql-devel-8.1.9-1.el5
postgresql-libs-8.1.9-1.el5
postgresql-pl-8.1.9-1.el5
postgresql-server-8.1.9-1.el5
 
postgresql.x86_64    8.1.9-1.el5    updates
Matched from:
postgresql
PostgreSQL client programs and libraries.


-- michael



Re: [Bacula-users] stored build problems centOS5/postgresql

2007-10-24 Thread Brian A Seklecki (Mobile)
On Wed, 2007-10-24 at 12:25 -0400, Michael Galloway wrote:
> Good day all, I'm trying to build 2.2.5 on CentOS 5 with PostgreSQL. My
> config looks
> /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:949: 
> undefined reference to `SSL_CTX_new'
> /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:946: 
> undefined reference to `SSL_library_init'
> /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:947: 
> undefined reference to `SSL_load_error_strings'

Where did you get your pgsql libs/bins/devel-includes?  RH or Pgsql?
The OpenSSL and Kerberos hooks do not appear to match your system.
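If the statically linked libpq.a is what drags in the unresolved OpenSSL symbols, one workaround (an assumption to verify, not a confirmed fix) is to hand the SSL and Kerberos libraries to the linker explicitly at configure time:

```
# From the bacula source tree (sketch; the library list is a guess based
# on the undefined SSL_* symbols in the error output):
./configure --with-postgresql LIBS="-lssl -lcrypto -lkrb5"
make
```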

~BAS




[Bacula-users] prevent bacula to backup recently restored files

2007-10-24 Thread Martin Vogt
Hello list,

how can I prevent Bacula from re-writing files to tape after I've recently
restored them to the same location? A similar problem arises when I've moved
them to a new (logical) volume which is mounted under the same mount point
afterwards. No files are changed, but the complete tree is backed up again
when the next backup job runs. I'm pretty sure it's ctime/mtime/atime
related, but what's a safe way to back up only changed files? I can provide
full "stat" output for files in the trees in question, but a first look via
"ls" shows the correct "old" dates.

Our Bacula server runs version 1.38.9, as do most FD clients; some are
1.38.11. An update to 2.x.x in the short term is not an option, because we
have a working production environment including a well-adapted rescue disc
which covers bare-metal restores of the Bacula server and clients.

thanks
Martin


[Bacula-users] stored build problems centOS5/postgresql

2007-10-24 Thread Michael Galloway
Good day all, I'm trying to build 2.2.5 on CentOS 5 with PostgreSQL. My
config looks like this:

  Database lib:   -L/usr/lib64 -lpq -lcrypt
  Database name:  bacula
  Database user:  bacula

  Job Output Email:   
  Traceback Email:
  SMTP Host Address:  

  Director Port:  9101
  File daemon Port:   9102
  Storage daemon Port:9103

  Director User:  
  Director Group: 
  Storage Daemon User:
  Storage DaemonGroup:
  File Daemon User:   
  File Daemon Group:  

  SQL binaries Directory  /usr/bin

  Large file support: yes
  Bacula conio support:   yes -ltermcap
  readline support:   no 
  TCP Wrappers support:   no 
  TLS support:no
  Encryption support: no
  ZLIB support:   yes
  enable-smartalloc:  yes
  bat support:no 
  enable-gnome:   no 
  enable-bwx-console: no 
  enable-tray-monitor:
  client-only:no
  build-dird: yes
  build-stored:   yes
  ACL support:yes
  Python support: yes -L/usr/lib64/python2.4/config -lpython2.4 
-lutil -lrt 
  Batch insert enabled:   yes

but the build fails in building stored with this:

/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/ip.c:81: warning: 
Using 'getaddrinfo' in statically linked applications requires at runtime the 
shared libraries from the glibc version used for linking
../lib/libbac.a(bnet.o): In function `resolv_host':
/usr/local/bacula-2.2.5/src/lib/bnet.c:424: warning: Using 'gethostbyname2' in 
statically linked applications requires at runtime the shared libraries from 
the glibc version used for linking
../lib/libbac.a(address_conf.o): In function `add_address':
/usr/local/bacula-2.2.5/src/lib/address_conf.c:310: warning: Using 
'getservbyname' in statically linked applications requires at runtime the 
shared libraries from the glibc version used for linking
/usr/lib64/libpq.a(fe-misc.o): In function `pqSocketCheck':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-misc.c:972: 
undefined reference to `SSL_pending'
/usr/lib64/libpq.a(fe-secure.o): In function `SSLerrmessage':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1198: 
undefined reference to `ERR_get_error'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1204: 
undefined reference to `ERR_reason_error_string'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_write':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:415: 
undefined reference to `SSL_write'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:416: 
undefined reference to `SSL_get_error'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_read':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:324: 
undefined reference to `SSL_read'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:325: 
undefined reference to `SSL_get_error'
/usr/lib64/libpq.a(fe-secure.o): In function `close_SSL':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1165: 
undefined reference to `SSL_shutdown'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1166: 
undefined reference to `SSL_free'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1172: 
undefined reference to `X509_free'
/usr/lib64/libpq.a(fe-secure.o): In function `open_client_SSL':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1035: 
undefined reference to `SSL_connect'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1038: 
undefined reference to `SSL_get_error'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1115: 
undefined reference to `SSL_get_peer_certificate'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
undefined reference to `X509_get_subject_name'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
undefined reference to `X509_NAME_oneline'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
undefined reference to `X509_get_subject_name'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
undefined reference to `X509_NAME_get_text_by_NID'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_open_client':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_new'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_set_ex_data'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_set_fd'
/usr/lib64/libpq.a(fe-secure.o): In function `destroy_SSL':
/builddir/build/BUILD/postgre

Re: [Bacula-users] iSCSI-Tape problems

2007-10-24 Thread Josh Fisher

Christoph Litauer wrote:
> Dear bacula users,
>
> I am running bacula-mysql 2.2.5 (rpm from sourceforge) on a SuSE SLES
> 10SP1 machine, kernel version 2.6.16 (SMP).
>
> I configured and mounted a tape drive via iSCSI. I can tar from and to
> this drive without problems. I can label a tape with btape and read this
> label without problems. But immediately after writing a few blocks using
> qfill or wr, the machine stops working. One of my write attempts left a
> kernel panic message in /var/log/messages:
>
> Oct 24 13:28:26 bacula kernel: Bad page state in process 'btape'
> Oct 24 13:28:26 bacula kernel: page:c16cce40 flags:0x8000
> mapping: mapcount:0 count:1
> Oct 24 13:28:26 bacula kernel: Trying to fix it up, but a reboot is needed
> Oct 24 13:28:26 bacula kernel: Backtrace:
> Oct 24 13:28:26 bacula kernel:  [] bad_page+0x42/0x68
> Oct 24 13:28:26 bacula kernel:  [] __free_pages_ok+0x55/0xe4
> Oct 24 13:28:26 bacula kernel:  [] normalize_buffer+0x31/0x63 [st]
> Oct 24 13:28:26 bacula kernel:  [] st_release+0x1e/0x45 [st]
> Oct 24 13:28:26 bacula kernel:  [] __fput+0xa1/0x167
> Oct 24 13:28:26 bacula kernel:  [] filp_close+0x4e/0x54
> Oct 24 13:28:26 bacula kernel:  [] sysenter_past_esp+0x54/0x79
> Oct 24 13:28:26 bacula kernel: Bad page state in process 'btape'
> Oct 24 13:28:26 bacula kernel: page:c16cce60 flags:0x8080
> mapping: mapcount:0 count:1
> Oct 24 13:28:26 bacula kernel: Trying to fix it up, but a reboot is needed
> Oct 24 13:28:26 bacula kernel: Backtrace:
> Oct 24 13:28:26 bacula kernel:  [] bad_page+0x42/0x68
> Oct 24 13:28:26 bacula kernel:  [] __free_pages_ok+0x55/0xe4
> Oct 24 13:28:26 bacula kernel:  [] normalize_buffer+0x31/0x63 [st]
> Oct 24 13:28:26 bacula kernel:  [] st_release+0x1e/0x45 [st]
> Oct 24 13:28:26 bacula kernel:  [] __fput+0xa1/0x167
> Oct 24 13:28:26 bacula kernel:  [] filp_close+0x4e/0x54
> Oct 24 13:28:26 bacula kernel:  [] sysenter_past_esp+0x54/0x79
>
> So this seems to be a kernel bug. Googling led me to
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg17890.html
> but this didn't help too much ...
>   

This can happen when ClearPageReserved() is not called for a reserved
kernel memory page. It could be a problem in the iSCSI driver included
with the Linux kernel or, if you are using a hardware iSCSI adapter, a
problem with its driver.

> My question is: What is the difference between the tape write actions of
> tar and btape? Maybe I will be able to configure the tape device so that
> btape doesn't run into kernel panics any more?
>   

Well, btape is very likely using a different block size than you used
with tar. The default block size for btape is 126x512 = 64,512 bytes.
The default for tar is 20x512 = 10,240 bytes. Try setting 'Maximum Block
Size = 10240' and 'Minimum Block Size = 10240' in the bacula-sd.conf
config file being used by btape. If that works, then try tar with '-b
126' (tar's -b option takes a blocking factor in 512-byte units, so 126
gives 64,512-byte blocks) and see if tar causes the same problem with
large block sizes. If neither works with the larger block size, then you
will have to use the smaller block size until the iSCSI bug is fixed.
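For reference, the two directives sit in the Device resource of the
bacula-sd.conf that btape reads. The device name below is illustrative,
and everything else in the resource stays unchanged:

```
Device {
  Name = "iSCSI-Tape"            # illustrative; keep your existing name
  # ... existing Archive Device, Media Type, etc. unchanged ...
  Maximum Block Size = 10240     # 20 x 512 bytes, tar's default blocking
  Minimum Block Size = 10240
}
```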



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem backing up catalog

2007-10-24 Thread mark . bergman


In the message dated: Wed, 24 Oct 2007 14:39:00 BST,
the pithy ruminations from Simon Barrett were:
=> On Tuesday 23 October 2007 14:52:21 Mateus Interciso wrote:
=> > On Tue, 23 Oct 2007 14:44:15 +0100, Chris Howells wrote:
=> > > Mateus Interciso wrote:

[SNIP!]

=> 
=> 
=> On this matter; adding the password to the RunBeforeJob line causes my 
=> database password to appear on the status emails:
=> 
=> 24-Oct 13:09 fs01-dir: BeforeJob: run command "/etc/bacula/make_catalog_backup 
=> bacula bacula MyPasswordHere"
=> 
=> Status emails are sent in clear text across our network.  Is there a 
=> recommended solution to include sensitive variables in the config files 
=> without exposing them like this?  

Sure. Here's one easy solution:

In $BACULA/bacula-dir.conf, have the catalog backup job call a wrapper
script instead of calling make_catalog_backup directly, as in:

=== bacula-dir.conf snippet ===
# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  Type = Backup
  Level = Full
  Messages = Standard
  Priority = 10
  Storage = pv132t
  Prefer Mounted Volumes = yes
  Maximum Concurrent Jobs = 1  
  Pool = Incremental
  Incremental Backup Pool = Incremental
  SpoolData = yes
  Client = parthenon-fd
  FileSet="Catalog"
  Schedule = "AfterBackup"
  RunBeforeJob = "/usr/local/bacula/bin/make_catalog_backup.wrapper"
  RunAfterJob  = "/usr/local/bacula/bin/run_after_catalog_backup"
  Write Bootstrap = "/usr/local/bacula/var/working/BackupCatalog.bsr"
  Priority = 11   # run after main backup
}
===

The wrapper script is something like:

=== make_catalog_backup.wrapper ===
#! /bin/sh
# Keep this file readable only by the bacula user (e.g. chmod 700);
# it holds the database password so that bacula-dir.conf does not.
PASSWORD='MyPasswordHere'   # replace with the real catalog password
exec /usr/local/bacula/bin/make_catalog_backup bacula bacula "$PASSWORD"
===


This will prevent mail from bacula from including the database password. The 
advantage of this method is that it doesn't change make_catalog_backup, so 
future bacula upgrades will be transparent.

The good news is that mysql is security-conscious enough to overwrite the
command line parameter for the password, so a "ps" display doesn't show the
password as part of the mysql command.

Unfortunately, make_catalog_backup is not that smart, and a "ps" (or grepping
through /proc) will show the password on the command-line. If the backup server
is a single user machine that you consider secure, this may not represent too
much of a risk.

On the other hand, if you want to eliminate this problem completely, skip 
the wrapper script and modify make_catalog_backup so that it uses hard-coded 
values from within the script instead of command-line parameters for the 
dbname, the dbuser, and the password.

=> 
=> Regards,
=> 
=> Simon Barrett
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu




[Bacula-users] name of current client

2007-10-24 Thread Markus Goldberg
Hi,
how can I get the name of the current client (host) in bacula-dir.conf?
Please don't answer 'use python' - I don't know how.

I want to use it in a line like: File = "|sh -c 'grep \"^ClientName ...
How can I fill in ClientName?

thanks,
   Markus


Markus Goldberg | Universität Hildesheim
 | Rechenzentrum
Tel +49 5121 883203 | Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 883205 | email [EMAIL PROTECTED]




[Bacula-users] iSCSI-Tape problems

2007-10-24 Thread Christoph Litauer
Dear bacula users,

I am running bacula-mysql 2.2.5 (rpm from sourceforge) on a SuSE SLES
10SP1 machine, kernel version 2.6.16 (SMP).

I configured and mounted a tape drive via iSCSI. I can tar from and to
this drive without problems. I can label a tape with btape and read this
label without problems. But immediately after writing a few blocks using
qfill or wr, the machine stops working. One of my write attempts left a
kernel panic message in /var/log/messages:

Oct 24 13:28:26 bacula kernel: Bad page state in process 'btape'
Oct 24 13:28:26 bacula kernel: page:c16cce40 flags:0x8000
mapping: mapcount:0 count:1
Oct 24 13:28:26 bacula kernel: Trying to fix it up, but a reboot is needed
Oct 24 13:28:26 bacula kernel: Backtrace:
Oct 24 13:28:26 bacula kernel:  [] bad_page+0x42/0x68
Oct 24 13:28:26 bacula kernel:  [] __free_pages_ok+0x55/0xe4
Oct 24 13:28:26 bacula kernel:  [] normalize_buffer+0x31/0x63 [st]
Oct 24 13:28:26 bacula kernel:  [] st_release+0x1e/0x45 [st]
Oct 24 13:28:26 bacula kernel:  [] __fput+0xa1/0x167
Oct 24 13:28:26 bacula kernel:  [] filp_close+0x4e/0x54
Oct 24 13:28:26 bacula kernel:  [] sysenter_past_esp+0x54/0x79
Oct 24 13:28:26 bacula kernel: Bad page state in process 'btape'
Oct 24 13:28:26 bacula kernel: page:c16cce60 flags:0x8080
mapping: mapcount:0 count:1
Oct 24 13:28:26 bacula kernel: Trying to fix it up, but a reboot is needed
Oct 24 13:28:26 bacula kernel: Backtrace:
Oct 24 13:28:26 bacula kernel:  [] bad_page+0x42/0x68
Oct 24 13:28:26 bacula kernel:  [] __free_pages_ok+0x55/0xe4
Oct 24 13:28:26 bacula kernel:  [] normalize_buffer+0x31/0x63 [st]
Oct 24 13:28:26 bacula kernel:  [] st_release+0x1e/0x45 [st]
Oct 24 13:28:26 bacula kernel:  [] __fput+0xa1/0x167
Oct 24 13:28:26 bacula kernel:  [] filp_close+0x4e/0x54
Oct 24 13:28:26 bacula kernel:  [] sysenter_past_esp+0x54/0x79

So this seems to be a kernel bug. Googling led me to
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg17890.html
but this didn't help too much ...

My question is: What is the difference between the tape write actions of
tar and btape? Maybe I will be able to configure the tape device so that
btape doesn't run into kernel panics any more?

-- 
Regards
Christoph

Christoph Litauer  [EMAIL PROTECTED]
Uni Koblenz, Computing Center, http://www.uni-koblenz.de/~litauer
Postfach 201602, 56016 Koblenz Fon: +49 261 287-1311, Fax: -100 1311
PGP-Fingerprint: F39C E314 2650 650D 8092 9514 3A56 FBD8 79E3 27B2




Re: [Bacula-users] Problem backing up catalog

2007-10-24 Thread Dan Langille
On 24 Oct 2007 at 14:39, Simon Barrett wrote:

> On Tuesday 23 October 2007 14:52:21 Mateus Interciso wrote:
> > On Tue, 23 Oct 2007 14:44:15 +0100, Chris Howells wrote:
> > > Mateus Interciso wrote:
> > >> But in the bacula-dir.conf file, I do have set up the bacula
> > >> password in the Catalog configuration, so why is it not even trying to
> > >> use it? The other backups run absolutely normally.
> > >
> > > You are running make_catalog_backup with the wrong arguments. This is
> > > configured via the RunBeforeJob line, not the catalog resource.
> > >
> > >  From 'make_catalog_backup' (which is a shell script).
> > >
> > > #  $1 is the name of the database to be backed up and the name
> > > #     of the output file (default = bacula).
> > > #  $2 is the user name with which to access the database
> > > #     (default = bacula).
> > > #  $3 is the password with which to access the database, or "" if no
> > > #     password (default "")
> > >
> > >
> > > So you need a third argument, which is the db password. Modify
> > >
> > > RunBeforeJob = "make_catalog_backup bacula bacula"
> > >
> > > to read
> > >
> > > RunBeforeJob = "make_catalog_backup bacula bacula "
> >
> > -
> >
> > Now it worked, thanks :D
> > I just wonder why it was working before, since the configuration files
> > are exactly the same, and the env is the same as well
> 
> 
> On this matter; adding the password to the RunBeforeJob line causes my 
> database password to appear on the status emails:
> 
> 24-Oct 13:09 fs01-dir: BeforeJob: run command 
> "/etc/bacula/make_catalog_backup 
> bacula bacula MyPasswordHere"
> 
> Status emails are sent in clear text across our network.  Is there a 
> recommended solution to include sensitive variables in the config files 
> without exposing them like this?  

http://www.bacula.org/dev-manual/Catalog_Maintenance.html

Click on Security considerations


-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





[Bacula-users] wx-console on Windows won't talk to director on Linux (bug?)

2007-10-24 Thread David L. Lambert
 

I've been running Bacula for several months to back up several Linux
servers to disk on one of them.  I also installed Bacula on a Windows
workstation, and was able to run backup jobs from it (that is, pulling
data from the file-daemon there).  However, when I run the wx-console on
Windows and point it at the director running on Ubuntu, I get errors
like the following:

 

Welcome to bacula wx-console 2.0.3 (06 March 2007)!

Using this configuration file: C:\Documents and Settings\All
Users\Application Data\Bacula\wx-console.conf

Connecting...

Connected

1000 OK: IT-dir Version: 2.0.3 (06 March 2007)

.helpautodisplay: is an invalid command.

#.messages

: is an invalid command.

#.help: is an invalid command.

.messages

: is an invalid command.

.messages

: is an invalid command.

.messages

: is an invalid command.

.messages

: is an invalid command.

 

When I go into the "restore" screen, the drop-down boxes for client,
etc., don't have suitably-populated lists; instead, they say stuff like
".clients?: is an invalid command" where the "?" is actually a little
square box.  My guess is that the console is sending lines terminated by
"\r\n", and the director is only stripping the "\n" from each line
before parsing it.  
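If that guess is right, the symptom is easy to reproduce in a few lines
(a sketch of the suspected parsing behaviour, not Bacula's actual code):

```python
# A console command arrives over the wire as ".messages\r\n".
# Stripping only "\n" leaves the "\r" attached, so the parser
# sees ".messages\r" and reports ': is an invalid command'.
line = ".messages\r\n"
stripped = line.rstrip("\n")    # removes "\n" but not the "\r" before it
print(repr(stripped))           # -> '.messages\r'
```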

 

 

 

--

David Lee Lambert

Software Developer, Precision Motor Transport Group, LLC

517-349-3011 x223 (work) ... 586-873-8813 (cell)

 



Re: [Bacula-users] Problem backing up catalog

2007-10-24 Thread Simon Barrett
On Tuesday 23 October 2007 14:52:21 Mateus Interciso wrote:
> On Tue, 23 Oct 2007 14:44:15 +0100, Chris Howells wrote:
> > Mateus Interciso wrote:
> >> But in the bacula-dir.conf file, I do have set up the bacula
> >> password in the Catalog configuration, so why is it not even trying to
> >> use it? The other backups run absolutely normally.
> >
> > You are running make_catalog_backup with the wrong arguments. This is
> > configured via the RunBeforeJob line, not the catalog resource.
> >
> >  From 'make_catalog_backup' (which is a shell script).
> >
> > #  $1 is the name of the database to be backed up and the name
> > #     of the output file (default = bacula).
> > #  $2 is the user name with which to access the database
> > #     (default = bacula).
> > #  $3 is the password with which to access the database, or "" if no
> > #     password (default "")
> >
> >
> > So you need a third argument, which is the db password. Modify
> >
> > RunBeforeJob = "make_catalog_backup bacula bacula"
> >
> > to read
> >
> > RunBeforeJob = "make_catalog_backup bacula bacula "
>
> -
>
> Now it worked, thanks :D
> I just wonder why it was working before, since the configuration files
> are exactly the same, and the env is the same as well


On this matter; adding the password to the RunBeforeJob line causes my 
database password to appear on the status emails:

24-Oct 13:09 fs01-dir: BeforeJob: run command "/etc/bacula/make_catalog_backup 
bacula bacula MyPasswordHere"

Status emails are sent in clear text across our network.  Is there a 
recommended solution to include sensitive variables in the config files 
without exposing them like this?  

Regards,

Simon Barrett



[Bacula-users] [SOLVED] Re: spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> IEM - network operating center (IOhannes m zmoelnig) wrote:
>> hi all
>>
>>
>> the emails usually are sent when nothing interesting is happening (my 
>> backups are all finished at night; the emails are sent somewhen during 
>> the day).
>>
>> however, receiving tracebacks gives me an uneasy feeling.
>>
>> any idea why they might occur? (ok, my information i give here is rather 
>> sparse, so i should ask: how should i investigate why they occur?)
> 
> seems i forgot something crucial (at least i think it is important):
> 
> all 3 affected daemons are running after that.
> most likely they are not the same processes (PIDs) as before the tracebacks,
> but bacula is doing backups before and after the tracebacks occurred
> (and no backups are scheduled at the moment the tracebacks occur).
> 

it turned out to be rather simple:
after a good look at the headers of the traceback emails, i noticed that 
they had not been sent from my backup machine at all, but from the 
machine where i did my initial tests (before deploying bacula).

so the tracebacks were totally unrelated to my running backup system, 
and instead might be related to my everyday updating and fuzzing around 
on the test machine...

sorry for the noise

fasd.r
IOhannes



-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] Job started twice

2007-10-24 Thread Michael Short
Here is the configuration for the client, it is the same as every other.

#DIRECTOR
Client {
  Name = "sv27"
  Address = 10.123.0.25
  FDPort = 1
  Catalog = "MyCatalog"
  Password = ""
  AutoPrune = no
}
Job {
  Name = "sv27"
  Client = "sv27"
  JobDefs = "secnet-def"
  Write Bootstrap = "/home/bacula/sv27.bsr"
  Schedule = "sv27"
  Storage = "sv27"
  Pool = "sv27"
  Fileset = "sv27"
  Enabled = "yes"
}
Pool {
  Name = "sv27"
  Pool Type = Backup
  Recycle = no
  AutoPrune = no
  LabelFormat = "sv27"
  UseVolumeOnce = yes
}
Storage {
  Name = "sv27"
  Address = 10.123.0.1
  SDPort = 10001
  Password = ""
  Media Type = File
  Device = "sv27"
}
Storage {
  Name = "sv27-onsite"
  Address = 10.123.0.25
  SDPort = 10001
  Password = ""
  Media Type = File
  Device = "sv27"
}
FileSet {
  Name = "sv27"
  Ignore FileSet Changes = yes
  Enable VSS = yes
  Include {
Options {
  signature = MD5
  compression = GZIP
  sparse = yes
}
File = "c:/"
File = "d:/"
  }
}
Schedule {
  Name = "sv27"
  Run = Level=Incremental sun-sat at 18:00
}


#STORAGE DAEMON
Device {
  Name = "sv27"
  Media Type = File
  Archive Device = "/home/bacula/storage/d1"
  LabelMedia = yes; # Automatically label new volumes
  Random Access = Yes; # Filesystem environment
  AutomaticMount = yes; # Filesystem is always available
  RemovableMedia = no; # A filesystem is NOT removable
  AlwaysOpen = no; # Not important for filesystem usage
}



Re: [Bacula-users] Disk backup recycling

2007-10-24 Thread Marek Simon
My opinion on your ideas:
0) Leave the scheme as I submitted it, and buy more disk space for backups. :-)

1) It is the best variant, I think. The other advantage is that a full 
backup of all clients would take much longer than 1/7th full and the 
rest differential. Now, what to do with the Catalog:
You can back up the catalog to some removable media (tape, CD/DVD-RW).
You can pull the (zipped and maybe encrypted) catalog to some or all of 
your clients.
You can send your (zipped and maybe encrypted) Catalog to some friend of 
yours (and back up his catalog in return), but that may be a 
violation of data privacy (even if the Catalog contains only names and 
sizes).
You can forward the bacula messages (completed backups) to some external 
mail address and then, if needed, reconstruct the job-volume 
bindings from them.
The complete catalog is too big to send by e-mail, but you can still 
run an SQL selection on the catalog after the backup and send the job-volume 
bindings and other relevant information to the external email 
address in CSV format.
You can also (and I strongly recommend it) back up the catalog every 
time after the daily bunch of jobs and extract it when needed with the other 
bacula tools (bextract).

2) I thought you were short of disk space, so you couldn't afford to keep 
the full backup twice plus many differential backups. So I do not see 
the difference between having two full backups on a device for a day or for 
a few hours; I need that space anyway. But I think this variant is better 
combined with your original idea: every full backup volume has its 
own pool, and the Job Schedule is set up to use volume 1 in odd weeks and 
do an immediate differential (practically zero-sized) backup to 
volume 2 just after the full one, and vice-versa in even weeks. 
Priorities could help you as well in this case. Maybe some check that the 
full backup was good would be advisable, but I am not sure whether bacula 
can do this kind of conditional job run; maybe with some python hacking or 
some Run Before and Run After scripts.
You can do the same for differential backups - two volumes in two pools, 
the first used and the other cleared - in turns.
And finally, you can combine it with the previous solution and divide it 
into sevenths or more parts, but then it would be a real Catalog hell.

3) It is the worst solution. If you want bad sleep every Monday 
(or whichever day), try it. It is really risky to lose the backup even for a 
while; an accident can strike at any time.

Marek

P.S. I could write this in Czech, but the other readers may be interested 
too :-)

Radek Hladik wrote:
> Hi,
>   thanks for your answer. Your idea sounds good. However if I understand 
> it correctly, there will be two full backups for the whole day after 
> full backup. This is what I am trying to avoid as I will be backing up a 
> lot of clients. So as I see it I have these possibilities:
>
> 1) use your scheme and divide the clients into seven groups. One group will 
> start its full backup on Monday, the second on Tuesday, etc. So all 
> week I will have two full backups for 1/7 of the clients. This really seems 
> like I will need to back up the catalog at least a dozen times, because no 
> one will be able to deduce which backup is on which volume :-)
> 2) modify your scheme so that another differential backup runs right 
> after the full backup, before the next job starts. It will effectively erase 
> the last week's full backup.
> 3) use only 7 volumes and retention of 6 days, and live with the fact that 
> there is no backup during a backup.
>
> Now I only need to decide which option will be the best one :-)
>
> Radek
>
>   
>



Re: [Bacula-users] Problems with 'Automatic Volume Labeling'

2007-10-24 Thread Rich
On 2007.10.23. 22:02, Arno Lehmann wrote:
...
> That requirement, by the way, is not very problematic. The sample 
> given in the manual should almost work out of the box, and python is 
> easier to learn than Bacula's variable substitution language :-)

i do not agree with that ;)

>>> This is described in the manual, for example 
>>> http://www.bacula.org/dev-manual/Python_Scripting.html#SECTION00356
>> ...

i'm now trying to understand at least something in all this...

1. the example has a section that does not simply state noop, but is 
preceded by it:

   def JobInit(self, job):
  noop = 1
  if (job.JobId < 2):
 startid = job.run("run kernsave")
 job.JobReport = "Python started new Job: jobid=%d\n" % startid
  print "name=%s version=%s conf=%s working=%s" % (bacula.Name, 
bacula.Version, bacula.ConfigFile, bacula.WorkingDir)

manual says "If you do not want a particular event, simply replace the 
existing code with a noop = 1."

what does this section do then ? is preceding it with noop = 1 also 
disabling it ?

2. the volume label itself.
i guess i should leave at least parts of "def JobStart", right ?
it creates JobEvents class, which in turn includes what seems to be the 
correct section - "def NewVolume".

i suppose this line should be modified :
Vol = "TestA-%d" % numvol

later, two other lines contain the same string:

job.JobReport = "Exists=%d TestA-%d" % (job.DoesVolumeExist(Vol), numvol)
job.VolumeName="TestA-%d" % numvol

must i replace all of these strings ? if yes, can't they just be 
referenced from the first string, "Vol" ?

now, as for constructing the volume label string... it seems that most 
variables map to something on "job." - like job.Pool, job.Job, job.Level etc.

how would i use these variables to define the label ?

i am trying to imitate a volume label like
"${Pool}_${Job}-${Level}-${Year}.${Month:p/2/0/r}.${Day:p/2/0/r}-${Hour:p/2/0/r}.${Minute:p/2/0/r}"

which would also require variables like year, month etc. how can i 
include those ?
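one way to get the date parts is to skip Bacula's substitution variables 
entirely and use Python's own time module inside the script. Below is a 
sketch of a helper that imitates that label format; the job.Pool / job.Job / 
job.Level attribute names are taken from the manual's example and are 
assumptions, not verified against your Bacula version:

```python
import time

def make_label(pool, job_name, level, t=None):
    """Imitate ${Pool}_${Job}-${Level}-YYYY.MM.DD-HH.MM as a volume label."""
    if t is None:
        t = time.localtime()  # year, month, day, hour, minute come from here
    return "%s_%s-%s-%04d.%02d.%02d-%02d.%02d" % (
        pool, job_name, level,
        t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min)

# Inside the manual's JobEvents class, NewVolume would then become roughly:
#     def NewVolume(self, job):
#         job.VolumeName = make_label(job.Pool, job.Job, job.Level)
#         return 1

print(make_label("Full", "BackupCatalog", "F",
                 time.strptime("2007-10-24 13:09", "%Y-%m-%d %H:%M")))
```

with the single helper you only build the label string once, instead of 
repeating "TestA-%d" in three places as the manual's example does.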

i'm really lost with all this, but this probably is the best time to do 
this as i have set up a test system before attempting a large upgrade, 
so postponing the change can come back to me later...
...
> Arno
-- 
  Rich



Re: [Bacula-users] spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> hi all
> 
> 
> the emails usually are sent when nothing interesting is happening (my 
> backups are all finished at night; the emails are sent somewhen during 
> the day).
> 
> however, receiving tracebacks gives me an uneasy feeling.
> 
> any idea why they might occur? (ok, my information i give here is rather 
> sparse, so i should ask: how should i investigate why they occur?)

seems i forgot something crucial (at least i think it is important):

all 3 affected daemons are running after that.
most likely they are not the same processes (PIDs) as before the tracebacks, 
but bacula is doing backups before and after the tracebacks occurred 
(and no backups are scheduled at the moment the tracebacks occur).


fma.dsr
IOhannes

-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
Brian A Seklecki (Mobile) wrote:

>> every now and then (the last one was today; then 1 month before; then 10 
>> days before that,...) i get traceback emails from my backup-server, each 
>> for all of the 3 daemons (dir, file, storage) running there.
> 
> *) Run the SD in foreground mode with debug level setup


i will try that.


> *) Run the SD in ktrace/ptrace

i try to avoid this as long as possible: i don't feel (yet) like running 
the SD in ptrace for an entire month to see something in the end.

> Is something possibly sending the process a signal?  A log rotation
> script etc?

i was thinking along these lines too.
but then the event happens at indeterminate times (at least for me :-)), 
so it shouldn't be related to a cron job (there are no cron entries 
regarding bacula that i could find; nor do the exact times when 
such an event occurs relate to anything in the cron settings)

logrotate is in operation, running a monthly rotation which 
does not correspond to the tracebacks.


anyhow, thanks for the suggestions.

fdamrd
IOhannes


-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



[Bacula-users] Error Handling failed Jobs

2007-10-24 Thread Matthias Baake


msg.pgp
Description: PGP message


Re: [Bacula-users] FD - SD problem

2007-10-24 Thread GDS.Marshall
Hello,

> Hi,
>
> 22.10.2007 21:26,, GDS.Marshall wrote::
>> version 2.2.4 patched from sourceforge
>> Linux kernel 2.6.x
>>
>> I am running 10+ FD's, one SD, and one Director.  I am having problems
>> with one of my FD's, the others are fine.  Not sure if it makes any
>> difference, but the FD is on the same machine as the Director.
>
>> I have no issues with the network, I see no errors on either the
>> interface
>> of the FD or the SD.  All FD's are plugged into the same netgear switch.
>> The SD is plugged into a different netgear switch which is then plugged
>> into the FD's switch.
>
> Are the FD and SD running on the same host (your description says that
> DIR and problem FD are on the same machine, but not if the DIR and SD
> are on that same machine, too)?
No, the SD is on its own machine

FD+DIR   FD   FD
   |      |    |
  GSW--- Gig Switch
   |
  FSW--- Fast Switch
   |
   SD

>
>> I run a backup job (or via schedule) and the amount/size/volume of data
>> is
>> transfered each time, and then everything stops/hangs/does nothing.
>>
>> ls -l
>> /var/data/bacula/spool/backupserver-sd.data.472.fileserver-backup.2007-10-22_18.54.33.DLT-V4.spool
>> -rw-r- 1 root bacula 2193816 Oct 22 18:56
>>
>> A short while later, I will get a console message
>> 22-Oct 18:56 backupserver-sd: 3301 Issuing autochanger "loaded? drive 0"
>> command.
>> 22-Oct 18:56 backupserver-sd: 3302 Autochanger "loaded? drive 0", result
>> is Slot 3.
>> 22-Oct 18:56 backupserver-sd: Volume "CNI906" previously written, moving
>> to end of data.
>> 22-Oct 18:56 backupserver-sd: Ready to append to end of Volume "CNI906"
>> at
>> file=1.
>> 22-Oct 18:56 backupserver-sd: Spooling data ...
>> 22-Oct 18:56 fileserver-fd: fileserver-backup.2007-10-22_18.54.33 Fatal
>> error: backup.c:892 Network send error to SD. ERR=Success
>
> So the connection breaks shortly after data starts being transferred,
> right?
Correct, 2193816 is always written.

>
> It's a little bit surprising to see an error text of Success here... I
> always thought that sort of things only happened on windows ;-)
ROTFL.  The FD, Dir, SD are on linux machines, we have not ventured to the
Windows FD yet.

>
>
>> I know it says "Network send error", however, I have checked the
>> network,
>> and can not find a problem with any of the equipment.
>
> Do you have a firewall running on that host?
No firewalls running on any of the bacula hosts, and the switch is not a
3com.

>
>> I have run the fd and sd with debug options to provide additional
>> output,
>> I hope this helps.
>>
>> If any other information would help in diagnosis, please just ask for
>> it.
>>
>>
>> /usr/local/sbin/bacula-fd -f -s -d 200 -u root -g bacula -c
>> /etc/bacula/bacula-fd.conf
>>
>> /home/spencer/bacula-sd -f -d 200 -s -u root -g bacula -c
>> /etc/bacula/bacula-sd.conf
>>
>> cat /root/bacula-fd.log
>> bacula-fd: filed_conf.c:438 Inserting director res: fileserver-mon
>> fileserver-fd: jcr.c:132 read_last_jobs seek to 188
>> fileserver-fd: jcr.c:139 Read num_items=10
>> fileserver-fd: pythonlib.c:113 No script dir. prog=FDStartUp
>> fileserver-fd: filed.c:225 filed: listening on port 9102
>> fileserver-fd: bnet_server.c:96 Addresses host[ipv4:0.0.0.0:9102]
>> fileserver-fd: bnet.c:666 who=client host=192.168.1.30 port=36387
>> fileserver-fd: jcr.c:602 OnEntry JobStatus=fileserver-fd: jcr.c:622
>> OnExit
>> JobStatus=C set=C
>> fileserver-fd: find.c:81 init_find_files ff=8094e60
>> fileserver-fd: job.c:233 > fileserver-fd: job.c:249 Executing Hello command.
>> fileserver-fd: job.c:353 Calling Authenticate
>> fileserver-fd: cram-md5.c:71 send: auth cram-md5
>> <[EMAIL PROTECTED]> ssl=0
>> fileserver-fd: cram-md5.c:131 cram-get: auth cram-md5
>> <[EMAIL PROTECTED]> ssl=0
>> fileserver-fd: cram-md5.c:150 sending resp to challenge:
>> 6U+ZK4lCcB/uXh+k+X/qdB
>> fileserver-fd: job.c:357 OK Authenticate
>> fileserver-fd: job.c:233 > Job=-Console-.2007-10-22_18.53.31
>> SDid=0 SDtime=0 Authorization=dummy
>> fileserver-fd: job.c:249 Executing JobId= command.
>> fileserver-fd: job.c:451 JobId=0 Auth=dummy
>> fileserver-fd: job.c:233 > status command.
>> fileserver-fd: runscript.c:102 runscript: running all RUNSCRIPT object
>> (ClientAfterJob) JobStatus=C
>> fileserver-fd: pythonlib.c:237 No startup module.
>> fileserver-fd: job.c:337 Calling term_find_files
>> fileserver-fd: job.c:340 Done with term_find_files
>> fileserver-fd: mem_pool.c:377 garbage collect memory pool
>> fileserver-fd: job.c:342 Done with free_jcr
>> fileserver-fd: bnet.c:666 who=client host=192.168.1.30 port=36387
>> fileserver-fd: jcr.c:602 OnEntry JobStatus=fileserver-fd: jcr.c:622
>> OnExit
>> JobStatus=C set=C
>> fileserver-fd: find.c:81 init_find_files ff=8094e60
>> fileserver-fd: job.c:233 > fileserver-fd: job.c:249 Executing Hello command.
>> fileserver-fd: job.c:353 Calling Authenticate
>> fileserver-fd: cram-md5.c:71 send: auth cram-md5
>> <[EMAIL PROTECTED]> ssl=0
>> fileserver-fd: cram-md5.c:131 cram

Re: [Bacula-users] Calling client script ( Before / After jobs ) with parameters

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 10:44, [EMAIL PROTECTED] wrote:
> El mar, 23-10-2007 a las 20:35 +0200, Arno Lehmann escribió:
>> Hello,
>>
> 
>>> I'm a completely python ignorant, and before spend time trying to
>>> understand how to translate this behavior to python way of life, I
>>> prefer ear from you.
>> You won't get my ear :-)
>>
> 
> Ahem... ok, may be I prefer listen :P
 >
> The abstract idea is to define the rman variables informed in the client
> definitions and using the same job definition for standard backup
> procedures

Yeah... well, passing the parameters through the command definition in 
the job should work. Getting them from outside of Bacula might be more 
complicated, but you'd have to try that...


>> The rest of your approach seems reasonable and should be easily 
>> implemented. I found python was easy to learn and use, so for me it 
>> was definitely worth the effort.
>>
>> Of course, using bash or whatever shell you prefer can be done, too. I 
>> think it's more or less a question of what you prefer.
>>
>> If you start working on it, and expect to spend some time on this 
>> anyway, I'd suggest to pipe the rman output to Bacula directly, 
>> without the need of a dump file in between. That saves you a bit of 
>> disk space on the client.
>>
> 
> 
> That's very interesting, I will investigate how rman works with fifos,
> but afaik, fifo has a maximum throughput of 8K/s, it's correct?

Not per se... see here:

neuelf:/tmp # mkfifo testfifo
neuelf:/tmp # cat testfifo > /dev/null &
[1] 22889
neuelf:/tmp # dd if=/dev/zero bs=4096 of=testfifo count=2M
2097152+0 records in
2097152+0 records out
8589934592 bytes (8.6 GB) copied, 17.5983 s, 488 MB/s
[1]+  Done                    cat testfifo > /dev/null

A bit better than 8k/second, I'd say.

This is on a dual-core Opteron running a 2.6.22.5 linux kernel, by the 
way. The machine is practically idle all the time.

> 
>> The necessary parts are all available - you can backup from and 
>> restore to a FIFO in Bacula, and you can call scripts to set up the 
>> pipes and reader/writer processes. There are examples available, I'd 
>> start reading at wiki.bacula.org if I were you.
>>
> 
> I will take a look to see if fifo approach fits well in bacula/rman
> integration.

As far as I know, it should. Not being an Oracle user, I don't have 
the personal experience, though.
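For the archives, a minimal, untested sketch of what the FileSet for such a
FIFO backup could look like. The readfifo option and the overall shape come
from Bacula's FileSet resource; the FIFO path and the setup-script name are
hypothetical:

```conf
# Sketch only -- /var/spool/bacula/rman.fifo and start-rman-pipe.sh are
# hypothetical names.  "readfifo = yes" tells the FD to open the FIFO and
# back up the data flowing through it, not the FIFO node itself.
FileSet {
  Name = "rman-fifo"
  Include {
    Options {
      signature = MD5
      readfifo = yes
    }
    File = /var/spool/bacula/rman.fifo
  }
}

Job {
  Name = "oracle-rman"
  FileSet = "rman-fifo"
  # script that creates the FIFO and starts rman writing into it in the
  # background, before the FD opens the FIFO for reading
  RunBeforeJob = "/usr/local/bin/start-rman-pipe.sh"
  ...
}
```

The same idea works in reverse for restores, with the script starting a
reader process and Bacula writing the restored stream into the FIFO.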

Arno

>> Of course, whatever you come up with might be worth adding to the 
>> wiki, too.
>>
>> Arno
>>
> 
> Thanks again
> 
> D.
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Calling client script ( Before / After jobs ) with parameters

2007-10-24 Thread [EMAIL PROTECTED]
El mar, 23-10-2007 a las 20:35 +0200, Arno Lehmann escribió:
> Hello,
> 

> > 
> > I'm a completely python ignorant, and before spend time trying to
> > understand how to translate this behavior to python way of life, I
> > prefer ear from you.
> 
> You won't get my ear :-)
> 

Ahem... ok, may be I prefer listen :P

The abstract idea is to define the rman variables informed in the client
definitions and using the same job definition for standard backup
procedures

> The rest of your approach seems reasonable and should be easily 
> implemented. I found python was easy to learn and use, so for me it 
> was definitely worth the effort.
> 
> Of course, using bash or whatever shell you prefer can be done, too. I 
> think it's more or less a question of what you prefer.
> 
> If you start working on it, and expect to spend some time on this 
> anyway, I'd suggest to pipe the rman output to Bacula directly, 
> without the need of a dump file in between. That saves you a bit of 
> disk space on the client.
> 


That's very interesting, I will investigate how rman works with fifos,
but afaik, fifo has a maximum throughput of 8K/s, it's correct?


> The necessary parts are all available - you can backup from and 
> restore to a FIFO in Bacula, and you can call scripts to set up the 
> pipes and reader/writer processes. There are examples available, I'd 
> start reading at wiki.bacula.org if I were you.
> 

I will take a look to see if fifo approach fits well in bacula/rman
integration.

> Of course, whatever you come up with might be worth adding to the 
> wiki, too.
> 
> Arno
> 

Thanks again

D.




Re: [Bacula-users] need help defining client files for backup...

2007-10-24 Thread Arno Lehmann
Hi,

24.10.2007 04:31, David Gardner wrote:
> Guys,
> 
> If I understand you correctly, the following should be a_piece_ of
> the bacula-dir.conf. I want to understand this but the problem
> still remains, how do I tie the files on WEB1, DB2 and RptEngine
> into the default job?

Short answer: You don't :-)

In the JobDefs, you have the common settings for the jobs referring 
to this default setup.

In each job, you override the settings that change - in your case, the 
FileSet.

So you end up with a common JobDefs resource used by all your jobs, 
and in each job definition, you have a line for the FileSet, the 
Client, and the bootstrap file to write. Of course, other settings 
could be changed, too - for example the schedule, the pools to use, 
and the retention times.

You end up with something like this:

JobDefs {
   Name = DefaultJob
   ... default settings
}

Client {
   Name = 1
   ...
}

Client {
   Name = 2
   ...
}

FileSet {
   Name = F1
   ...
}

FileSet {
   Name = F2
   ...
}

Job {
   Name = J1
   JobDefs = DefaultJob
   Client = 1
   Fileset = F1
   ...
}

Job {
   Name = J2
   JobDefs = DefaultJob
   Client = 2
   FileSet = F2
   ...
}

Does that make more sense?

Arno

> 
> JobDefs {
>   Name = "DefaultJob"
>   Type = Backup
>   Level = Incremental
>   Client = DURANGO-fd
>   FileSet = "Full Set"
>   Schedule = "WeeklyCycle"
>   Storage = DAT72
>   Messages = ConsOnly          # no email during testing
>   Pool = Default
>   Priority = 10
> }
> 
> # Localhost to backup
> Client {
>   Name = DURANGO-fd
>   Address = DURANGO
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "l4...y"          # password for FileDaemon
>   File Retention = 30 days     # 30 days
>   Job Retention = 6 months     # six months
>   AutoPrune = yes              # Prune expired Jobs/Files
> }
> 
> # Local files
> FileSet {
>   Name = "Full Set"
>   Include {
>     Options { signature = MD5 }
>     File = /home/dgardner
>   }
> }
> 
> # Database Client to backup
> Client {
>   Name = DB2-fd
>   Address = DB2
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "l4...2"          # password for FileDaemon
>   File Retention = 30 days     # 30 days
>   Job Retention = 6 months     # six months
>   AutoPrune = yes              # Prune expired Jobs/Files
> }
> 
> # Database files
> FileSet {
>   Name = "Full Set"
>   Include {
>     Options { Compression = GZIP }
>     File = /var/lib/mysql/airadvice-backup/c*
>   }
> }
> 
> # Web Server Client to backup
> Client {
>   Name = WEB1-fd
>   Address = WEB1
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "l4...2"          # password for FileDaemon
>   File Retention = 30 days     # 30 days
>   Job Retention = 6 months     # six months
>   AutoPrune = yes              # Prune expired Jobs/Files
> }
> 
> # Webserver files
> FileSet {
>   Name = "Full Set"
>   Include {
>     Options { Compression = GZIP }
>     File = /web/sites/*
>   }
> }
> 
> # Report Engine Client to backup
> Client {
>   Name = RptEngine1-fd
>   Address = RptEngine1
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "l4...2"          # password for FileDaemon
>   File Retention = 30 days     # 30 days
>   Job Retention = 6 months     # six months
>   AutoPrune = yes              # Prune expired Jobs/Files
> }
> 
> # RptEngine files
> FileSet {
>   Name = "Full Set"
>   Include {
>     Options { Compression = GZIP }
>     File = /usr/local/reports/A*
>   }
> }
> 
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> David Gardner
> email: djgardner(at)yahoo.com
> Yahoo! IM: djgardner
> AIM: dgardner09
> "Everything is a learning experience, even a mistake."
> 
> ----- Original Message -----
> From: Arno Lehmann <[EMAIL PROTECTED]>
> To: bacula-users@lists.sourceforge.net
> Sent: Tuesday, October 23, 2007 3:01:24 PM
> Subject: Re: [Bacula-users] need help defining client files for backup...
> 
> 
> Hi,
> 
> 23.10.2007 23:57, David Gardner wrote:
>> Hey gang,
>> 
>> I've read through all the docs I can find on the subject but just
>>  cannot decipher the correct method of describing in a fileset
>> which directories on which client machine should be backed up.
>> 
>> Here's the Linux Server:/directories I want backed up where the 
>> connections have all been made and I successfully backed up the
> local files:
>> {localhost:}/home/* RptEngine1:/usr/local/reports/A*.pdf 
>> DB2:/var/lib/mysql/* WEB1:/web/sites/*
>> 
>> Any help would be appreciated.
> 
> You define the necessary clients first.
> 
> Then you create the filesets you need - there will be at least four
> of them.
> 
> /home/, /var/lib/mysql/, and /web/sites/ are easy.
> 
> /usr/local/reports/A*.pdf will need wildcards or regexes in an
> options clause.
> 
> Note that backing up the mysql database files while the database
> server is running is useless - you need to shut down the database
> server, or dump the databases you're interested in to a file and
> back up that (or these) files.
> 
> Hope that gets you started,
> 
> Arno
> 
> 
> 
> 
> 
> 
> 
> ---