Re: [Bacula-users] bacula client

2007-08-01 Thread Francisco Rodrigo Cortinas Maseda
I use the following:
 
./configure --enable-smartalloc --sbindir=/opt/bacula/bin 
--sysconfdir=/opt/bacula/bin --with-pid-dir=/opt/bacula/bin/working 
--with-subsys-dir=/opt/bacula/bin/working 
--with-working-dir=/opt/bacula/working --enable-largefile --enable-client-only 
--enable-static-fd --with-scriptdir=/opt/bacula/bin
 
Of course, modify the dirs to suit your installation.
 
Regards.
 
 -Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of tanveer haider
Sent: Wednesday, 1 August 2007 7:38
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] bacula client



bacula version:2.0.3
OS:Fedora core 6
Mysql:Yes

I have installed the bacula server on my machine; now I want to install 
the bacula client on a Solaris 10 machine. According to the help docs I need to 
install bacula-fd on the client machine, but they do not describe how to make 
it. 
1 - May I compile it with some option in ./configure? If yes, kindly help: 
how can I do so? 
2 - If it is an independent software package, kindly identify which one I 
should use.

I tried to explore it from bacula.org - Downloads - all files but found 
nothing. 

-- 
regards ,
Tanveer Haider Baig






Re: [Bacula-users] Automatic labelling w/o autochanger doesn't

2007-08-01 Thread James Cort
John Drescher wrote:
 Instead, however, I get this:

 - and it'll sit there indefinitely waiting for me to mount a volume
 which it's created in the catalog but not labelled.
 
 Type mount from the console; the actual labeling will happen after the
 drive mounts and it detects it has a blank tape. I do not believe you
 can mount a blank tape.
Apologies for wasting time - it seems the tapes I got back from the bulk 
erasure service weren't very erased, and the tape drive had decided it 
did not wish to write to them.
-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Dimitrios
Based on my research, I found that backing up MySQL is quite inefficient.

On a small database, you can just dump the contents of MySQL into a file and 
then back up that file.

On a large database (several gigs), dumping to a file should be avoided; for 
example, on a hosting service the dump could fill your hosting space. In 
addition, it's time consuming and allows for database badness, where one table 
which has already been backed up changes while the current one is still locked. 
This can be avoided if you dump the database with a system-wide lock, but that 
means your web sites or applications will be offline for the duration of the 
backup process (and since we are dumping whole gigs of data, it can be very 
time consuming).

Running a replicated database is not ideal for hosting servers; again, that 
will double your hosting space and/or entire hosting service (colocation, etc).

Based on my research, it seems the 'best' solution for really big databases is 
to use the Binary Log (MySQL 4.1.3 or newer) and do incremental backups of the 
database. Thus, you do a full backup at first and then you only back up the 
Binary Log based on date ranges or snapshot points. Of course, this process is 
badly documented and I couldn't find any scripts that can help me do this.
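The approach would look roughly like this - an untested sketch; the paths and 
the binlog basename are assumptions, and log-bin must already be enabled in 
my.cnf:

# Weekly full dump; --flush-logs starts a fresh binary log at the dump point.
mysqldump --all-databases --flush-logs | gzip > /backup/full-$(date +%F).sql.gz

# Daily incremental: rotate the current binlog, then copy off the logs.
mysql -e 'FLUSH LOGS'
# The highest-numbered binlog was just opened by FLUSH LOGS and is still
# (nearly) empty, so copying it along with the closed ones is harmless.
cp /var/lib/mysql/mysql-bin.0* /backup/binlogs/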

I'd appreciate your thoughts on this.

Thank you.



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Rich
On 2007.08.01. 14:37, Dimitrios wrote:
 Based on my research, I found that backing up MySQL is quite inefficient.
 
 On a small database, you can just dump the contents of MySQL into a file and 
 then back up that file.
 
 On a large database (several gigs), dumping to a file should be avoided; for 
 example, on a hosting service the dump could fill your hosting space. In 
 addition, it's time consuming and allows for database badness, where one 
 table which has already been backed up changes while the current one is still 
 locked. This can be avoided if you dump the database with a system-wide lock, 
 but that means your web sites or applications will be offline for the 
 duration of the backup process (and since we are dumping whole gigs of data, 
 it can be very time consuming).

Using InnoDB, it should be possible to dump all tables in a single 
transaction.
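Something like this, as a minimal sketch (the output path is an assumption, 
and --single-transaction only gives a consistent snapshot for transactional 
tables):

mysqldump --single-transaction --all-databases > /backup/all-databases.sql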

 Running a replicated database is not ideal for hosting servers, again that 
 will double your hosting space and/or entire hosting service (colocation, 
 etc).
 
 Based on my research, it seems the 'best' solution for really big databases 
 is to use the Binary Log (MySQL 4.1.3 or newer) and do incremental backups of 
 the database. Thus, you do a full backup at first and then you only back up 
 the Binary Log based on date ranges or snapshot points. Of course, this 
 process is badly documented and I couldn't find any scripts that can help me 
 do this.

Binary logs also tend to eat up a LOT of space.

Another method might be feeding mysqldump output to a pipe, where the data is 
picked up directly by the bacula client (I think this was implemented 
somewhere around bacula 2.0).

It has the advantage of using no additional disk space on the client, but can 
be tricky to set up and restore correctly.
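Roughly, the FileSet marks a FIFO to be read - a sketch only, with assumed 
names and paths (see the readfifo notes in the FileSet documentation):

FileSet {
  Name = "mysql-fifo"
  Include {
    Options {
      signature = MD5
      readfifo = yes                  # read backup data from the FIFO
    }
    File = /var/lib/bacula/mysql.fifo
  }
}

The Job would then use something like ClientRunBeforeJob = 
"/etc/bacula/mysql-to-fifo.sh", where that (hypothetical) script creates the 
FIFO if needed and backgrounds mysqldump --all-databases > 
/var/lib/bacula/mysql.fifo so the FD can read from it.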

 I'd appreciate your thoughts on this.
 
 Thank you.
-- 
  Rich



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread James Harper
The way the backup should work is that once the backup begins, all
database operations should be written to the 'log' file so that the main
database file is consistent. Once that backup is complete, all logs can
be written back to the database file. There is some performance penalty
obviously, but at least you get a consistent backup.

Not sure if MySQL allows for this... under Windows you'll get a
consistent backup because of VSS, but only where consistent = no data
changes while you are backing up... a restore would be as if the power
had been yanked out of the machine at the instant of backing up, unless
MySQL is actually VSS aware?

James

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:bacula-users-
 [EMAIL PROTECTED] On Behalf Of Dimitrios
 Sent: Wednesday, 1 August 2007 21:37
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] backing up MySQL databases
 
 Based on my research, I found that backing up MySQL is quite inefficient.
 
 On a small database, you can just dump the contents of MySQL into a file
 and then back up that file.
 
 On a large database (several gigs), dumping to a file should be avoided;
 for example, on a hosting service the dump could fill your hosting space.
 In addition, it's time consuming and allows for database badness, where
 one table which has already been backed up changes while the current one
 is still locked. This can be avoided if you dump the database with a
 system-wide lock, but that means your web sites or applications will be
 offline for the duration of the backup process (and since we are dumping
 whole gigs of data, it can be very time consuming).
 
 Running a replicated database is not ideal for hosting servers; again,
 that will double your hosting space and/or entire hosting service
 (colocation, etc).
 
 Based on my research, it seems the 'best' solution for really big
 databases is to use the Binary Log (MySQL 4.1.3 or newer) and do
 incremental backups of the database. Thus, you do a full backup at first
 and then you only back up the Binary Log based on date ranges or snapshot
 points. Of course, this process is badly documented and I couldn't find
 any scripts that can help me do this.
 
 I'd appreciate your thoughts on this.
 
 Thank you.
 





[Bacula-users] FD - SD Error

2007-08-01 Thread Janco van der Merwe
Hi Guys,

I have the following:

30-Jul 02:01 subver-dir: Start Backup JobId 57,
Job=sqlengine.2007-07-30_02.00.01
30-Jul 02:01 subver-sd: Volume Daily_Backup-0002 previously written,
moving to end of data.
30-Jul 02:08 sqlengine-fd: Generate VSS snapshots. Driver=VSS Win
2003, Drive(s)=C
30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01 Fatal
error: ../../filed/backup.c:873 Network send error to SD.
ERR=Input/output error
30-Jul 04:07 subver-sd: sqlengine.2007-07-30_02.00.01 Fatal error:
append.c:259 Network error on data channel. ERR=Connection reset by peer
30-Jul 04:07 subver-sd: Job write elapsed time = 02:05:26, Transfer rate
= 840.4 K bytes/second
30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01
Error: ../../lib/bnet.c:406 Write error sending len to Storage
daemon:subver:9103: ERR=Input/output error

I have my suspicions about what this could be, but I need expert opinions
before I break something.

The setup: the machine sqlengine is in a DMZ, and a Linux firewall
using iptables is supposed to NAT everything over ports 9101-9103 to the
internal LAN Bacula server address, subver, but it seems that something
isn't working as it should. I can use bconsole from sqlengine to
subver, if that is anything to go by.

Any suggestions? 





Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Rich
On 2007.08.01. 14:51, Rich wrote:
 On 2007.08.01. 14:37, Dimitrios wrote:
 Based on my research, I found that backing up MySQL is quite 
 inefficient.

 On a small database, you can just dump the contents of MySQL into a file and 
 then back up that file.

 On a large database (several gigs), dumping to a file should be avoided; for 
 example, on a hosting service the dump could fill your hosting space. In 
 addition, it's time consuming and allows for database badness, where one 
 table which has already been backed up changes while the current one is still 
 locked. This can be avoided if you dump the database with a system-wide lock, 
 but that means your web sites or applications will be offline for the 
 duration of the backup process (and since we are dumping whole gigs of data, 
 it can be very time consuming).
 
 Using InnoDB, it should be possible to dump all tables in a single 
 transaction.
 
 Running a replicated database is not ideal for hosting servers; again, that 
 will double your hosting space and/or entire hosting service (colocation, 
 etc).

 Based on my research, it seems the 'best' solution for really big databases 
 is to use the Binary Log (MySQL 4.1.3 or newer) and do incremental backups 
 of the database. Thus, you do a full backup at first and then you only 
 back up the Binary Log based on date ranges or snapshot points. Of course, 
 this process is badly documented and I couldn't find any scripts that can 
 help me do this.
 
 Binary logs also tend to eat up a LOT of space.
 
 Another method might be feeding mysqldump output to a pipe, where the data is 
 picked up directly by the bacula client (I think this was implemented 
 somewhere around bacula 2.0).
 
 It has the advantage of using no additional disk space on the client, but can 
 be tricky to set up and restore correctly.

Oh, another method I forgot to mention (mostly because it would not be 
possible in most hosted environments :) )

You could lock all tables in MySQL, flush all data, then create an LVM 
snapshot - do this in a RunBeforeJob. Then bacula could back up the MySQL 
files, and a RunAfterJob could remove the snapshot.
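Roughly like this - an untested sketch, where the volume group, snapshot size, 
mount point and credentials are all illustrative (use ClientRunBeforeJob so it 
runs on the database host):

#!/bin/sh
# ClientRunBeforeJob sketch: hold the global read lock while snapshotting;
# the lock is dropped by UNLOCK TABLES before the client session exits.
mysql -u root -pSECRET <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 2G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF
# Expose the frozen copy for the backup job to pick up.
mount /dev/vg0/mysql-snap /mnt/mysql-snap

The matching RunAfterJob would umount /mnt/mysql-snap and then lvremove -f 
/dev/vg0/mysql-snap.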

I still prefer dumps; they are more portable, easier to restore, and 
compress better.

 I'd appreciate your thoughts on this.

 Thank you.
-- 
  Rich



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Dimitrios
On Wed, 01 Aug 2007 15:20:50 +0300 Rich [EMAIL PROTECTED] wrote:

 I still prefer dumps; they are more portable, easier to restore, and 
 compress better.

But what about incremental backups? As far as I know, you'd have to back up 
the entire database over and over again, and for a very large database that's 
something we'd like to avoid.



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Dimitrios
On Wed, 01 Aug 2007 14:51:43 +0300 Rich [EMAIL PROTECTED] wrote:

 Using InnoDB, it should be possible to dump all tables in a single 
 transaction.

Indeed; unfortunately our MySQL has a mix of InnoDB and MyISAM tables.

 
 Another method might be feeding mysqldump output to a pipe, where the data is 
 picked up directly by the bacula client (I think this was implemented 
 somewhere around bacula 2.0).
 It has the advantage of using no additional disk space on the client, but can 
 be tricky to set up and restore correctly.

Wow, that's possible? It's definitely interesting and I'll try googling...




Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Rich
On 2007.08.01. 15:33, Dimitrios wrote:
 On Wed, 01 Aug 2007 15:20:50 +0300 Rich [EMAIL PROTECTED] wrote:
 
 I still prefer dumps; they are more portable, easier to restore, and 
 compress better.
 
 But what about incremental backups? As far as I know, you'd have to back up 
 the entire database over and over again, and for a very large database 
 that's something we'd like to avoid.

Yes, that would be problematic. Maybe some hackish method of taking a diff 
of two dumps would work, but that would require even more disk space on 
the client.

If you really need incremental backups, I don't know of any other method 
than using binlogs.
-- 
  Rich



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Drew Bentley
On 8/1/07, Rich [EMAIL PROTECTED] wrote:
 On 2007.08.01. 15:33, Dimitrios wrote:
  On Wed, 01 Aug 2007 15:20:50 +0300 Rich [EMAIL PROTECTED] wrote:
 
  I still prefer dumps; they are more portable, easier to restore, and
  compress better.
 
  But what about incremental backups? As far as I know, you'd have to back up
  the entire database over and over again, and for a very large database
  that's something we'd like to avoid.

 Yes, that would be problematic. Maybe some hackish method of taking a diff
 of two dumps would work, but that would require even more disk space on
 the client.

 If you really need incremental backups, I don't know of any other method
 than using binlogs.
 --
   Rich


Yes, in order to do incrementals, you'll need binlogs enabled and
you'll need to back these up. But also consider that they can take
a huge amount of space, so log rotation is necessary, along with a
possible performance hit, depending on the database demands,
types of transactions, etc.
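For the rotation part, something as simple as this can work after a verified 
full backup - a sketch, with an illustrative cutoff date (PURGE BINARY LOGS is 
the 4.1/5.0 syntax; older servers spell it PURGE MASTER LOGS):

# Drop binary logs older than the last verified full backup:
mysql -e "PURGE BINARY LOGS BEFORE '2007-07-25 00:00:00'"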

Probably one of the better tools I've seen is mysql-zrm from
zmanda.com to help manage these types of backups for MySQL.

-drew



Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Rich
On 2007.08.01. 15:31, Dimitrios wrote:
 On Wed, 01 Aug 2007 14:51:43 +0300 Rich [EMAIL PROTECTED] wrote:
...
 Another method might be feeding mysqldump output to a pipe, where the data is 
 picked up directly by the bacula client (I think this was implemented 
 somewhere around bacula 2.0).
 It has the advantage of using no additional disk space on the client, but can 
 be tricky to set up and restore correctly.
 
 Wow, that's possible? It's definitely interesting and I'll try googling...

Just look into http://www.bacula.org/rel-manual/FileSet_Resource.html - the 
readfifo parameter :)
-- 
  Rich



Re: [Bacula-users] FD - SD Error

2007-08-01 Thread Drew Bentley
On 8/1/07, Janco van der Merwe [EMAIL PROTECTED] wrote:
 Hi Guys,

 I have the following:

 30-Jul 02:01 subver-dir: Start Backup JobId 57,
 Job=sqlengine.2007-07-30_02.00.01
 30-Jul 02:01 subver-sd: Volume Daily_Backup-0002 previously written,
 moving to end of data.
 30-Jul 02:08 sqlengine-fd: Generate VSS snapshots. Driver=VSS Win
 2003, Drive(s)=C
 30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01 Fatal
 error: ../../filed/backup.c:873 Network send error to SD.
 ERR=Input/output error
 30-Jul 04:07 subver-sd: sqlengine.2007-07-30_02.00.01 Fatal error:
 append.c:259 Network error on data channel. ERR=Connection reset by peer
 30-Jul 04:07 subver-sd: Job write elapsed time = 02:05:26, Transfer rate
 = 840.4 K bytes/second
 30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01
 Error: ../../lib/bnet.c:406 Write error sending len to Storage
 daemon:subver:9103: ERR=Input/output error

 I have my suspicions about what this could be, but I need expert opinions
 before I break something.

 The setup: the machine sqlengine is in a DMZ, and a Linux firewall
 using iptables is supposed to NAT everything over ports 9101-9103 to the
 internal LAN Bacula server address, subver, but it seems that something
 isn't working as it should. I can use bconsole from sqlengine to
 subver, if that is anything to go by.

 Any suggestions?



This seems to indicate a possible firewall issue, or that the FD (client)
can't reach the SD on its specific port. Can you telnet from the client to
the SD on port 9103? That should indicate whether it is blocked or not.
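That is, from the DMZ box (hostname as given in the original mail):

telnet subver 9103
# "Connected to subver" means the SD port is reachable; a timeout or
# "Connection refused" points at the firewall/NAT rules.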

-drew



Re: [Bacula-users] Automatic labelling w/o autochanger doesn't

2007-08-01 Thread John Drescher
On 8/1/07, James Cort [EMAIL PROTECTED] wrote:
 John Drescher wrote:
  Instead, however, I get this:
 
  - and it'll sit there indefinitely waiting for me to mount a volume
  which it's created in the catalog but not labelled.
 
   Type mount from the console; the actual labeling will happen after the
   drive mounts and it detects it has a blank tape. I do not believe you
   can mount a blank tape.
 Apologies for wasting time - it seems the tapes I got back from the bulk
 erasure service weren't very erased, and the tape drive had decided it
 did not wish to write to them.
 --
Something like

mt -f /dev/nst0 weof && mt -f /dev/st0 weof

should erase the tapes enough for bacula to use.



Re: [Bacula-users] FD - SD Error

2007-08-01 Thread Janco van der Merwe
I can telnet to all the bacula ports from the sqlengine machine, and I
ran a tcpdump to monitor the packets from the sqlengine machine to the
bacula server; all is OK.

I don't think it's the firewall anymore. I realized with a shock that
the SQL backup is over 30GB, and I didn't configure my Bacula with large
file support during the install, so I think that is the problem. Do you
know how I can enable large file support without having to recompile?
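(For what it's worth, large file support is a compile-time option - the 
configure line quoted earlier in this digest shows the flag - so a rebuild 
would look roughly like this, assuming a 2.0.3 source tree and whatever 
options the original build used:)

cd bacula-2.0.3
./configure --enable-largefile   # plus the original configure options
make && make install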

On Wed, 2007-08-01 at 08:13 -0500, Drew Bentley wrote:
 On 8/1/07, Janco van der Merwe [EMAIL PROTECTED] wrote:
  Hi Guys,
 
  I have the following:
 
  30-Jul 02:01 subver-dir: Start Backup JobId 57,
  Job=sqlengine.2007-07-30_02.00.01
  30-Jul 02:01 subver-sd: Volume Daily_Backup-0002 previously written,
  moving to end of data.
  30-Jul 02:08 sqlengine-fd: Generate VSS snapshots. Driver=VSS Win
  2003, Drive(s)=C
  30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01 Fatal
  error: ../../filed/backup.c:873 Network send error to SD.
  ERR=Input/output error
  30-Jul 04:07 subver-sd: sqlengine.2007-07-30_02.00.01 Fatal error:
  append.c:259 Network error on data channel. ERR=Connection reset by peer
  30-Jul 04:07 subver-sd: Job write elapsed time = 02:05:26, Transfer rate
  = 840.4 K bytes/second
  30-Jul 04:12 sqlengine-fd: sqlengine.2007-07-30_02.00.01
  Error: ../../lib/bnet.c:406 Write error sending len to Storage
  daemon:subver:9103: ERR=Input/output error
 
  I have my suspicions about what this could be, but I need expert opinions
  before I break something.
 
  The setup: the machine sqlengine is in a DMZ, and a Linux firewall
  using iptables is supposed to NAT everything over ports 9101-9103 to the
  internal LAN Bacula server address, subver, but it seems that something
  isn't working as it should. I can use bconsole from sqlengine to
  subver, if that is anything to go by.
 
  Any suggestions?
 
 
 
 This seems to indicate a possible firewall issue, or that the FD (client)
 can't reach the SD on its specific port. Can you telnet from the client to
 the SD on port 9103? That should indicate whether it is blocked or not.
 
 -drew




Re: [Bacula-users] bacula client

2007-08-01 Thread Ryan Novosielski
For Solaris, I recommend using all of the binaries from BlastWave. They
are the most current version, and they include Solaris 10 SMF repository
scripts.
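For example - assuming the Blastwave pkg-get tool is already installed under 
/opt/csw, and that the package is simply named bacula (an assumption):

/opt/csw/bin/pkg-get -i bacula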

Francisco Rodrigo Cortinas Maseda wrote:
 I use the following:
  
 ./configure --enable-smartalloc --sbindir=/opt/bacula/bin
 --sysconfdir=/opt/bacula/bin --with-pid-dir=/opt/bacula/bin/working
 --with-subsys-dir=/opt/bacula/bin/working
 --with-working-dir=/opt/bacula/working --enable-largefile
 --enable-client-only --enable-static-fd --with-scriptdir=/opt/bacula/bin
  
 Of course, modify the dirs to suit your installation.
  
 Regards.
  
  -Original Message-
 *From:* [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] *On behalf of
 *tanveer haider
 *Sent:* Wednesday, 1 August 2007 7:38
 *To:* bacula-users@lists.sourceforge.net
 *Subject:* [Bacula-users] bacula client
 
 bacula version:2.0.3
 OS:Fedora core 6
 Mysql:Yes
 
 I have installed the bacula server on my machine; now I want to install
 the bacula client on a Solaris 10 machine. According to the help docs I
 need to install bacula-fd on the client machine, but they do not
 describe how to make it.
 1 - May I compile it with some option in ./configure? If yes, kindly
 help: how can I do so?
 2 - If it is an independent software package, kindly identify which one
 I should use.
 
 I tried to explore it from bacula.org - Downloads - all files but found
 nothing.
 
 -- 
 regards ,
 Tanveer Haider Baig
 


--
  _  _ _  _ ___  _  _  _
 |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer II
 |$| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
 \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630

[Bacula-users] space issues

2007-08-01 Thread Megan Kispert

Morning,

I'm running bacula-2.1.26 on a centos 4.5 server.  I have my backups going 
to disk.  One of my disks ran out of space due to a failure on my part to 
exclude a directory that shouldn't have been backed up.  I have two 
volumes on this disk.  I tried to delete jobs for this particular 
problem client, and I also used prune to try to clean up the volumes, 
files, and jobs, but I cannot get the actual used disk space to budge. 
Is there a way to delete files from the volume?

-megan


++
| Megan Kispert
| Code: 423
| GSFC: 301-614-5410
| ADNET: 301.352.4632
| [EMAIL PROTECTED]
++




Re: [Bacula-users] space issues

2007-08-01 Thread Robert LeBlanc
Not without losing all your backups. With disk, it is best to set a
volume limit so that bacula will create multiple backup files. These will
look like tapes, and bacula will be able to prune and recycle them,
freeing up disk space the size of the backup file. You may be able to
use bcopy to extract the backup into another set of files, but I'm not
sure, and it would require more disk space.
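For example, something along these lines in the Pool resource of 
bacula-dir.conf - a sketch only, with illustrative size and retention values:

Pool {
  Name = Disk-Pool
  Pool Type = Backup
  Maximum Volume Bytes = 5G     # start a new volume file after ~5 GB
  Volume Retention = 30 days    # volumes become recyclable after this
  AutoPrune = yes
  Recycle = yes
  LabelFormat = "Disk-"         # let bacula label new file volumes itself
}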

Robert LeBlanc

College of Life Sciences Computer Support

Brigham Young University

[EMAIL PROTECTED]

(801)422-1882


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Megan
Kispert
Sent: Wednesday, August 01, 2007 7:55 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] space issues


Morning,

I'm running bacula-2.1.26 on a centos 4.5 server.  I have my backups
going 
to disk.  One of my disks ran out of space due to a failure on my part
to 
exclude a directory that shouldn't have been backed up.  I have two 
volumes on this disk.  I tried to delete jobs for this particular 
problem client, and I also used prune to try to clean up the volumes, 
files, and jobs, but I cannot get the actual used disk space to budge. 
Is there a way to delete files from the volume?

-megan


++
| Megan Kispert
| Code: 423
| GSFC: 301-614-5410
| ADNET: 301.352.4632
| [EMAIL PROTECTED]
++





[Bacula-users] volumes not reducing in size

2007-08-01 Thread admin
I have an ongoing problem where my backup jobs fill my hard drive to 100%.
Many times I have gone in and pruned jobs and files, but the volume size
stays the same, and no new jobs will run. Is there a fix for this?



Re: [Bacula-users] bscan from more then one DVD volume fails -- Bug?

2007-08-01 Thread Sascha Wilde
For those interested:
As I did not get any response on the mail below I reported the
issue now in the Bacula Bug Tracker, it got the ID 912.

cheers
sascha

Sascha Wilde [EMAIL PROTECTED] wrote:
 I'm using bacula 2.0.3 with DVD-Rs as backup media.
 Now I have the problem that I need to restore files from a fileset
 where the file information has been (partly[0]) pruned from the
 database (too short a File Retention...).

 So I thought bscan is the way to go.  The initial full backup spans
 two DVDs, so I did

   bscan -V Backup0009\|Backup0010\|Backup0011 -v -s -m -c \
   /etc/bacula/bacula-sd.conf /dev/dvd

 Backup0009 and Backup0010 are holding the initial full backup,
 Backup0011 is holding some later incremental backups.

 The first Volume is read without any problems -- then bscan asks to
 insert Backup0010.  But after inserting the DVD and pressing RET, an
 error occurs which states that the volume is not empty (sic!) and can
 not be written.  This sounds like a bug to me, as the volume should
 be read-only and of course isn't supposed to be empty.

 Is this an known issue?  Should I report it?

 And most important: how can I get my data back? 

 Unfortunately the files I want to restore are on the second part of
 the full backup job, as I still don't get them after successfully
 reading the first volume.

 Any tips and hints are highly appreciated,
 cheers
 sascha

 [0] One full backup and several incremental backups based on it -- the
 file information for the full and some of the old incremental
 backups are missing.

-- 
Sascha Wilde

There is no reason why anyone would want a computer in their home
Ken Olson, DEC, 1977



[Bacula-users] How to run many jobs simultaneosly

2007-08-01 Thread Suporte T.I - Grif Rótulos

Hi!

   I've been trying to run many jobs simultaneously, but it is not working.

   My bacula config files are attached below.
   Is it possible to run many jobs simultaneously?

Thanks.
#
Director {
  Name = bacula-dir
  DIRport = 9101
  QueryFile = /usr/local/share/bacula/query.sql
  WorkingDirectory = /var/db/bacula
  PidDirectory = /var/run
  Maximum Concurrent Jobs = 4
  Password = Rr5xwEAr5CV5k2qnVc0aFhwPiRfuWtQ2WawPri2CBxPWON2Mn9  # Console password
  Messages = Daemon
}

# Schedules

#
# Bacula
#
Schedule {
  Name = Diario-bacula
  Run = Level=Full Pool=Daily_bacula Mon-Sat at 04:00am
}
#
# Bacula LTO-3
#
Schedule {
  Name = Diario-bacula_lto
  Run = Level=Full Pool=Daily_bacula_lto Mon-Fri at 05:00am
}
#
# Bizon
#
Schedule {
  Name = Semanal-bizon
  Run = Level=Full Pool=Weekly_bizon Thu at 03:40am
}
#
# Migrations
#
Schedule {
  Name = Diario-migrations
  Run = Level=Full Pool=Daily_migrations Mon-Sat at 00:50am
}
#
# Rayden
#
Schedule {
  Name = Semanal-rayden
  Run = Level=Full Pool=Weekly_rayden Thu at 02:30am
}
#
# Sagati
#
Schedule {
  Name = Semanal-sagati
  Run = Level=Full Pool=Weekly_sagati Thu at 12:05am
}
#
# Sonic
#
Schedule {
  Name = Diario-sonic
  Run = Level=Full Pool=Daily_sonic Mon-Sat at 03:20am
}
#
# Srpom00
#
Schedule {
  Name = Diario-srpom00
  Run = Level=Full Pool=Daily_srpom00 Mon-Sat at 12:10am
}
#
# Srpom03
#
Schedule {
  Name = Diario-srpom03
  Run = Level=Full Pool=Daily_srpom03 Mon-Sat at 12:30am
}
#
# Srpom05
#
Schedule {
  Name = Diario-srpom05
  Run = Level=Full Pool=Daily_srpom05 Mon-Sat at 11:00pm
}
#
# Svpom02
#
Schedule {
  Name = Diario-svpom02
  Run = Level=Full Pool=Daily_svpom02 Mon-Sat at 11:30pm
}
#
# Svpombi
#
Schedule {
  Name = Diario-svpombi
  Run = Level=Full Pool=Daily_svpombi Mon-Sat at 03:00am
}
#
# Webpi
#
Schedule {
  Name = Diario-webpi
  Run = Level=Full Pool=Daily_webpi Mon-Sat at 03:20am
}
#---
#
# PICAPAU 
#
Schedule {
  Name = Diario-picapau
  Run = Level=Full Pool=Daily_picapau Mon-Sat at 03:30am
}
#---


# Jobs

#
# Bacula
#
Job {
  Name = Bacula
  Type = Backup
  Level = Full
  Client=bacula-fd
  FileSet=bacula
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_bacula
  Schedule = Diario-bacula
  Write Bootstrap = /var/db/bacula/Bacula.bsr
  Priority = 10
}
#
# Bacula LTO-3
#
Job {
  Name = Bacula_LTO
  Type = Backup
  Level = Full
  Client=bacula-fd
  FileSet=bacula_lto
  Messages = Standard
  Storage = LTO-3
  Pool = Daily_bacula_lto
  Schedule = Diario-bacula_lto
  Write Bootstrap = /var/db/bacula/Bacula_lto.bsr
  Priority = 10
}
#
# Bizon
#
Job {
  Name = Bizon
  Type = Backup
  Level = Full
  Client=bizon-fd
  FileSet=bizon
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Weekly_bizon
  Schedule = Semanal-bizon
  Write Bootstrap = /var/db/bacula/Bizon.bsr
  Priority = 10
}
#
# Migrations
#
Job {
  Name = Migrations
  Type = Backup
  Level = Full
  Client=migrations-fd
  FileSet=migrations
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_migrations
  Schedule = Diario-migrations
  Write Bootstrap = /var/db/bacula/Migrations.bsr
  Priority = 10
}
#
# Rayden
#
Job {
  Name = Rayden
  Type = Backup
  Level = Full
  Client=rayden-fd
  FileSet=rayden
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Weekly_rayden
  Schedule = Semanal-rayden
  Write Bootstrap = /var/db/bacula/Rayden.bsr
  Priority = 10
}
#
# Sagati
#
Job {
  Name = Sagati
  Type = Backup
  Level = Full
  Client=sagati-fd
  FileSet=sagati
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Weekly_sagati
  Schedule = Semanal-sagati
  Write Bootstrap = /var/db/bacula/Sagati.bsr
  Priority = 10
}
#
# Sonic
#
Job {
  Name = Sonic
  Type = Backup
  Level = Full
  Client=sonic-fd
  FileSet=sonic
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_sonic
  Schedule = Diario-sonic
  Write Bootstrap = /var/db/bacula/Sonic.bsr
  Priority = 10
}
#
# Srpom00
# 
Job {
  Name = Srpom00
  Type = Backup
  Level = Full
  Client=srpom00-fd
  FileSet=srpom00
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_srpom00
  Schedule = Diario-srpom00
  Write Bootstrap = /var/db/bacula/Srpom00.bsr
  Priority = 10
}
#
# Srpom03
#
Job {
  Name = Srpom03
  Type = Backup
  Level = Full
  Client=srpom03-fd
  FileSet=srpom03
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_srpom03
  Schedule = Diario-srpom03
  Write Bootstrap = /var/db/bacula/Srpom03.bsr
  Priority = 10
}
#
# Srpom05
#
Job {
  Name = Srpom05
  Type = Backup
  Level = Full
  Client=srpom05-fd
  FileSet=srpom05
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_srpom05
  Schedule = Diario-srpom05
  Write Bootstrap = /var/db/bacula/Srpom05.bsr
  Priority = 10
}
#
# Svpom02
#
Job {
  Name = Svpom02
  Type = Backup
  Level = Full
  Client=svpom02-fd
  FileSet=svpom02
  Messages = Standard
  Storage = FILE-STORAGE
  Pool = Daily_svpom02
  Schedule = Diario-svpom02
  Write Bootstrap = /var/db/bacula/Svpom02.bsr
  Priority = 10
}
#
# Svpombi
#
Job {
  

Re: [Bacula-users] backing up MySQL databases

2007-08-01 Thread Rich
On 2007.08.01. 16:16, Drew Bentley wrote:
...
 If you really need incremental backups, I don't know of any other method
 than using binlogs.
 --
   Rich

  Yes, in order to do incrementals, you'll need binlogs enabled and
  you'll need to back these up. But also consider that they can take
  a huge amount of space, so log rotation is necessary, along with a
  possible performance hit, depending on the database demands,
  types of transactions, etc.

Looking at the bacula projects page, I noticed one item that might allow 
incremental backups of db dumps, if ever implemented:

Item:   8   Directive/mode to backup only file changes, not entire file
...
 -drew
-- 
  Rich



Re: [Bacula-users] space issues

2007-08-01 Thread Rich
On 2007.08.01. 17:10, Robert LeBlanc wrote:
 Not without losing all your backups. With disk, it is best to set a
 volume limit so that bacula will create multiple backup files. These will
 look like tapes, and bacula will be able to prune and recycle them,
 freeing up disk space the size of the backup file. You may be able to
 use bcopy to extract the backup into another set of files, but I'm not
 sure, and it would require more disk space.

Or use a single file for each backup, as I am doing. That makes it much 
easier to synchronise them, too.

Deletion would still have to be performed manually, at least for now - I 
hope project 5 gets some love :)

Item:   5   Deletion of Disk-Based Bacula Volumes

So if bacula were able to delete expired disk volumes, that would take 
some burden off admins (and responsibility, as currently you must 
determine which is the first full volume before the desired deletion date 
for each job).

 Robert LeBlanc
 
 College of Life Sciences Computer Support
 
 Brigham Young University
 
 [EMAIL PROTECTED]
 
 (801)422-1882
...
-- 
  Rich



Re: [Bacula-users] How to run many jobs simultaneosly

2007-08-01 Thread John Drescher
You also need a Maximum Concurrent Jobs setting in the Storage sections, and
probably the Client sections, of bacula-dir.conf.
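For example, with the resource names from the posted config (the value 4 
matches the Director setting there):

Storage {
  Name = FILE-STORAGE
  # ... existing Address/Password/Device settings ...
  Maximum Concurrent Jobs = 4
}

Client {
  Name = bacula-fd
  # ... existing Address/Password settings ...
  Maximum Concurrent Jobs = 4
}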

John



Re: [Bacula-users] volumes not reducing in size

2007-08-01 Thread Rich
On 2007.08.01. 17:22, [EMAIL PROTECTED] wrote:
 I have an ongoing problem where my backup jobs fill my hard drive to 100%.
 Many times I have gone in and pruned jobs and files, but the volume size
 stays the same, and no new jobs will run. Is there a fix for this?

This question is asked quite often, and actually another thread is going 
in parallel right now :)

Unfortunately, the SF archive seems to be down since Jul 27, so if you are 
subscribed, check out the thread space issues
-- 
  Rich



Re: [Bacula-users] space issues

2007-08-01 Thread Megan Kispert

Thank you everyone.



++
| Megan Kispert
| Code: 423
| GSFC: 301-614-5410
| ADNET: 301.352.4632
| [EMAIL PROTECTED]
++


On Wed, 1 Aug 2007, Rich wrote:

 On 2007.08.01. 17:10, Robert LeBlanc wrote:
  Not without losing all your backups. With disk, it is best to set a
  volume limit so that bacula will create multiple backup files. These will
  look like tapes, and bacula will be able to prune and recycle them,
  freeing up disk space the size of the backup file. You may be able to
  use bcopy to extract the backup into another set of files, but I'm not
  sure, and it would require more disk space.

 Or use a single file for each backup, as I am doing. That makes it much
 easier to synchronise them, too.

 Deletion would still have to be performed manually, at least for now - I
 hope project 5 gets some love :)

 Item:   5   Deletion of Disk-Based Bacula Volumes

 So if bacula were able to delete expired disk volumes, that would take
 some burden off admins (and responsibility, as currently you must
 determine which is the first full volume before the desired deletion date
 for each job).

 Robert LeBlanc

 College of Life Sciences Computer Support

 Brigham Young University

 [EMAIL PROTECTED]

 (801)422-1882
 ...
 --
  Rich



[Bacula-users] Where is the MediaID increment in MySQL

2007-08-01 Thread Jason King
I've had to make some adjustments during my trials of bacula, and my 
MediaID numbering sequence is way off. My current LAST tape is labeled 
Tape-10 with a MediaID of 10. If I put in a fresh new tape and label 
it Tape-11, the MediaID would probably be 41 or 42 or so, because 
of the deleting and relabeling of tapes when I was first trying out 
bacula. Now, in order to have the MediaID be in line with the rest of the 
media, I have to go into MySQL and manually update that record. This is 
not a big deal at all. It isn't as if the thing will not work if the 
MediaID is out of sequence, nor is it hard to make the change. It's just 
for the logistics of keeping things straight in my own mind. But I 
cannot find where bacula gets the MediaID from. There has to be some 
increment field somewhere, but I cannot figure out where it is. Does 
anyone else know where I can find the MediaID increment?

Jason



Re: [Bacula-users] Where is the MediaID increment in MySQL

2007-08-01 Thread Dan Langille
On 1 Aug 2007 at 13:42, Jason King wrote:

 I've had to make some adjustments during my trials of bacula, and my 
 MediaID numbering sequence is way off. My current LAST tape is labeled 
 Tape-10 with a MediaID of 10. If I put in a fresh new tape and label 
 it Tape-11, the MediaID would probably be 41 or 42 or so, because 
 of the deleting and relabeling of tapes when I was first trying out 
 bacula. Now, in order to have the MediaID be in line with the rest of the 
 media, I have to go into MySQL and manually update that record. This is 
 not a big deal at all. It isn't as if the thing will not work if the 
 MediaID is out of sequence, nor is it hard to make the change. It's just 
 for the logistics of keeping things straight in my own mind. But I 
 cannot find where bacula gets the MediaID from. There has to be some 
 increment field somewhere, but I cannot figure out where it is. Does 
 anyone else know where I can find the MediaID increment?

Ignore MediaID.  Let it be off.  Let it not match.  It does not 
matter whatsoever.  You have much more important things to do.

Take a deep breath and let it go.

If you are truly obsessed, look up auto-increment in the MySQL 
documentation. 
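(If you do go there anyway: the counter is just the AUTO_INCREMENT value on 
the Media table of the catalog. A one-liner sketch, assuming the catalog 
database is named bacula - take a catalog dump first:)

mysql bacula -e 'ALTER TABLE Media AUTO_INCREMENT = 11;'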

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





Re: [Bacula-users] Bacula Status report

2007-08-01 Thread Frank Sweetser
Kern Sibbald wrote:

 I would appreciate if beta testers would retest the current SVN.  Thanks.

Other than weird-files2 (a known glitch due to a cp deficiency), it passes
everything fine on Mac OS 10.4.  It also passes everything on my Fedora 7 test
system.

-- 
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Senior Network Engineer   |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC
