[Bacula-users] long-term archival of tapes -- growth of the catalog

2011-09-21 Thread Gavin McCullagh
Hi,

we've been happily using Bacula for a few years now, with a couple of big
disk arrays as the storage devices/media.  We use something along the lines
of what the manual documents for fully automated disk-based backups.  This
has worked well and is quick and convenient for restores.  It's expensive
in hard disks, though, and none of the data is offline, so a particularly
nasty online incident might take out the backups as well as the live data.

Our plan now is to add a Dell LTO5 drive and start running a COPY job once
per month.  Almost everything gets a monthly Full backup to disk, so we'll
then copy those jobs to tape and move them off-site.  A couple of tapes
will be retired from the pool each year and stored more or less
indefinitely.

Do other people do this?  If so, how do you deal with pruning?  Do you just
let the database grow over time or do you let the data get pruned and use
bscan or other low-level volume tools to read them if necessary?  Is there
another approach I'm missing?
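For context, if the catalog records do get pruned, bscan can re-create them
from the volume itself.  A rough sketch only -- the device name and volume
label below are made up, and the options should be checked against your
Bacula version:

```shell
# Sketch: rebuild catalog records for an archived tape whose jobs have
# been pruned.  "LTO5-drive" and "ArchiveTape-0001" are hypothetical.
#   -s  store file records in the catalog
#   -m  update media (volume) records
#   -v  verbose
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V ArchiveTape-0001 LTO5-drive
```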

Gavin


--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] ld: warning: symbol `plugin_list' has differing sizes:

2011-09-21 Thread Martin Simmons
> On Tue, 20 Sep 2011 15:49:32 -0400, Kim Culhan said:
> 
> On Sunday, September 4, 2011 11:31 am, Ren Sato wrote:
> 
> > I am receiving the following ld warnings on make:
> 
> > ld: warning: symbol `plugin_list' has differing sizes:
> >(file /export/home/philip/bacula-5.0.3/src/lib/.libs/libbac.so 
> > value=0x8
> >  file /opt/local/mysql/lib/libmysqlclient_r.so value=0x18);
> >/export/home/philip/bacula-5.0.3/src/lib/.libs/libbac.so definition 
> > taken
> [similar warnings omitted]
> 
> > uname -a
> > SunOS godzilla 5.10 Generic_142910-17 i86pc i386 i86pc
> 
> Not seeing those warnings here

Which version of mysql are you using?  I think the symbol plugin_list is new
in 5.5 (and not present in 5.1).

__Martin



[Bacula-users] Restore performance

2011-09-21 Thread Erik P. Olsen
I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to be OK 
except restores which are incredibly slow. How can I debug it to see what's 
wrong?

-- 
Erik



Re: [Bacula-users] Restore performance

2011-09-21 Thread Gavin McCullagh
On Wed, 21 Sep 2011, Erik P. Olsen wrote:

> I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to be 
> OK 
> except restores which are incredibly slow. How can I debug it to see what's 
> wrong?

Start off by telling us what part of the restore process is slow:

 - building the file tree for selection
 - the actual restore of files to disk
 - something else

Gavin




Re: [Bacula-users] Restore performance

2011-09-21 Thread Gavin McCullagh
Hi,

On Wed, 21 Sep 2011, Gavin McCullagh wrote:

> On Wed, 21 Sep 2011, Erik P. Olsen wrote:
> 
> > I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to 
> > be OK 
> > except restores which are incredibly slow. How can I debug it to see what's 
> > wrong?
> 
> Start off by telling us what part of the restore process is slow:
> 
>  - building the file tree for selection
>  - the actual restore of files to disk
>  - something else

A few minutes after replying I noticed that you asked this in more detail
back in February and said that building the tree was the slow part.

I have suffered similar problems with MySQL explored in the thread linked
below.  By logging slow queries I managed to identify a single query which
was the primary cause.

http://adsm.org/lists/html/Bacula-users/2010-11/msg00112.html
http://adsm.org/lists/html/Bacula-users/2010-11/msg00187.html

Then a couple of months ago I noticed a new detail in the wiki about
_removing_ bogus indexes.  It turns out that by adding extra indexes you
can slow down MySQL SELECT queries (I think someone said it doesn't always
choose the optimal/correct index).

http://wiki.bacula.org/doku.php?id=faq

http://adsm.org/lists/html/Bacula-users/2011-08/msg4.html
http://adsm.org/lists/html/Bacula-users/2011-08/msg7.html

It would be worth taking a look at that.
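The slow-query logging mentioned above can be set up from the MySQL client;
a sketch, where the threshold is an example and the index name in the final
line is hypothetical:

```sql
-- Sketch: log queries slower than 10 seconds, then inspect the indexes
-- Bacula actually has on the File table.  Values are illustrative.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 10;
-- After reproducing a slow restore, look for redundant indexes:
SHOW INDEX FROM File;
-- Dropping a bogus index (name is hypothetical) would then be:
-- ALTER TABLE File DROP INDEX extra_jobid_idx;
```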

Gavin






Re: [Bacula-users] long-term archival of tapes -- growth of the catalog

2011-09-21 Thread John Drescher
> Do other people do this?  If so, how do you deal with pruning?  Do you just
> let the database grow over time or do you let the data get pruned and use
> bscan or other low-level volume tools to read them if necessary?  Is there
> another approach I'm missing?
>

I let the database grow.  My PostgreSQL DB is around 30GB, but it sits
on its own filesystem on a RAID that has 300GB, so there is lots of
room to grow.

John



[Bacula-users] Yet another 'Invalid command ".messages"' on restore

2011-09-21 Thread Marcio Merlone

Greetings,

I am a happy user of bacula 5.0.1 on an Ubuntu 10.04.3 LTS server.  It
backs up 8 clients without any problem.  All clients have bacula-fd 5.0.1
or above, so there is no old client with a new server - all Linux servers
use 5.0.1 and the lone Windows server uses 5.0.3.  I also disabled
automatic message display in the preferences, as suggested in the list
archives.  Yet when trying to restore something it often - but not
always - gives the error "Invalid command ".messages"".

Can anybody help me find a solution? That problem has annoyed me for
years...

Thanks and best regards.

--
*Marcio Merlone*


Re: [Bacula-users] Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 09:29, Gavin McCullagh wrote:
> On Wed, 21 Sep 2011, Erik P. Olsen wrote:
>> I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems
>> to be OK except restores which are incredibly slow. How can I debug it
>> to see what's wrong?
>
>  - building the file tree for selection

+1 on this.

I am running bacula 5.0.1-1ubuntu1 on an Ubuntu 10.04.3 LTS server.  The
database is about 150GB, and any restore takes 2 or 3 minutes to build
the tree.



--
*Marcio Merlone*


Re: [Bacula-users] Restore performance

2011-09-21 Thread Gavin McCullagh
Hi,

On Wed, 21 Sep 2011, Marcio Merlone wrote:

> On 21-09-2011 09:29, Gavin McCullagh wrote:
> >On Wed, 21 Sep 2011, Erik P. Olsen wrote:
> >>I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to 
> >>be OK
> >>except restores which are incredibly slow. How can I debug it to see what's 
> >>wrong?
> >  - building the file tree for selection
> +1 on this.
> 
> I am running bacula 5.0.1-1ubuntu1 on a Ubuntu 10.04.3 LTS server.
> Database is about 150GB big and any restore takes 2 or 3 minutes to
> build the tree.

A 150GB database.  That's pretty large.  How many clients have you?

Which database are you using (MySQL, Postgresql)?  If you haven't seen it
already, this might be useful:

http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog

Gavin




Re: [Bacula-users] Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 10:29, Gavin McCullagh wrote:
> A 150GB database.  That's pretty large.  How many clients have you?
About a dozen clients - some inactive but still with valid backups - File
Retention = 6 months, Job Retention = 1 year.  Most clients have few or
no changes, but my storage server has 2,825,101 files in a full backup
and almost 100,000 in an incremental.



> Which database are you using (MySQL, Postgresql)?
MySQL.


> If you haven't seen it already, this might be useful:
>
> http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
Yes, I was there when your mail arrived.  It made me think about going to
Postgres.  Is there an upgrade path or helper for going from MySQL to
Postgres?  I am now optimizing the File table and will then run dbcheck.


Best regards,

--
*Marcio Merlone*


Re: [Bacula-users] Yet another 'Invalid command ".messages"' on restore

2011-09-21 Thread Marcello Romani
On 21/09/2011 15:09, Marcio Merlone wrote:
> Greetings,
>
> I am a happy user of bacula 5.0.1 on a Ubuntu 10.04.3 LTS server. It
> backs up 8 clients without any problem. All clients have bacula-fd 5.0.1
> or above, so no old client with new server - all linux servers use 5.0.1
> and the lonely windows server uses 5.0.3. I also disabled automatic
> messages display on preferences as I found on list archives. But yet,
> when trying to restore something it often - but not always - gives the
> error "Invalid command ".messages"".
>
> Can anybody help me find a solution? That problem annoys me for years...
>
> Thanks and best regards.
>
> --
> *Marcio Merlone*
>

Hi,
I suppose you're referring to BAT. If yes, do the following:
Settings => Preferences
Uncheck "Check messages", or leave it checked but set a very long
interval (like 3600 seconds) below.

This is not enough, though.  If you have unread messages you'll still get
the error on restore.  Open an ssh (i.e. command-line) connection to the
bacula server, launch bconsole and type the "messages" command.  This will
clear the unread messages queue in bacula.  Now you can connect with BAT
and restore files without the ".messages" error.

This is the procedure that currently works for me.
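Clearing the queue can also be scripted rather than done interactively; a
sketch, assuming bconsole is in the PATH and uses the common default config
location (adjust for your install):

```shell
# Sketch: flush Bacula's pending console messages non-interactively so
# BAT's restore dialog doesn't trip over them.
echo "messages" | bconsole -c /etc/bacula/bconsole.conf
```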

HTH

-- 
Marcello Romani



Re: [Bacula-users] Restore performance

2011-09-21 Thread Gavin McCullagh
Hi,

On Wed, 21 Sep 2011, Marcio Merlone wrote:

> On 21-09-2011 10:29, Gavin McCullagh wrote:
> >A 150GB database.  That's pretty large.  How many clients have you?
> About a dozen clients - some inactive but still with valid backup -
> File Retention = 6 months, Job Retention = 1 year. Most clients have
> few or no changes, but my storage server has 2,825,101 files on a
> full backup and almost 100.000 on an incremental.

It's definitely the database that is 150GB, yeah?  We have 40 clients, some
of which have 5 million files and similar retention times, but our database
is 11GB.

> >Which database are you using (MySQL, Postgresql)?
> MySQL.

Ditto.

> >If you haven't seen it
> >already, this might be useful:
> >
> >http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
> Yes, I was there when you mail arrived. It made think about going to
> Postgres. Is there any upgrade path or helper to go from mysql to
> postgres? I am now optimizing File table and will then run dbcheck.

Check that you have the right indexes (and no extra ones).  The removal of
a couple of wrong indexes made a massive difference for us.
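A quick way to audit this is to compare the indexes you have against the
ones created by Bacula's own schema script; anything beyond those is a
candidate for removal.  A sketch (table names are the standard catalog
tables):

```sql
-- Sketch: list the indexes on the biggest catalog tables and compare
-- them with what Bacula's make_mysql_tables script creates.  Extra
-- indexes added by hand or by old tuning guides can mislead MySQL's
-- query planner during restores.
SHOW INDEX FROM File;
SHOW INDEX FROM Path;
SHOW INDEX FROM Filename;
```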

Bacula does seem to be better optimised toward Postgresql, but I'm not
convinced you can't get MySQL to work.

Gavin




Re: [Bacula-users] Yet another 'Invalid command ".messages"' on restore

2011-09-21 Thread Marcio Merlone

On 21-09-2011 11:04, Marcello Romani wrote:
> I suppose you're referring to Bat. If yes,
Sorry, yes. :)
> do the following: Settings => Preferences Uncheck "Check messages", or
> leave it checked but insert a very long interval (like 3600 seconds)
Have done ages ago, no luck.
> below. This is not enough, though. If you have unread messages you'll
> still get the error on restore. Open an ssh (i.e. command line)
> connection to the bacula server. Launch the bconsole command and type
> the "messages" command. This will clear the unread messages queue in
> bacula. Now you can connect with BAT restore files without the
> ".messages" error. This is the procedure that currently works for me. HTH
Good tip, but it is a workaround, not a solution.  What is the solution?

--
*Marcio Merlone*


Re: [Bacula-users] Restore performance

2011-09-21 Thread Erik P. Olsen
On 21/09/11 15:56, Marcio Merlone wrote:
> On 21-09-2011 10:29, Gavin McCullagh wrote:
>> A 150GB database.  That's pretty large.  How many clients have you?
> About a dozen clients - some inactive but still with valid backup - File
> Retention = 6 months, Job Retention = 1 year. Most clients have few or no
> changes, but my storage server has 2,825,101 files on a full backup and almost
> 100.000 on an incremental.
>
>> Which database are you using (MySQL, Postgresql)?
> MySQL.
>
>> If you haven't seen it
>> already, this might be useful:
>>
>> http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
> Yes, I was there when you mail arrived. It made think about going to Postgres.
> Is there any upgrade path or helper to go from mysql to postgres? I am now
> optimizing File table and will then run dbcheck.

I forgot to mention that my problem is with building the file tree.  So
converting to postgresql may be the best thing to do.

Will the following scenario work?

1. Backup the catalogue
2. Remove the mysql database
3. Build the postgresql database
4. Restore the catalogue

Or is there more to it than this?

-- 
Erik



Re: [Bacula-users] ld: warning: symbol `plugin_list' has differing sizes:

2011-09-21 Thread Kim Culhan
On Wed, September 21, 2011 7:45 am, Martin Simmons wrote:
>> On Tue, 20 Sep 2011 15:49:32 -0400, Kim Culhan said:
>>
>> On Sunday, September 4, 2011 11:31 am, Ren Sato wrote:
>>
>> > I am receiving the following ld warnings on make:
>>
>> > ld: warning: symbol `plugin_list' has differing sizes:
>> >(file /export/home/philip/bacula-5.0.3/src/lib/.libs/libbac.so
value=0x8
>> >  file /opt/local/mysql/lib/libmysqlclient_r.so value=0x18);
>> >/export/home/philip/bacula-5.0.3/src/lib/.libs/libbac.so
definition taken
>> [similar warnings omitted]
>>
>> > uname -a
>> > SunOS godzilla 5.10 Generic_142910-17 i86pc i386 i86pc
>>
>> Not seeing those warnings here
>
> Which version of mysql are you using?  I think the symbol plugin_list is
new
> in 5.5 (and not present in 5.1).

This was mysql-5.5.15, updated here to mysql-5.5.16

mysql> show plugins;
+-----------------------+--------+--------------------+---------+---------+
| Name                  | Status | Type               | Library | License |
+-----------------------+--------+--------------------+---------+---------+
| binlog                | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| mysql_native_password | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
| mysql_old_password    | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
| MyISAM                | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| CSV                   | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| MRG_MYISAM            | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| MEMORY                | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| PERFORMANCE_SCHEMA    | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| InnoDB                | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
| INNODB_TRX            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_LOCKS          | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_LOCK_WAITS     | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMP_RESET      | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMPMEM         | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| INNODB_CMPMEM_RESET   | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
| partition             | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
+-----------------------+--------+--------------------+---------+---------+
17 rows in set (0.00 sec)


BTW, I'm compiling mysql-5.5.16 with the Oracle Solaris Studio 12.2 compiler.

-kim


Re: [Bacula-users] Restore performance

2011-09-21 Thread Gavin McCullagh
On Wed, 21 Sep 2011, Erik P. Olsen wrote:

> I forgot to mention that my problem is with building the file tree. So 
> converting to postresql may be the best thing to do.
> 
> Will the following scenario work?
> 
> 1. Backup the catalogue
> 2. Remove mysql database
> 3. Build postresql database
> 4. Restore the catalogue
> 
> Or is there more to it than this?

There is more to it than that.  You need to stop bacula, dump the mysql
database, load it into postgresql, install the postgresql version of
bacula-sd and bacula-dir and get that up and running.  If your database is
large this may take some time.

It's not entirely trivial to load a mysql dump into postgresql though.

http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/
http://workaround.org/node/258
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/bacula-25/process-to-convert-mysql-to-postgres-85413/
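A dump straight out of mysqldump won't load into PostgreSQL unmodified; at
minimum, MySQL-isms like backtick quoting and ENGINE clauses have to go.  A
minimal sed sketch of the kind of rewriting involved (the guides linked
above do considerably more, e.g. type mappings and sequences):

```shell
# Illustrative only: strip MySQL backtick quoting and the ENGINE clause
# from one dump line so PostgreSQL will accept it.
echo 'CREATE TABLE `Job` (`JobId` int) ENGINE=InnoDB;' \
  | sed -e 's/`//g' -e 's/ ENGINE=[A-Za-z]*//'
```

This prints `CREATE TABLE Job (JobId int);` -- a real migration should use
one of the scripts from the links above rather than hand-rolled rewrites.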

Gavin

-- 
Gavin McCullagh
Senior System Administrator
IT Services
Griffith College 
South Circular Road
Dublin 8
Ireland
Tel: +353 1 4163365
http://www.gcd.ie
http://www.gcd.ie/brochure.pdf
http://www.gcd.ie/opendays
http://www.gcd.ie/ebrochure





Re: [Bacula-users] Segmentation fault of Storage Daemon when client is not available

2011-09-21 Thread Thomas Lohman
Just to follow up on this in case others have this issue.  I was able to
rebuild bacula with the -g compiler option to get some debugging
information.  The scenario that causes the SD to crash with a SEGFAULT is
not consistently reproducible, which makes me think of some kind of race
condition.  In any event, I was finally able to get a trace in gdb, and
the crash occurs in the same spot that others have reported in the URLs
referenced below - namely in the deflate zlib method being called from
openssl.  The solution, I'm hoping, if you're using TLS, is to turn TLS
off for communication between the director and the storage daemon (to do
this, comment out all of the TLS options in any Storage definitions in
the Director configuration, and in just the Director definition in the
SD configuration).  In addition, I also set up the Director so that if
the SD does die, it takes care of restarting it, and any failed jobs are
re-queued (using the Reschedule On Error options).
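For reference, the change described above looks roughly like this in the
Director configuration; all names, addresses and paths below are
placeholders, and the same commenting-out is done in the Director resource
of bacula-sd.conf:

```conf
# Sketch of a Storage resource in bacula-dir.conf with TLS disabled
# between Director and SD.  Name, address and password are placeholders.
Storage {
  Name = File1
  Address = sd.example.org
  SDPort = 9103
  Password = "storage-password"
  Device = FileStorage
  Media Type = File
  # TLS Enable = yes                                  # commented out
  # TLS Require = yes
  # TLS CA Certificate File = /etc/bacula/ssl/ca.pem
  # TLS Certificate = /etc/bacula/ssl/dir.pem
  # TLS Key = /etc/bacula/ssl/dir.key
}
```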

thanks again,


--tom


> Hi,
>>
>> We've been seeing our Bacula Storage Daemon die with a segmentation
>> fault when a client can't be reached for backup.  We have two servers
>> and have observed this behavior on both of them.  Some searching has
>> revealed that others seem to have (or had) this same issue.
>>
>> https://bugs.launchpad.net/ubuntu/+source/bacula/+bug/622742
>
> That looks similar to some existing bacula bug reports:
>
> http://bugs.bacula.org/view.php?id=1568
> http://bugs.bacula.org/view.php?id=1343
>
>
>> The behavior is not consistent i.e. sometimes it continues on working
>> normally if a client can't be contacted but eventually it'll snag on one
>> and die.  In addition, I've now had one of our storage daemons running
>> in the foreground with debugging set to the max and of course, that one
>> has now gone two days without seg faulting even though there have been
>> half a dozen non-responsive clients.
>>
>> We're currently running 5.0.3 built from source for both clients and
>> servers.  I'm wondering if anyone else here has experienced this problem
>> and/or has any pointers to a work around.  While things can be set up to
>> automatically restart the storage daemon if it dies, the main problem is
>> that any backups Bacula was in the middle of doing end with an error and
>> have to be manually rescheduled/run or just wait until the next time
>> their job comes up to run.



Re: [Bacula-users] Yet another 'Invalid command ".messages"' on restore

2011-09-21 Thread Marcello Romani
On 21/09/2011 16:28, Marcio Merlone wrote:
> On 21-09-2011 11:04, Marcello Romani wrote:
>> I suppose you're referring to Bat. If yes,
> Sorry, yes. :)
>
>> do the following: Settings => Preferences Uncheck "Check messages", or
>> leave it checked but insert a very long interval (like 3600 seconds)
> Have done ages ago, no luck.
>
>> below. This is not enough, though. If you have unread messages you'll
>> still get the error on restore. Open an ssh (i.e. command line)
>> connection to the bacula server. Launch the bconsole command and type
>> the "messages" command. This will clear the unread messages queue in
>> bacula. Now you can connect with BAT restore files without the
>> ".messages" error. This is the procedure that currently works for me. HTH
> Good tip, but it is a workaround, not a solution. What is the solution?
>
> --
> *Marcio Merlone*

I haven't found one yet. This workaround seems good enough for me. I 
guess a definitive solution would come from modifying BAT (this looks 
like a bug in BAT to me).

-- 
Marcello Romani



Re: [Bacula-users] Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  
Erik,

As Gavin pointed out, a 150GB database is huge for only a dozen clients.
Unless you have billions of files on each client, there is no reason your
catalog should be that large.  Are you sure job and file retention are
correctly applied to your catalog?  Also, are you sure the catalog is not
full of orphaned records?

Before migrating to postgres (which is a good choice for big catalogs), I
would look at the catalog to see whether all retention periods are
correctly applied.

Best regards.

On 21/09/2011 16:51, Erik P. Olsen wrote:

> On 21/09/11 15:56, Marcio Merlone wrote:
>> On 21-09-2011 10:29, Gavin McCullagh wrote:
>>> A 150GB database.  That's pretty large.  How many clients have you?
>> About a dozen clients - some inactive but still with valid backup - File
>> Retention = 6 months, Job Retention = 1 year. Most clients have few or no
>> changes, but my storage server has 2,825,101 files on a full backup and
>> almost 100.000 on an incremental.
>>
>>> Which database are you using (MySQL, Postgresql)?
>> MySQL.
>>
>>> If you haven't seen it already, this might be useful:
>>>
>>> http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
>> Yes, I was there when you mail arrived. It made think about going to
>> Postgres. Is there any upgrade path or helper to go from mysql to
>> postgres? I am now optimizing File table and will then run dbcheck.
>
> I forgot to mention that my problem is with building the file tree. So
> converting to postresql may be the best thing to do.
>
> Will the following scenario work?
>
> 1. Backup the catalogue
> 2. Remove mysql database
> 3. Build postresql database
> 4. Restore the catalogue
>
> Or is there more to it than this?




-- 
Alexandre Chapellon
Open source systems and networks engineering.
Follow me on twitter: @alxgomz



Re: [Bacula-users] Yet another 'Invalid command ".messages"' on restore

2011-09-21 Thread Marcio Merlone

Em 21-09-2011 13:03, Marcello Romani escreveu:

Il 21/09/2011 16:28, Marcio Merlone ha scritto:

Em 21-09-2011 11:04, Marcello Romani escreveu:

do the following: Settings =>  Preferences Uncheck "Check messages", or
leave it checked but insert a very long interval (like 3600 seconds)

Have done ages ago, no luck.


below. This is not enough, though. If you have unread messages you'll
still get the error on restore. Open an ssh (i.e. command line)
connection to the bacula server. Launch the bconsole command and type
the "messages" command. This will clear the unread messages queue in
bacula. Now you can connect with BAT restore files without the
".messages" error. This is the procedure that currently works for me. HTH

Good tip, but it is a workaround, not a solution. What is the solution?

I haven't found one yet. This workaround seems good enough for me. I
guess a definitive solution would come from modifying BAT (this looks
like a bug in BAT to me).

Is there a Bacula developer here we could talk to about this?

--
*Marcio Merlone*


Re: [Bacula-users] Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 13:33, Alexandre Chapellon wrote:
> As Gavin pointed out, a 150GB database is huge for only a dozen
> clients. Unless you have billions of files on each client there is no
> reason your catalog is that large.
> Are you sure you correctly applied job and file retention on your
> catalog? Also, are you sure your catalog is not full of orphaned
> records?


> Before migrating to Postgres (which is a good choice for big catalogs),
> I would look at the catalog to see whether all retention periods are
> correctly applied.


I am running dbcheck to see how many rabbits come out of the bushes. The
File table is only 6.6GB and Log is 105GB. What's that Log table for? It
only has blobs...


Also, in the near future I will move bacula-dir to another server, which
will be a good opportunity to start with a blank catalog and go with
Postgres, without any migration, just keeping the old one sitting there
until the end of the retention period.


--
*Marcio Merlone*


Re: [Bacula-users] Full backups and off-sites

2011-09-21 Thread Rodrigo Renie Braga
2011/9/16 Tilman Schmidt 

> If I read the manual correctly, you'll need to have two tape drives
> connected to the same machine if you want to create an off-site copy
> that way. Is there a viable solution for off-site backups with only one
> tape drive?
>

Yes, you're right: you need two tape drives to create a Copy job that
way, which is my case... I forgot about that...

With only one tape drive, the only solution is to back up to disk first
and Copy to tape later, but I guess that's not what you want...


Re: [Bacula-users] Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 18:56, Marcio Merlone wrote:
> On 21-09-2011 13:33, Alexandre Chapellon wrote:
>> As Gavin pointed out, a 150GB database is huge for only a dozen
>> clients. Unless you have billions of files on each client there is no
>> reason your catalog is that large.
>> Are you sure you correctly applied job and file retention on your
>> catalog? Also, are you sure your catalog is not full of orphaned
>> records?
>>
>> Before migrating to Postgres (which is a good choice for big
>> catalogs), I would look at the catalog to see whether all retention
>> periods are correctly applied.
>
> I am running dbcheck to see how many rabbits come out of the bushes.
> The File table is only 6.6GB and Log is 105GB. What's that Log table
> for? It only has blobs...

It is supposed to contain the Bacula job reports, just like the Bacula
log file.
I'm not sure whether having such a big amount of data in another table
hurts; maybe it does if you use InnoDB.
If you use MyISAM, my guess is it should not hurt... but note I am not
a DBA!

However, I'd like to know whether this table can be safely purged, as
one day or another it will grow to an unacceptable size (even more so
if I have the same info in the log file).

> Also, in the near future I will move bacula-dir to another server,
> which will be a good opportunity to start with a blank catalog and go
> with Postgres, without any migration, just keeping the old one sitting
> there until the end of the retention period.
>
> --
> *Marcio Merlone*

-- 
Alexandre Chapellon
Ingénierie des systèmes open sources et réseaux.
Follow me on twitter: @alxgomz


[Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 14:45, Alexandre Chapellon wrote:
> On 21/09/2011 18:56, Marcio Merlone wrote:
>> On 21-09-2011 13:33, Alexandre Chapellon wrote:
>>> As Gavin pointed out, a 150GB database is huge for only a dozen
>>> clients. Unless you have billions of files on each client there is
>>> no reason your catalog is that large.
>>> Are you sure you correctly applied job and file retention on your
>>> catalog? Also, are you sure your catalog is not full of orphaned
>>> records?
>>>
>>> Before migrating to Postgres (which is a good choice for big
>>> catalogs), I would look at the catalog to see whether all retention
>>> periods are correctly applied.
>>
>> I am running dbcheck to see how many rabbits come out of the bushes.
>> The File table is only 6.6GB and Log is 105GB. What's that Log table
>> for? It only has blobs...
>
> It is supposed to contain the Bacula job reports, just like the Bacula
> log file.
> I'm not sure whether having such a big amount of data in another table
> hurts; maybe it does if you use InnoDB.
> If you use MyISAM, my guess is it should not hurt... but note I am not
> a DBA!

Me neither, and it is InnoDB in my case.

> However, I'd like to know whether this table can be safely purged, as
> one day or another it will grow to an unacceptable size (even more so
> if I have the same info in the log file).

+1
Can it?


--
*Marcio Merlone*


Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 19:53, Marcio Merlone wrote:
> On 21-09-2011 14:45, Alexandre Chapellon wrote:
>> [...]
>> If you use MyISAM, my guess is it should not hurt... but note I am
>> not a DBA!
>
> Me neither, and it is InnoDB in my case.

I'd say it can hurt with InnoDB because, unlike MyISAM, data are by
default stored in one single (huge) file.

>> However, I'd like to know whether this table can be safely purged, as
>> one day or another it will grow to an unacceptable size (even more so
>> if I have the same info in the log file).
>
> +1
> Can it?
>
> --
> *Marcio Merlone*

-- 
Alexandre Chapellon
Ingénierie des systèmes open sources et réseaux.
Follow me on twitter: @alxgomz


Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 20:03, Alexandre Chapellon wrote:
> On 21/09/2011 19:53, Marcio Merlone wrote:
>> [...]
>> Me neither, and it is InnoDB in my case.
>
> I'd say it can hurt with InnoDB because, unlike MyISAM, data are by
> default stored in one single (huge) file.

Maybe you could try this:
http://dev.mysql.com/doc/refman/5.0/en/innodb-multiple-tablespaces.html

>> However, I'd like to know whether this table can be safely purged, as
>> one day or another it will grow to an unacceptable size (even more so
>> if I have the same info in the log file).
>>
>> +1
>> Can it?

-- 
Alexandre Chapellon
Ingénierie des systèmes open sources et réseaux.
Follow me on twitter: @alxgomz
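For reference, the MySQL option behind the innodb-multiple-tablespaces link above is a single my.cnf setting (a sketch: it only affects tables created after it is enabled, so an existing Log table would still need a rebuild, e.g. ALTER TABLE Log ENGINE=InnoDB, to move into its own file):

```
[mysqld]
# Store each InnoDB table in its own .ibd file instead of the shared
# ibdata1 tablespace, so dropping or truncating a table reclaims disk.
innodb_file_per_table
```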


Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 15:24, Alexandre Chapellon wrote:
> Maybe you could try this:
> http://dev.mysql.com/doc/refman/5.0/en/innodb-multiple-tablespaces.html
>
>> However, I'd like to know if this table can be safely purged? As one
>> day or another it will grow to an unacceptable size... (even more if
>> I have the same info in logfile).
>>
>> +1
>> Can it?

Thanks, but nah! I'd rather purge this. I'll set up another server anyway...

Regards.

--
*Marcio Merlone*


[Bacula-users] bacula-dir starting on ipv6 ONLY, why?

2011-09-21 Thread Troy Kocher
Listers, 

I've installed a new bacula server recently and am working to get it into 
production.  It's FreeBSD 8.2-RELEASE (GENERIC), with a default build of 
Bacula 5.0.3 server & client from the ports tree.  The previous server was 
5.0.0_1 on FreeBSD 7.4. I brought over my pgsql database, my configs, and my 
data to the new server.  It appears I've done something to cause the director 
to run IPv6-only.  Is this the default in 5.0.3? If so, how do I correct it?  
Have I made an error in the dir config?  Please see below for my details.

foobar# head /usr/local/etc/bacula-dir.conf

Director {# define myself
  Name = foobar-dir
  Dirport = 9101# where we listen for UA connections
  QueryFile = "/usr/local/share/bacula/query.sql"
  WorkingDirectory = "/data/working"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 20
  Password = "mypassword" # Console password
  Messages = Daemon
 }
foobar# bacula start
Starting the Bacula Storage daemon
Starting the Bacula File daemon
Starting the Bacula Director daemon
foobar# bconsole
Connecting to Director foobar:9101
^C
foobar# sockstat
USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS  
bacula   bacula-dir 36947 3  tcp6   ::1:26407 ::1:5432
bacula   bacula-dir 36947 4  tcp6   ::1:65226 ::1:5432
bacula   bacula-dir 36947 5  tcp6   ::1:31295 ::1:5432
root bacula-fd  36940 3  tcp4   *:9102*:*
bacula   bacula-sd  36932 3  tcp4   *:9103*:*


Any assistance would really be appreciated.

Thanks
Troy Kocher





Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Chris Shelton
2011/9/21 Marcio Merlone 

> [...]
>
> However, I'd like to know if this table can be safely purged? As one day
> or another it will grow to an unacceptable size... (even more if I have
> the same info in logfile).
>
> +1
> Can it?

From this page:
http://www.bacula.org/5.0.x-manuals/en/main/main/Messages_Resource.html

you must have an entry named catalog in the Messages section of your
director config file.  The description of that entry is:

*catalog* Send the message to the Catalog database. The message will be
written to the table named *Log* and a timestamp field will also be added.
This permits Job Reports and other messages to be recorded in the Catalog so
that they can be accessed by reporting software. Bacula will prune the Log
records associated with a Job when the Job records are pruned. Otherwise,
Bacula never uses these records internally, so this destination is only used
for special-purpose programs (e.g. *bweb*).

If you are not using bweb or any other special front end for bacula beyond
bconsole, you can very likely empty the Log table.

On my bacula installation, I just send messages to mail commands and to an
append entry that also saves to a filesystem log file.  My Log table exists,
but has no records at all.
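The check-and-purge Chris describes is plain SQL; here it is demonstrated against a tiny sqlite3 stand-in for the catalog (only the Job.JobId and Log.JobId names are taken from the Bacula 5.x schema, everything else is simplified — and do back up the catalog before purging for real):

```python
import sqlite3

# Toy stand-in for the Bacula catalog: the real thing is MySQL/PostgreSQL,
# but the purge logic is identical.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Job (JobId INTEGER PRIMARY KEY);
    CREATE TABLE Log (LogId INTEGER PRIMARY KEY, JobId INTEGER, LogText BLOB);
    INSERT INTO Job (JobId) VALUES (1), (2);
    INSERT INTO Log (JobId, LogText) VALUES
        (1, 'report for job 1'), (2, 'report for job 2'),
        (99, 'orphan - job already pruned'), (100, 'orphan');
""")

# Count Log rows whose Job record has already been pruned...
orphans = db.execute(
    "SELECT COUNT(*) FROM Log WHERE JobId NOT IN (SELECT JobId FROM Job)"
).fetchone()[0]
print(orphans)  # 2

# ...and delete them.
db.execute("DELETE FROM Log WHERE JobId NOT IN (SELECT JobId FROM Job)")
db.commit()
remaining = db.execute("SELECT COUNT(*) FROM Log").fetchone()[0]
print(remaining)  # 2 rows still belong to live Jobs
```

The same SELECT and DELETE statements run unchanged in the mysql or psql client against a real catalog.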

chris


Re: [Bacula-users] bacula-dir starting on ipv6 ONLY, why?

2011-09-21 Thread Troy Kocher
Listers, 

Some additional details: after about 30s the bacula-dir IPv6 sockets seem to 
die, and the only things left listening are my bacula-fd & bacula-sd.

Thanks again...


On 21,Sep 2011, at 2:22 PM, Troy Kocher wrote:

> Listers,
>
> It appears I've done something to cause the director to run IPv6-only.
> Is this the default in 5.0.3? If so, how do I correct it?
> [...]




Re: [Bacula-users] bacula-dir starting on ipv6 ONLY, why?

2011-09-21 Thread Dan Langille

On Sep 21, 2011, at 3:22 PM, Troy Kocher wrote:

> Listers, 
> 
> I've installed a new bacula server recently and am working to get it in 
> production.  It's FreeBSD 8.2-RELEASE, (GENERIC), with a default build of 
> Bacula 5.0.3 sever & client, from the ports tree.  The previous server was 
> 5.0.0_1 on Freebsd 7.4. I brought over my pgsql database, my configs, and my 
> data to the new server.  It appears I've done something to cause the director 
> to run in ipv6 only.  Is this by default in 5.0.3? If so how do I correct it? 
>  Have I made an error in the dir config?  Please see below for my details.
> 
> foobar# head /usr/local/etc/bacula-dir.conf
> 
> Director {# define myself
>  Name = foobar-dir
>  Dirport = 9101# where we listen for UA connections
>  QueryFile = "/usr/local/share/bacula/query.sql"
>  WorkingDirectory = "/data/working"
>  PidDirectory = "/var/run"
>  Maximum Concurrent Jobs = 20
>  Password = "mypassword" # Console password
>  Messages = Daemon
> }
> foobar# bacula start
> Starting the Bacula Storage daemon
> Starting the Bacula File daemon
> Starting the Bacula Director daemon
> foobar# bconsole
> Connecting to Director foobar:9101
> ^C
> foobar# sockstat
> USER COMMANDPID   FD PROTO  LOCAL ADDRESS FOREIGN ADDRESS 
>  
> bacula   bacula-dir 36947 3  tcp6   ::1:26407 ::1:5432
> bacula   bacula-dir 36947 4  tcp6   ::1:65226 ::1:5432
> bacula   bacula-dir 36947 5  tcp6   ::1:31295 ::1:5432

This is a connection from bacula-dir to your PostgreSQL database.  I don't see 
bacula-dir listening on 9101 yet…

You also said:

> Some additional details here, after about 30s the bacula-dir ipv6 seems to 
> die and the only thing left listening is my bacula-fd & bacula-sd.  

That makes me suspect the database connection.  I would start bacula-dir in the 
foreground with -d SOMEVALUE and see what messages you get.
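If the debug output does show the director binding only to IPv6, one way to pin it to IPv4 is the DirAddress directive in the Director resource (a sketch: the directive is standard, the address shown is an example):

```
Director {
  Name = foobar-dir
  Dirport = 9101
  DirAddress = 0.0.0.0     # listen on the IPv4 wildcard instead of ::
  # ... other directives unchanged ...
}
```

Running bacula-dir with -f keeps it in the foreground (e.g. together with -d 100 and -c pointing at your config), so any bind or database errors land on the terminal.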

> root bacula-fd  36940 3  tcp4   *:9102*:*
> bacula   bacula-sd  36932 3  tcp4   *:9103*:*
> 
> 
> Any assistance would really be appreciated.
> 
> Thanks
> Troy Kocher
> 
> 
> 

-- 
Dan Langille - http://langille.org




Re: [Bacula-users] long-term archival of tapes -- growth of the catalog

2011-09-21 Thread Dan Langille

On Sep 21, 2011, at 6:04 AM, Gavin McCullagh wrote:

> Hi,
> 
> we've been happily using Bacula now for a few years, with a couple of big
> disk arrays as the storage devices/media.  We use something along the lines
> of what the manual documents for fully automated disk-based backups.  This
> has worked well and is really quick and convenient for doing restores from.
> It's expensive in hard disks though and none of the data is offline so a
> particularly nasty online incident might take out the backups as well as
> the live data.
> 
> Our plan now is to add a Dell LTO5 drive in and start using a COPY job once
> per month.  Almost everything gets a monthly Full backup to disk, so we'll
> then copy those jobs to tape and move them off-site.  A couple of tapes
> will be retired from the pool each year and will be stored more or less
> indefinitely.
> 
> Do other people do this?

I do Copy to tape.  Be aware: your tape drive must be on the same SD as your 
disk storage.  Copy and migrate jobs can involve only one SD; you cannot 
copy/migrate from one SD to another.

>  If so, how do you deal with pruning?  Do you just
> let the database grow over time or do you let the data get pruned and use
> bscan or other low-level volume tools to read them if necessary?  Is there
> another approach I'm missing?

I keep all retention periods the same.  If anything is pruned, the entire 
job/file/volume is pruned.

I think database storage is cheap compared to the hassles of not having the 
data in the database.
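For reference, a monthly disk-to-tape Copy job of the kind Gavin describes might be sketched like this (the directive names are standard Bacula 5.x; the pool, storage and schedule names are hypothetical):

```
# bacula-dir.conf (sketch)
Job {
  Name = "CopyFullsToTape"
  Type = Copy
  Selection Type = PoolUncopiedJobs    # copy every job not yet copied
  Pool = FullsDisk                     # source (disk) pool
  Schedule = "MonthlyCopy"
  Messages = Standard
}

Pool {
  Name = FullsDisk
  Pool Type = Backup
  Storage = DiskStorage
  Next Pool = TapeOffsite              # Copy jobs write into this pool
}
```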

-- 
Dan Langille - http://langille.org




Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 21:25, Chris Shelton wrote:
> 2011/9/21 Marcio Merlone
>> [...]
>> However, I'd like to know if this table can be safely purged? As one
>> day or another it will grow to an unacceptable size... (even more if
>> I have the same info in logfile).
>>
>> +1
>> Can it?
>
> From this page:
> http://www.bacula.org/5.0.x-manuals/en/main/main/Messages_Resource.html
>
> you must have an entry named catalog in the Messages section of your
> director config file.  The description of that entry is:
>
> *catalog* Send the message to the Catalog database. The message will
> be written to the table named *Log* and a timestamp field will also be
> added. This permits Job Reports and other messages to be recorded in
> the Catalog so that they can be accessed by reporting software. Bacula
> will prune the Log records associated with a Job when the Job records
> are pruned. Otherwise, Bacula never uses these records internally, so
> this destination is only used for special-purpose programs (e.g. bweb).

Great information Chris, thank you!
However, Marcio's setup seems to show that the Log table is not purged as
expected.
I have checked on my setup too, and while I have almost 2000 rows in my
Log table, I have only 150 entries in the Job table:

select count(*) from Log;
+----------+
| count(*) |
+----------+
|     1886 |
+----------+

select count(*) from Job;
+----------+
| count(*) |
+----------+
|      151 |
+----------+

Which tends to prove the Log table is not pruned with its associated Jobs.
Is it a bug?

Regards.

> If you are not using bweb or any other special front end for bacula
> beyond bconsole, you can very likely empty the Log table.
>
> On my bacula installation, I just send messages to mail commands and
> to an append entry that also saves to a filesystem log file.  My Log
> table exists, but has no records at all.
>
> chris

-- 
Alexandre Chapellon
Ingénierie des systèmes open sources et réseaux.
Follow me on twitter: @alxgomz