Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-22 Thread Alexandre Chapellon

  
  


On 21/09/2011 21:25, Chris Shelton wrote:

  2011/9/21 Marcio Merlone marcio.merl...@a1.ind.br

On 21-09-2011 14:45, Alexandre Chapellon wrote:
On 21/09/2011 18:56, Marcio Merlone wrote:
On 21-09-2011 13:33, Alexandre Chapellon wrote:
As Gavin pointed out, a 150GB database is huuuge for
only a dozen clients.
Unless you have billions of files on each client, there is no
reason your catalog should be that large.
Are you sure you correctly applied job and file
retention on your catalog? Also, are you sure your
catalog is not full of orphaned records?

Before migrating to postgres (which is a good choice
for big catalogs), I would look at the catalog to
see if all retention periods are correctly applied.

I am running dbcheck to see how many rabbits come out of
the bushes. The File table is only 6.6GB and Log is 105GB.
What's that Log table for? It only has blobs...
  
It is supposed to contain Bacula reports... just like
the bacula log file.
I'm not sure having such a big amount of data in another
table hurts; maybe it does if you use InnoDB.
If you use MyISAM... my guess is it should not hurt...
but note I am not a DBA!

Me neither, and it is InnoDB in my case.

However, I'd like to know if this
table can be safely purged. One day or another it will
grow to an unacceptable size... (even more so if I have the
same info in the log file).

+1
Can it?
  

  
  
  From this page:
  http://www.bacula.org/5.0.x-manuals/en/main/main/Messages_Resource.html
  
  you must have an entry in your Messages section of your director
  config file named catalog. The description of that entry is:
  
catalog
  Send the message to the Catalog database. The message will be written
  to the table named Log and a timestamp field will also be added. This
  permits Job Reports and other messages to be recorded in the Catalog
  so that they can be accessed by reporting software. Bacula will prune
  the Log records associated with a Job when the Job records are pruned.
  Otherwise, Bacula never uses these records internally, so this
  destination is only used for special purpose programs (e.g. bweb).

  

Great information Chris, thank you!
However, Marcio's setup seems to show that the Log table is not purged
as expected.
I have checked on my setup too, and while I have almost 2000 rows in
my Log table, I have only 150 entries in the Job table.

select count(*) from Log;
+----------+
| count(*) |
+----------+
|     1886 |
+----------+

select count(*) from Job;
+----------+
| count(*) |
+----------+
|      151 |
+----------+


Which tends to prove the Log table is not pruned with its associated Jobs.
Is it a bug?

Regards.

If you are not using bweb or any other special front ends for bacula
beyond bconsole, you can very likely empty the Log table.
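A minimal sketch of how such a purge could be done directly in MySQL
(assuming the standard Bacula catalog schema; take a catalog dump first):

-- remove all job log text from the catalog
delete from Log;
-- or, to also reclaim the table's disk space in one step:
truncate table Log;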
  
  On my bacula installation, I just send messages to mail commands
  and to the append entry to also save to a filesystem log file.
  My Log table exists, but has no records at all.
  
  chris
  
  
  
  



--
Alexandre Chapellon
Open source systems and network engineering.
Follow me on twitter: @alxgomz


Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-22 Thread Eric Bollengier
Hello,

 Great information Chris, thank you!
 However, Marcio's setup seems to show that the Log table is not purged as
 expected.
 I have checked on my setup too, and while I have almost 2000 rows in my
 Log table, I have only 150 entries in the Job table.

Having 10 lines of log per job is not unusual... (each row contains one 
line of log).

Having 150G for 2000 lines of log *is* unusual. It should be 200kB.
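For anyone who wants to check where the space actually goes, a quick
sketch against MySQL's information_schema (assuming the catalog database
is named "bacula"):

select table_name,
       round(data_length / 1024 / 1024)  as data_mb,
       round(index_length / 1024 / 1024) as index_mb
from information_schema.tables
where table_schema = 'bacula';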

 select count(*) from Log;
 +----------+
 | count(*) |
 +----------+
 |     1886 |
 +----------+

 select count(*) from Job;
 +----------+
 | count(*) |
 +----------+
 |      151 |
 +----------+


 Which tends to prove the Log table is not pruned with its associated Jobs.
 Is it a bug?

It doesn't prove anything, except that something is incorrectly
configured on the MySQL side or on the system side.
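One way to test the pruning theory directly (a sketch against the
standard catalog schema) is to count Log rows whose Job record is
already gone:

select count(*) from Log
where JobId not in (select JobId from Job);

If that returns 0, every remaining Log row still belongs to a live Job,
and pruning is working as documented.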

Bye

-- 
Need professional help and support for Bacula?
Visit http://www.baculasystems.com



Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-22 Thread Jeremy Maes

On 22/09/2011 8:38, Alexandre Chapellon wrote:

On 21/09/2011 21:25, Chris Shelton wrote:
2011/9/21 Marcio Merlone marcio.merl...@a1.ind.br 


On 21-09-2011 14:45, Alexandre Chapellon wrote:

On 21/09/2011 18:56, Marcio Merlone wrote:

On 21-09-2011 13:33, Alexandre Chapellon wrote:

As Gavin pointed out, a 150GB database is huuuge for
only a dozen clients.
Unless you have billions of files on each client, there is no
reason your catalog should be that large.
Are you sure you correctly applied job and file retention on
your catalog? Also, are you sure your catalog is not full of
orphaned records?

Before migrating to postgres (which is a good choice for big
catalogs), I would look at the catalog to see if all retention
periods are correctly applied.

I am running dbcheck to see how many rabbits come out of the
bushes. The File table is only 6.6GB and Log is 105GB. What's that
Log table for? It only has blobs...

It is supposed to contain Bacula reports... just like the
bacula log file.
I'm not sure having such a big amount of data in another table
hurts; maybe it does if you use InnoDB.
If you use MyISAM... my guess is it should not hurt... but note
I am not a DBA!

Me neither, and it is InnoDB in my case.


However, I'd like to know if this table can be safely purged. One
day or another it will grow to an unacceptable size... (even more
so if I have the same info in the log file).

+1
Can it?


From this page:
http://www.bacula.org/5.0.x-manuals/en/main/main/Messages_Resource.html

you must have an entry in your Messages section of your director 
config file named catalog.  The description of that entry is:


*catalog*
Send the message to the Catalog database. The message will be
written to the table named *Log* and a timestamp field will also
be added. This permits Job Reports and other messages to be
recorded in the Catalog so that they can be accessed by reporting
software. Bacula will prune the Log records associated with a Job
when the Job records are pruned. Otherwise, Bacula never uses
these records internally, so this destination is only used for
special purpose programs (e.g. *bweb*). 


Great information Chris, thank you!
However, Marcio's setup seems to show that the Log table is not purged
as expected.
I have checked on my setup too, and while I have almost 2000 rows in
my Log table, I have only 150 entries in the Job table.


select count(*) from Log;
+----------+
| count(*) |
+----------+
|     1886 |
+----------+

select count(*) from Job;
+----------+
| count(*) |
+----------+
|      151 |
+----------+


Which tends to prove the Log table is not pruned with its associated Jobs.
Is it a bug?

Regards.


The Log table contains an entry for every message Bacula generates. So
it has a separate line for each "Wrote label to prelabeled volume",
each "Job started", each "Pruning Jobs and Files", and so on.


In other words: every job has more than one entry in the Log table.

And for what it's worth: I don't have a huge retention defined for my
clients, but I think it's very strange that one can have a Log table
that is multiple times the size of the File table. When I check the
setups where bacula makes backups of a few clients, totalling about 1M
files with a 1 month retention (average), I get a 6MB Log table and a
1.4GB File table.


Maybe you're sending way more messages to the Log table than you'd want to?
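If so, a sketch like this (assuming the standard Log schema with a
LogText column) would show which jobs are the heavy writers:

select JobId,
       count(*)                           as log_lines,
       round(sum(length(LogText)) / 1024) as total_kb
from Log
group by JobId
order by total_kb desc
limit 10;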

If you are not using bweb or any other special front ends for bacula 
beyond bconsole, you can very likely empty the Log table.


On my bacula installation, I just send messages to mail commands and 
to the append entry to also save to a filesystem log file.   My Log 
table exists, but has no records at all.


chris




Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-22 Thread Alexandre Chapellon

  
  


On 22/09/2011 09:11, Eric Bollengier wrote:

  Hello,


  
Great information Chris, thank you!
However, Marcio's setup seems to show that the Log table is not purged as
expected.
I have checked on my setup too, and while I have almost 2000 rows in my
Log table, I have only 150 entries in the Job table.

  
  
Having 10 lines of log per job is not unusual... (each row contains one 
line of log).


That said, it seems more logical.
I double-checked using the following SQL query and found sensible
values:

select JobId,count(*) from Log group by JobId;
...
152 rows in set (0.04 sec)



  

Having 150G for 2000 lines of log *is* unusual. It should be 200kB.


It's not me who has the 150GB Log table. I was just wondering why I
had ten times more rows in the Log table than in the Job table.
I have my answer now. Thanks.

  
select count(*) from Log;
+----------+
| count(*) |
+----------+
|     1886 |
+----------+

select count(*) from Job;
+----------+
| count(*) |
+----------+
|      151 |
+----------+


Which tends to prove the Log table is not pruned with its associated Jobs.
Is it a bug?

  
  
It doesn't prove anything, except that something is incorrectly
configured on the MySQL side or on the system side.

Bye




--
Alexandre Chapellon
Open source systems and network engineering.
Follow me on twitter: @alxgomz



Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 19:53, Marcio Merlone wrote:

  
On 21-09-2011 14:45, Alexandre Chapellon wrote:
  

On 21/09/2011 18:56, Marcio Merlone wrote:

  
On 21-09-2011 13:33, Alexandre Chapellon wrote:
  

As Gavin pointed out, a 150GB database is huuuge for only a
dozen clients.
Unless you have billions of files on each client, there is
no reason your catalog should be that large.
Are you sure you correctly applied job and file retention
on your catalog? Also, are you sure your catalog is not full
of orphaned records?
  
Before migrating to postgres (which is a good choice for
big catalogs), I would look at the catalog to see if all
retention periods are correctly applied.
  
I am running dbcheck to see how many rabbits come out of the
bushes. The File table is only 6.6GB and Log is 105GB. What's that
Log table for? It only has blobs...

It is supposed to contain Bacula reports... just like the
bacula log file.
I'm not sure having such a big amount of data in another table
hurts; maybe it does if you use InnoDB.
If you use MyISAM... my guess is it should not hurt... but note
I am not a DBA!
  
Me neither, and it is InnoDB in my case.

I'd say it can hurt InnoDB because, unlike MyISAM, data is, by
default, stored in one single (huge) file.

 
  
However, I'd like to know if this table can be safely purged. One
day or another it will grow to an unacceptable size... (even more
so if I have the same info in the log file).
  
  +1
  Can it?
  
  
--
Marcio Merlone

  
  
  
  



--
Alexandre Chapellon
Open source systems and network engineering.
Follow me on twitter: @alxgomz



Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Alexandre Chapellon

  
  


On 21/09/2011 20:03, Alexandre Chapellon wrote:

  
  
  
On 21/09/2011 19:53, Marcio Merlone wrote:
  

On 21-09-2011 14:45, Alexandre Chapellon wrote:

  
On 21/09/2011 18:56, Marcio Merlone wrote:
  

On 21-09-2011 13:33, Alexandre Chapellon wrote:

  
As Gavin pointed out, a 150GB database is
huuuge for only a dozen clients.
Unless you have billions of files on each client, there
is no reason your catalog should be that large.
Are you sure you correctly applied job and file
retention on your catalog? Also, are you sure your catalog
is not full of orphaned records?

Before migrating to postgres (which is a good choice for
big catalogs), I would look at the catalog to see if all
retention periods are correctly applied.

I am running dbcheck to see how many rabbits come out of the
bushes. The File table is only 6.6GB and Log is 105GB. What's
that Log table for? It only has blobs...
  
It is supposed to contain Bacula reports... just like the
bacula log file.
I'm not sure having such a big amount of data in another
table hurts; maybe it does if you use InnoDB.
If you use MyISAM... my guess is it should not hurt... but
note I am not a DBA!

Me neither, and it is InnoDB in my case.
  
I'd say it can hurt InnoDB because, unlike MyISAM, data is, by
default, stored in one single (huge) file.
  

Maybe you could try this:
http://dev.mysql.com/doc/refman/5.0/en/innodb-multiple-tablespaces.html
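A sketch of the relevant my.cnf setting (note it only affects tables
created after it is enabled, so existing tables would have to be
rebuilt, e.g. with ALTER TABLE ... ENGINE=InnoDB, to move into their
own tablespace files):

[mysqld]
# store each InnoDB table in its own .ibd file instead of ibdata1
innodb_file_per_table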

   

However, I'd like to know if this table can be safely purged.
One day or another it will grow to an unacceptable size...
(even more so if I have the same info in the log file).

+1
Can it?


--
Marcio Merlone
  





  
  
--
Alexandre Chapellon
Open source systems and network engineering.
Follow me on twitter: @alxgomz

  
  
  
  



--
Alexandre Chapellon
Open source systems and network engineering.
Follow me on twitter: @alxgomz
  

  



Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Marcio Merlone

On 21-09-2011 15:24, Alexandre Chapellon wrote:

Maybe you could try this:
http://dev.mysql.com/doc/refman/5.0/en/innodb-multiple-tablespaces.html
However, I'd like to know if this table can be safely purged? One
day or another it will grow to an unacceptable size... (even more
so if I have the same info in the log file).

+1
Can it?

Thanks, but nah! I'd rather purge this. I'll set up another server anyway...

Regards.

--
*Marcio Merlone*


Re: [Bacula-users] Purge Log table - was: Re: Restore performance

2011-09-21 Thread Chris Shelton
2011/9/21 Marcio Merlone marcio.merl...@a1.ind.br

On 21-09-2011 14:45, Alexandre Chapellon wrote:

On 21/09/2011 18:56, Marcio Merlone wrote:

On 21-09-2011 13:33, Alexandre Chapellon wrote:

 As Gavin pointed out, a 150GB database is huuuge for only a dozen
 clients.
 Unless you have billions of files on each client, there is no reason your
 catalog should be that large.
 Are you sure you correctly applied job and file retention on your catalog?
 Also, are you sure your catalog is not full of orphaned records?

 Before migrating to postgres (which is a good choice for big catalogs), I
 would look at the catalog to see if all retention periods are correctly
 applied.

 I am running dbcheck to see how many rabbits come out of the bushes. The
 File table is only 6.6GB and Log is 105GB. What's that Log table for? It
 only has blobs...

 It is supposed to contain Bacula reports... just like the bacula log
 file.
 I'm not sure having such a big amount of data in another table hurts;
 maybe it does if you use InnoDB.
 If you use MyISAM... my guess is it should not hurt... but note I am not
 a DBA!

 Me neither, and it is InnoDB in my case.

 However, I'd like to know if this table can be safely purged? One day
 or another it will grow to an unacceptable size... (even more so if I
 have the same info in the log file).

 +1
 Can it?


From this page:
http://www.bacula.org/5.0.x-manuals/en/main/main/Messages_Resource.html

you must have an entry in your Messages section of your director config file
named catalog.  The description of that entry is:
*catalog* Send the message to the Catalog database. The message will be
written to the table named *Log* and a timestamp field will also be added.
This permits Job Reports and other messages to be recorded in the Catalog
so that they can be accessed by reporting software. Bacula will prune the
Log records associated with a Job when the Job records are pruned.
Otherwise, Bacula never uses these records internally, so this destination
is only used for special purpose programs (e.g. *bweb*).

If you are not using bweb or any other special front ends for bacula beyond
bconsole, you can very likely empty the Log table.

On my bacula installation, I just send messages to mail commands and to the
append entry to also save to a filesystem log file.   My Log table exists,
but has no records at all.

chris