[Bacula-users] Performance?

2007-07-23 Thread Doytchin Spiridonov
Hello,

does anyone have any stats on Bacula performance when writing
volumes to hard disk, for the following three cases:
1. concurrent jobs, writing at the same time to one and the same volume
2. concurrent jobs, writing at the same time to separate volumes (one
per job)
3. jobs run one at a time.

Which would be fastest (assuming that the disk's data transfer rate is
lower than the network's, using 1 Gbps)?




[Bacula-users] performance

2007-09-07 Thread Gabriele Bulfon
Hello,
I need to speed up the backup of a machine with a lot of very small files
(600,000 files, about 90 GB).
I have verified that the problem is the MySQL database, which slows down
the backup because it has
to write 600,000 records to the catalog.
I'm thinking of 2 options:
1- forcing transactions at the start and end of every job (or at the end of the
last job, maybe).
2- upgrading version 1.38 -> 2.x (latest).
Questions are:
1- how can I force bacula to use these transactions?
2- should I expect performance improvements by upgrading?
Thanx for any help
Gabriele.
Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com


[Bacula-users] Performance

2011-07-25 Thread Rickifer Barros
Hello Guys...

This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
minutes, at a rate of 8.27 MB/s.

My Bacula Server is installed on an IBM server connected to an LTO-4 tape
drive (120 MB/s) via a SAS connection (3 Gb/s).

I'm using Encryption and Compression GZIP6.

I think that this rate (8.27 MB/s) is far too slow for such a configuration.

I'd like to know what rate you are getting using Encryption and
Compression, and what I should do to increase Bacula's performance.

Thanks

Rickifer Barros.


Re: [Bacula-users] Performance?

2007-07-23 Thread John Drescher
> does anyone have any stats on Bacula performance when writing
> volumes to hard disk, for the following three cases:
> 1. concurrent jobs, writing at the same time to one and the same volume
> 2. concurrent jobs, writing at the same time to separate volumes (one
> per job)
> 3. jobs run one at a time.
>
> Which would be fastest (assuming that the disk's data transfer rate is
> lower than the network's, using 1 Gbps)?
>
Although someone can come up with benchmarks, I do not believe there
is a simple answer here. The data rate you achieve during backups is
highly dependent on many factors that are specific to your setup.

1) The job type. Full backups should get a much higher data rate than
incrementals or differentials, since for those the client spends a lot
of time walking the filesystem searching for what has changed.

2) The database load. If you have millions of files in your backup, the
database will be bogged down, reducing your throughput.

3) The number of clients you are backing up at a time. Since most
single clients will have a very hard time generating data at the rate
it can be written, it is often better to run more than one client at a
time (see the sketch below).

4) Software compression. If this is used, your backup rate will
decrease significantly on each client, so you should run multiple
clients at once to increase throughput.

5) Destination disk performance. If you are planning on archiving at
rates of > 50 MB/s, you will need a RAID on the receiving end, or
multiple independent disks with each volume on a different disk.

6) Jobs on a single client. Never run two jobs on a single client
unless they are using different drives.

7) Spooling. It can help and it can hurt.

8) Signatures. Using MD5 or SHA does add some time to the backup.
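
As a rough illustration of cases 2 and 3 from the question, here is a
minimal configuration sketch, assuming disk-based File devices (resource
names and values are illustrative, not a tested setup):

   # bacula-dir.conf -- the limit must be raised in every resource a job
   # passes through (Director, Storage, Client), not just one of them
   Director {
     ...
     Maximum Concurrent Jobs = 4   # 1 gives case 3; > 1 gives case 1 or 2
   }

   # bacula-sd.conf -- one Device per physical disk, each with its own
   # Media Type, so concurrent jobs write separate volumes to separate
   # disks (case 2)
   Device {
     Name = FileStorage1
     Media Type = File1
     Archive Device = /disk1/bacula
     Random Access = yes
   }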

John



Re: [Bacula-users] performance

2007-09-07 Thread Arno Lehmann
Hi,

07.09.2007 12:52, Gabriele Bulfon wrote:
> 
> haha, mysql should be fast enough on a Sun T2000 with 8 GB RAM... or not?!

I think so... of course you could always find a faster DB machine :-)

Is this the machine the DIR is running on, too? If not, you might 
have the bottleneck in the network connection.

> Anyway, I'm installing the 2.2.2 from scratch on a test machine.
> - Rebuilt mysql 5.0.33 with the thread-safe switch on.
> - Built Bacula with batch-insert on.
> 
> Once I prepared the clean db and everything needed for my existing volumes,
> I ran one job to check the rate.
> What happens is that the bconsole "m" command is throwing out a big 
> amount of debug info about buffer allocation...

Just use 'mes' instead of 'm' - that's for memory usage now :-)

Arno

> So I rebuilt bacula with "disable-smartalloc", but the build failed, 
> full of errors.
> What should I do to get rid of all that debug info?! Why do I need 
> smartalloc on?!
> ...oh, sorry: I'm testing on Solaris 10/amd64 (Sun v20z).
> 
> Thanx a lot
> Gabriele.
> 
> <http://www.sonicle.com>
> Gabriele Bulfon - Sonicle S.r.l.
> Tel +39 028246016 Int. 30 - Fax +39 028243880
> Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
> http://www.sonicle.com
> 
> --
> From: Arno Lehmann <[EMAIL PROTECTED]>
> To: bacula-users@lists.sourceforge.net
> Date: 7 September 2007 12:39:41 CEST
> Subject: Re: [Bacula-users] performance
> 
> Hi,
> 
> 07.09.2007 10:32,, Gabriele Bulfon wrote::
>  >
>  > Hello,
>  > I need to speed up the backup of a machine with a lot of very small
>  > files (600,000 files, about 90 GB).
>  > I have verified that the problem is the MySQL database, which slows
>  > down the backup because it has
>  > to write 600,000 records to the catalog.
>  > I'm thinking of 2 options:
>  > 1- forcing transactions at the start and end of every job (or at
>  > the end of the last job, maybe).
>  > 2- upgrading version 1.38 -> 2.x (latest).
>  >
>  > Questions are:
>  > 1- how can I force bacula to use these transactions?
> 
Don't try this... Bacula was once prepared for this approach, but it
never worked, for serious reasons.
> 
>  > 2- should I expect performance improvements by upgrading?
> 
Definitely. Especially for inserting file records, the new batch insert
code should increase the performance. Note that you'll need MySQL
version 4.1 or later.
> 
> Finally, you forgot one other option: Get a faster database server :-)
> 
> Arno
> 
>  >
>  > Thanx for any help
>  > Gabriele.
>  >
>  > <http://www.sonicle.com>
>  > Gabriele Bulfon - Sonicle S.r.l.
>  > Tel +39 028246016 Int. 30 - Fax +39 028243880
>  > Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
>  > http://www.sonicle.com
>  >
> 
> -- 
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] performance

2007-09-07 Thread Tom Sommer
On Fri, September 7, 2007 12:52, Gabriele Bulfon wrote:
> haha, mysql should be fast enough on a Sun T2000 with 8 GB RAM... or not?!
> Anyway, I'm installing the 2.2.2 from scratch on a test machine.
> - Rebuilt mysql 5.0.33 with the thread-safe switch on.
> - Built Bacula with batch-insert on.
> Once I prepared the clean db and everything needed for my existing volumes,
> I ran one job to check the rate.
> What happens is that the bconsole "m" command is throwing out a big
> amount of debug info about buffer allocation... So I rebuilt
> bacula with "disable-smartalloc", but the build failed,
> full of errors. What should I do to get rid of all
> that debug info?! Why do I need smartalloc on?! ...oh, sorry: I'm
> testing on Solaris 10/amd64 (Sun v20z). Thanx a lot

Hehe, try "mes" in bconsole, "m" is an alias for the "memory" command now
-- confusing, yes.

--
Tom Sommer




Re: [Bacula-users] performance

2007-09-07 Thread Arno Lehmann
Hi,

07.09.2007 10:32, Gabriele Bulfon wrote:
> 
> Hello,
> I need to speed up the backup of a machine with a lot of very small 
> files (600,000 files, about 90 GB).
> I have verified that the problem is the MySQL database, which slows down the 
> backup because it has
> to write 600,000 records to the catalog.
> I'm thinking of 2 options:
> 1- forcing transactions at the start and end of every job (or at the end 
> of the last job, maybe).
> 2- upgrading version 1.38 -> 2.x (latest).
> 
> Questions are:
> 1- how can I force bacula to use these transactions?

Don't try this... Bacula was once prepared for this approach, but it 
never worked, for serious reasons.

> 2- should I expect performance improvements by upgrading?

Definitely. Especially for inserting file records, the new batch insert 
code should increase the performance. Note that you'll need MySQL 
version 4.1 or later.
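
As a sketch, enabling that code at build time looks roughly like this
(paths and version are illustrative; the MySQL client library must be
thread-safe for batch inserts to be enabled):

   cd bacula-2.2.x
   ./configure --with-mysql --enable-batch-insert --prefix=/usr/local/bacula
   make && make install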

Finally, you forgot one other option: Get a faster database server :-)

Arno

> 
> Thanx for any help
> Gabriele.
> 
> 
> Gabriele Bulfon - Sonicle S.r.l.
> Tel +39 028246016 Int. 30 - Fax +39 028243880
> Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
> http://www.sonicle.com
> 
> 
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] performance

2007-09-07 Thread Jason Harley
Gabriele Bulfon wrote, on 07/09/07 06:52 AM:
> haha, mysql should be fast enough on a Sun T2000 with 8 GB RAM... or not?!

It's likely got a lot more to do with your MySQL tuning... I'd recommend 
PostgreSQL for a large database if you really want to see it scale. 
Also, what are prstat, iostat and sar (if you've got it installed and 
running) saying during your poorly performing backups?  Do you have local 
disk contention?  Are your database logs on separate disks from your data?
Is your memory usage what you feel it should be (e.g. do you want MySQL to 
use 4 GB of your system memory if it can)?
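
For example, a quick sampling pass during a slow job could look like this
(standard Solaris tools; the intervals are arbitrary):

   iostat -xn 5      # per-device utilization, queue lengths, service times
   prstat -mL 5      # per-thread microstates (watch the LCK/LAT columns)
   sar -d 5 12       # disk activity history, if the sar facility is enabled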

I love the Niagara platform for databases, but that doesn't instantly 
mean that out of the box it's going to blow your mind...

./JRH




Re: [Bacula-users] performance

2007-09-07 Thread Gabriele Bulfon
haha, mysql should be fast enough on a Sun T2000 with 8 GB RAM... or not?!
Anyway, I'm installing the 2.2.2 from scratch on a test machine.
- Rebuilt mysql 5.0.33 with the thread-safe switch on.
- Built Bacula with batch-insert on.
Once I prepared the clean db and everything needed for my existing volumes,
I ran one job to check the rate.
What happens is that the bconsole "m" command is throwing out a big amount of 
debug info about buffer allocation...
So I rebuilt bacula with "disable-smartalloc", but the build failed, 
full of errors.
What should I do to get rid of all that debug info?! Why do I need smartalloc 
on?!
...oh, sorry: I'm testing on Solaris 10/amd64 (Sun v20z).
Thanx a lot
Gabriele.
Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com
--
From: Arno Lehmann <[EMAIL PROTECTED]>
To: bacula-users@lists.sourceforge.net
Date: 7 September 2007 12:39:41 CEST
Subject: Re: [Bacula-users] performance
Hi,
07.09.2007 10:32, Gabriele Bulfon wrote:
>
> Hello,
> I need to speed up the backup of a machine with a lot of very small
> files (600,000 files, about 90 GB).
> I have verified that the problem is the MySQL database, which slows down the
> backup because it has
> to write 600,000 records to the catalog.
> I'm thinking of 2 options:
> 1- forcing transactions at the start and end of every job (or at the end
> of the last job, maybe).
> 2- upgrading version 1.38 -> 2.x (latest).
>
> Questions are:
> 1- how can I force bacula to use these transactions?
Don't try this... Bacula was once prepared for this approach, but it
never worked, for serious reasons.
> 2- should I expect performance improvements by upgrading?
Definitely. Especially for inserting file records, the new batch insert
code should increase the performance. Note that you'll need MySQL
version 4.1 or later.
Finally, you forgot one other option: Get a faster database server :-)
Arno
>
> Thanx for any help
> Gabriele.
>
> <http://www.sonicle.com>
> Gabriele Bulfon - Sonicle S.r.l.
> Tel +39 028246016 Int. 30 - Fax +39 028243880
> Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
> http://www.sonicle.com
--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de


Re: [Bacula-users] performance

2007-09-07 Thread Marc Cousin
I'd say you'll get the best performance with postgresql right now: batch 
insert was made primarily for it (and uses a special bulk insert 
statement with postgresql).
I guess some optimizations could be done for mysql too, but I don't think 
they've been done so far...

On Friday 07 September 2007 12:52:36 Gabriele Bulfon wrote:
> haha, mysql should be fast enough on a Sun T2000 with 8 GB RAM... or not?!
> Anyway, I'm installing the 2.2.2 from scratch on a test machine.
> - Rebuilt mysql 5.0.33 with the thread-safe switch on.
> - Built Bacula with batch-insert on.
> Once I prepared the clean db and everything needed for my existing volumes,
> I ran one job to check the rate.
> What happens is that the bconsole "m" command is throwing out a big amount
> of debug info about buffer allocation... So I rebuilt bacula
> with "disable-smartalloc", but the build failed,
> full of errors. What should I do to get rid of all
> that debug info?! Why do I need smartalloc on?! ...oh, sorry: I'm testing
> on Solaris 10/amd64 (Sun v20z).
> Thanx a lot
> Gabriele.
> Gabriele Bulfon - Sonicle S.r.l.
> Tel +39 028246016 Int. 30 - Fax +39 028243880
> Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
> http://www.sonicle.com
> --
> From: Arno Lehmann <[EMAIL PROTECTED]>
> To: bacula-users@lists.sourceforge.net
> Date: 7 September 2007 12:39:41 CEST
> Subject: Re: [Bacula-users] performance
> Hi,
>
> 07.09.2007 10:32, Gabriele Bulfon wrote:
> > Hello,
> > I need to speed up the backup of a machine with a lot of very small
> > files (600,000 files, about 90 GB).
> > I have verified that the problem is the MySQL database, which slows down the
> > backup because it has
> > to write 600,000 records to the catalog.
> > I'm thinking of 2 options:
> > 1- forcing transactions at the start and end of every job (or at the end
> > of the last job, maybe).
> > 2- upgrading version 1.38 -> 2.x (latest).
> >
> > Questions are:
> > 1- how can I force bacula to use these transactions?
>
> Don't try this... Bacula was once prepared for this approach, but it
> never worked, for serious reasons.
>
> > 2- should I expect performance improvements by upgrading?
>
> Definitely. Especially for inserting file records, the new batch insert
> code should increase the performance. Note that you'll need MySQL
> version 4.1 or later.
> Finally, you forgot one other option: Get a faster database server :-)
> Arno
>
> > Thanx for any help
> > Gabriele.
> >
> > <http://www.sonicle.com>
> > Gabriele Bulfon - Sonicle S.r.l.
> > Tel +39 028246016 Int. 30 - Fax +39 028243880
> > Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
> > http://www.sonicle.com
> >
> >
>
> --
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de





[Bacula-users] Performance Problem

2007-09-18 Thread Rainer Hackel
Hi all!

I have bacula running (version 2.0.2) and in principle everything works fine. 
But now (reading some mails from the list) I ask myself why the backup speed 
is that slow. On average it's about 1500 kB/s.

The software is running on Fedora. The computer has a fast CPU and 2 GB of 
RAM. No network backups, just local disk. The backup drive is an HP LTO-1.

What backup speed could I expect? How could I find the bottleneck?

Thank you for your assistance.

Rainer




[Bacula-users] Performance Issues

2005-04-25 Thread Andreas Kopecki
Hi!

I am currently evaluating Bacula for our on-site backup. All is running well, 
except I am stumbling over a few performance issues. Our setup is as follows:

Test-Server: Dual PIII 1GHz on a GRAU tape silo, running Bacula 1.36.2 with 
PostgreSQL, spooling on a software RAID-0 of 2x80GB PATA disks.

Test-Client: Dual Opteron with a 1.6TB FibreChannel RAID, XFS-formatted.

Both are connected via Gbit Ethernet; netio shows a transfer rate of 120MB/s. 

Starting a full backup, I am getting about 30-40MB/s as expected, 
but after a few minutes the rate starts to drop significantly, sometimes 
as low as 100kB/s. From time to time, the rate rises again to 
20-30MB/s, but drops again after a short while. Also, the client RAID becomes 
quite unresponsive to other requests. The effects are the same when writing 
directly to tape (which handles about 30MB/s compressed) instead of spooling.
A full backup takes about five days. Currently, we are using Networker via 
NFS, doing the same backup to a slower tape in about 24-36 hours.

There are some threads in the archive mentioning bad performance when using 
postgresql, but the postmaster process is running at only about 20%.

Did somebody already stumble on such performance issues when doing a backup to 
a high-speed tape? Where are the bottlenecks here? Would the performance 
increase by switching from postgresql to mysql? Is it possible that Bacula 
just has problems processing small files?

Regards,
-- 
Andreas KopeckiHigh Performance Computing Center (HLRS)
   Visualisation Department
Tel. ++49-711-6855789  University of Stuttgart,
   Allmandring 30a, D-70550 Stuttgart

[EMAIL PROTECTED]  http://www.hlrs.de/


[Bacula-users] Performance Issues

2012-09-24 Thread Rodrigo Abrantes Antunes

Hi, when restoring, listing files, backing up, purging or pruning, the mysql
process uses 100% CPU, the machine is unusable, and such operations take too
long. Doing some research I found that this can be related to database
indexes, but I didn't understand well what I need to do. Here is the output
of "show index from File":

 | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 | File  |          0 | PRIMARY  |            1 | FileId      | A         |     7924758 |     NULL | NULL   |      | BTREE      |         |
 | File  |          1 | JobId    |            1 | JobId       | A         |         915 |     NULL | NULL   |      | BTREE      |         |
 | File  |          1 | JobId_2  |            1 | JobId       | A         |         915 |     NULL | NULL   |      | BTREE      |         |
 | File  |          1 | JobId_2  |            2 | PathId      | A         |      102918 |     NULL | NULL   |      | BTREE      |         |
 | File  |          1 | JobId_2  |            3 | FilenameId  | A         |     7924758 |     NULL | NULL   |      | BTREE      |         |

 Is this right? If not, what do I need to do to make it right? My server has
2 dual-core CPUs with 16 GB RAM and is dedicated to bacula. RAM usage is
normal.

 Thanks.
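
For comparison, the stock Bacula MySQL schema creates essentially these two
secondary indexes on File, and the output above already shows both of them
(a sketch; only run something like this if SHOW INDEX reported one missing):

   CREATE INDEX JobId ON File (JobId);
   CREATE INDEX JobId_2 ON File (JobId, PathId, FilenameId);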


[Bacula-users] performance issues

2007-12-05 Thread Bob Cregan
Hi
   We have just started to use bacula as our backup solution and are 
suffering some performance problems. The details are:

Server:
Debian 4.0 running bacula 2.2.4; the host is multihomed as it backs up over 
two networks.
Storage is an Overland Arcvault 24 on the same host.
The database is MySQL:


ceres:/usr/local/src/bacula-patches# dpkg --list |grep mysql
rc  bacula-director-mysql 2.0.3-4 
Network backup, recovery and verification (D
ii  libdbd-mysql-perl 3.0008-1A 
Perl5 database interface to the MySQL data
ii  libmysqlclient15-dev  5.0.32-7etch1   
mysql database development files
ii  libmysqlclient15off   5.0.32-7etch1   
mysql database client library
ii  mysql-client  5.0.32-7etch1   
mysql database client (meta package dependin
ii  mysql-client-5.0  5.0.32-7etch1   
mysql database client binaries
ii  mysql-common  5.0.32-7etch1   
mysql database common files (e.g. /etc/mysql
ii  mysql-server-4.1  5.0.32-7etch1   
mysql database server (transitional package)
ii  mysql-server-5.0  5.0.32-7etch1   
mysql database server binaries


The client machine is debian 3.1, again running bacula 2.2.4.
The filesystem is a 2.4 TB LVM volume (underneath are 2x 
RAID-10 arrays); the filesystem is xfs.

The network used in this backup is gigabit and is OK - I ran netperf 
tests and it gave reasonable answers. The client disk is plenty quick 
enough (at least 220MB/s read). The server disk is also OK (the backup 
is spooled to disk) at 50MB/s.

So all the individual parts have the necessary performance, but when I 
run a backup I get a rate of about 770KB/s, which is a bit of a 
showstopper when you have at least 600GB (I split the 2TB volume up) to 
back up. The nature of the data is a mixture, but it does tend to be 
small: PDFs, HTML and other web-type files.
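
One knob I am looking at for this small-file workload is attribute spooling,
so catalog inserts happen in one batch at the end of the job rather than per
file during it. A sketch using the standard directive names (the job name is
made up):

   Job {
     Name = "webdata"
     ...
     Spool Attributes = yes
     Spool Data = yes       # already in use here; shown for completeness
   }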

Details of the software are:
###
Patches: both have the following patches added, which takes them pretty 
close to 2.2.5 I believe:

2.2.4-ansi-label.patch
2.2.4-lost-block.patch
2.2.4-parse-command.patch
2.2.4-poll-mount.patch
2.2.4-replace.patch
2.2.4-restore.patch
2.2.4-sd-auth-fail.patch
2.2.4-sql.patch
2.2.4-verify.patch

client
Configuration on Mon Oct 15 13:05:56 GMT 2007:

  Host:   i686-pc-linux-gnu -- debian 3.1
  Bacula version: 2.2.4 (14 September 2007)
  Source code location:   .
  Install binaries:   /usr/local/bacula-2.2.4/sbin
  Install config files:   /usr/local/bacula-2.2.4/etc
  Scripts directory:  /usr/local/bacula-2.2.4/etc
  Working directory:  /usr/local/bacula-2.2.4/var/bacula/working
  PID directory:  /var/run
  Subsys directory:   /var/run/subsys
  Man directory:  /usr/share/man
  Data directory: /usr/local/bacula-2.2.4/share
  C Compiler: gcc 3.3.5
  C++ Compiler:   /usr/bin/g++ 3.3.5
  Compiler flags:  -g -O2 -Wall -fno-strict-aliasing 
-fno-exceptions -fno-rtti
  Linker flags:-O
  Libraries:  -lpthread
  Statically Linked Tools:no
  Statically Linked FD:   no
  Statically Linked SD:   no
  Statically Linked DIR:  no
  Statically Linked CONS: no
  Database type:  None
  Database lib:
  Database name:  bacula
  Database user:  bacula

  Job Output Email:   [EMAIL PROTECTED]
  Traceback Email:[EMAIL PROTECTED]
  SMTP Host Address:  localhost

  Director Port:  9101
  File daemon Port:   9102
  Storage daemon Port:9103

  Director User:
  Director Group:
##
Server

  Host:   i686-pc-linux-gnu -- debian 4.0
  Bacula version: 2.2.4 (14 September 2007)
  Source code location:   .
  Install binaries:   /usr/local/bacula-2.2.4/sbin
  Install config files:   /usr/local/bacula-2.2.4/etc
  Scripts directory:  /usr/local/bacula-2.2.4/etc
  Working directory:  /usr/local/bacula-2.2.4/var/bacula/working
  PID directory:  /var/run
  Subsys directory:   /var/run/subsys
  Man directory:  /usr/share/man
  Data directory: /usr/local/bacula-2.2.4/share
  C Compiler: gcc 4.1.2
  C++ Compiler:   /usr/bin/g++-4.1 4.1.2
  Compiler flags:  -g -O2 -Wall -fno-strict-aliasing 
-fno-exceptions -fno-rtti
  Linker flags:-O
  Libraries:  -lpthread -ldl
  Statically Linked

[Bacula-users] Performance problems

2006-11-15 Thread Manuel Staechele
Hello,

i want restore a simple file from a FULL job which is not that big.
and it took 10 hours to build the directory-tree.

job informations:
Type | Level | JobFiles | JobBytes| JobStatus |
B| F |  913,065 |  17,818,106,395 | T

--

I have already checked whether all the recommended indexes are present in 
the databases; they are all there.

database: mysql
bacula-director and bacula-sd are not on the same server

-

Are there any hints to improve this situation?

Thanks for any suggestions and ideas!

Greetings from Grenzach in Germany,
Manuel Staechele






Re: [Bacula-users] Performance

2011-07-25 Thread Rickifer Barros
I forgot to say that the files I backed up are stored locally on the Bacula Server.

On Mon, Jul 25, 2011 at 11:48 AM, Rickifer Barros
wrote:

> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes, at a rate of 8.27 MB/s.
>
> My Bacula Server is installed on an IBM server connected to an LTO-4 tape
> drive (120 MB/s) via a SAS connection (3 Gb/s).
>
> I'm using Encryption and Compression GZIP6.
>
> I think that this rate (8.27 MB/s) is far too slow for such a configuration.
>
> I'd like to know what rate you are getting using Encryption and
> Compression, and what I should do to increase Bacula's performance.
>
> Thanks
>
> Rickifer Barros.
>


Re: [Bacula-users] Performance

2011-07-25 Thread John Drescher
2011/7/25 Rickifer Barros :
> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes, at a rate of 8.27 MB/s.
>
> My Bacula Server is installed on an IBM server connected to an LTO-4 tape
> drive (120 MB/s) via a SAS connection (3 Gb/s).
>
> I'm using Encryption and Compression GZIP6.

Disable software compression. The tape drive will compress much faster
than the client.
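
A hypothetical FileSet with software compression removed (standard syntax;
the name and path are illustrative), leaving compression to the drive:

   FileSet {
     Name = "Full Set"
     Include {
       Options {
         signature = MD5
         # no "compression = GZIP" line -- let the LTO-4 hardware compress
       }
       File = /data
     }
   }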

John



Re: [Bacula-users] Performance

2011-07-25 Thread Rickifer Barros
I did this before... but I didn't know how to check in Debian whether the data
really is being compressed by the tape drive. The only thing I got was the
Bacula information about the SD and FD bytes written, and the "mt" command in
Linux doesn't tell me the real data size on the volume, so I chose to trust
the software compression... YEAH, I'm a noob at tape manipulation...  :t

On Mon, Jul 25, 2011 at 11:57 AM, John Drescher wrote:

> 2011/7/25 Rickifer Barros :
> > Hello Guys...
> >
> > This weekend I did a backup with a size of 41.92 GB that took 1 hour and
> 24
> > minutes with a rate of 8.27 MB/s.
> >
> > My Bacula Server is installed in a IBM server connected in a Tape Drive
> LTO4
> > (120 MB/s) via SAS connection (3 Gb/s).
> >
> > I'm using Encryption and Compression Gzip6.
>
> Disable software compression. The tape drive will compress much faster
> than the client.
>
> John
>


Re: [Bacula-users] Performance

2011-07-25 Thread John Drescher
On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
 wrote:
> I did this beforebut, I didn't know how check in Debian if it really is
> being compressed by the tape drive. The only thing that I got was the bacula
> information about the SD and FD Written and the "mt" command in Linux don't
> say me the real data size of the volume, so I chose to trust on the software
> compression...YEAH I'm noob on Tape manipulation...  :t
>

Just fill a tape with Bacula, then look at its status: Bacula will tell
you how much data has been written to the tape. If this amount is greater
than the tape's native size, compression is on. Remember the tape drive
compresses data at 120 MB/s, while with software compression you will be
lucky to get 1/10 of that even with a 4 GHz processor.

John



Re: [Bacula-users] Performance

2011-07-25 Thread Rickifer Barros
OK John...I'll test it.

On Mon, Jul 25, 2011 at 12:13 PM, John Drescher wrote:

> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
>  wrote:
> > I did this beforebut, I didn't know how check in Debian if it really
> is
> > being compressed by the tape drive. The only thing that I got was the
> bacula
> > information about the SD and FD Written and the "mt" command in Linux
> don't
> > say me the real data size of the volume, so I chose to trust on the
> software
> > compression...YEAH I'm noob on Tape manipulation...  :t
> >
>
> Just fill a tape with Bacula, then look at its status: Bacula will tell
> you how much data has been written to the tape. If this amount is greater
> than the tape's native size, compression is on. Remember the tape drive
> compresses data at 120 MB/s, while with software compression you will be
> lucky to get 1/10 of that even with a 4 GHz processor.
>
> John
>


Re: [Bacula-users] Performance

2011-07-25 Thread Phil Stracchino
On 07/25/11 11:13, John Drescher wrote:
> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
>  wrote:
>> I did this beforebut, I didn't know how check in Debian if it really is
>> being compressed by the tape drive. The only thing that I got was the bacula
>> information about the SD and FD Written and the "mt" command in Linux don't
>> say me the real data size of the volume, so I chose to trust on the software
>> compression...YEAH I'm noob on Tape manipulation...  :t
>>
> 
> Just fill a tape with Bacula, then look at its status: Bacula will tell
> you how much data has been written to the tape. If this amount is greater
> than the tape's native size, compression is on. Remember the tape drive
> compresses data at 120 MB/s, while with software compression you will be
> lucky to get 1/10 of that even with a 4 GHz processor.

mt can both get and set options including compression.  It's all in the
man page.
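
For instance, on Linux with the mt-st package (device name illustrative):

   mt -f /dev/nst0 status           # drive and tape status
   mt -f /dev/nst0 datcompression   # query the drive's compression setting
   mt -f /dev/nst0 compression 1    # turn drive compression on (0 = off)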


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Performance

2011-07-25 Thread James Harper
> 2011/7/25 Rickifer Barros :
> > Hello Guys...
> >
> > This weekend I did a backup with a size of 41.92 GB that took 1 hour
and 24
> > minutes with a rate of 8.27 MB/s.
> >
> > My Bacula Server is installed in a IBM server connected in a Tape
Drive LTO4
> > (120 MB/s) via SAS connection (3 Gb/s).
> >
> > I'm using Encryption and Compression Gzip6.
> 
> Disable software compression. The tape drive will compress much faster
> than the client.
> 

If you can find compressible patterns in the encrypted data stream then
you are not properly encrypting it. The only option would be to compress
before encryption, which means you can't use the compression function in
the tape drive unless the tape drive also does the encryption (some do).

Use a lower GZIP compression level to see if it gets you better speed
without sacrificing too much compression... I suspect the speed hit is
going to be the encryption though.
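
A hypothetical Options tweak in the FileSet (standard syntax; plain "GZIP"
means level 6, GZIP1 is the fastest and lightest level):

   Options {
     signature = MD5
     compression = GZIP1   # trade compression ratio for CPU time
   }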

James




Re: [Bacula-users] Performance

2011-07-26 Thread Steve Ellis
On 7/25/2011 6:14 PM, James Harper wrote:
>> 2011/7/25 Rickifer Barros:
>>> Hello Guys...
>>>
>>> This weekend I did a backup with a size of 41.92 GB that took 1 hour
> and 24
>>> minutes with a rate of 8.27 MB/s.
>>>
>>> My Bacula Server is installed in a IBM server connected in a Tape
> Drive LTO4
>>> (120 MB/s) via SAS connection (3 Gb/s).
>>>
>>> I'm using Encryption and Compression Gzip6.
>> Disable software compression. The tape drive will compress much faster
>> than the client.
>>
> If you can find compressible patterns in the encrypted data stream then
> you are not properly encrypting it. The only option would be to compress
> before encryption which means you can't use the compression function in
> the tape drive unless the tape drive also does the encryption (some do).
>
> Use a lower GZIP compression level to see if it gets you better speed
> without sacrificing too much performance... I suspect the speed hit is
> going to be the encryption though.
>
> James
>
I was under the impression that _all_ LTO4 drives implemented encryption 
(though if having the data encrypted while traversing the LAN is your goal, 
you'd still have to do something).  I don't know enough about it to know 
how good the encryption in LTO4 is, however (or for that matter, how the 
key is specified).

Both encryption and compression in software are going to be much slower than 
the tape drive can do them (which is why LTO4 required hardware support, as I 
understood).  Another point: even with your current config, if you 
aren't doing data spooling you are probably slowing things down further, 
as well as wearing out both the tapes and heads on the drive with lots 
of shoeshining.

-se



Re: [Bacula-users] Performance

2011-07-26 Thread James Harper
> >> Disable software compression. The tape drive will compress much
> >> faster than the client.
> >>
> > If you can find compressible patterns in the encrypted data stream
> > then you are not properly encrypting it. The only option would be to
> > compress before encryption, which means you can't use the compression
> > function in the tape drive unless the tape drive also does the
> > encryption (some do).
> >
> > Use a lower GZIP compression level to see if it gets you better
> > speed without sacrificing too much compression... I suspect the speed
> > hit is going to be the encryption though.
>
> I was under the impression that _all_ LTO4 drives implemented
> encryption (though if having the data encrypted while traversing the
> LAN is your goal, you'd still have to do something).  I don't know
> enough about it to know how good the encryption in LTO4 is, however
> (or for that matter, how the key is specified).
> 

I'm pretty sure that LTO4 drives are required to identify an encrypted
tape if one is inserted, but the actual support for encryption is
optional. I think they use AES encryption or some variant of it.

James



Re: [Bacula-users] Performance

2011-07-26 Thread Jeremy Maes
>> I was under the impression that _all_ LTO4 drives implemented encryption
>> (though if having the data traversing the LAN encrypted is your goal,
>> you'd still have to do something).  I don't know enough about it to know
>> how good the encryption in LTO4 is, however (or for that matter, how the
>> key is specified).
>>
>> Both encryption and compression in SW are going to be much slower than
>> the tape drive could do it (which is why LTO4 required it, as I
>> understood).  Another point, even with your current config, if you
>> aren't doing data spooling you are probably slowing things down further,
>> as well as wearing out both the tapes and heads on the drive with lots
>> of shoeshining.
>
> I'm pretty sure that LTO4 drives are required to identify an encrypted
> tape if one is inserted, but the actual support for encryption is
> optional. I think they use AES encryption or some variant of it.
>
> James

Regarding encryption on LTO-4 drives, I'm not 100% sure about the 
requirement for tape drive manufacturers to add it to the drives.
I do know, though, from discussions on this list in the past (and some 
search work myself) that activating encryption on such a drive is not 
something that you will be able to do with bacula.

The drive itself requires the software that operates it to manage the 
keys, and to set the keys and encryption settings via SCSI commands. From 
what I know there are only a few backup suites today that are able to do 
this, and they all cost quite a bit :)
So unless you can code the required stuff for bacula yourself, or are 
looking to change to a paying solution, hardware encryption will not be 
an option unfortunately.

Kind regards,
Jeremy



Re: [Bacula-users] Performance

2011-07-26 Thread Konstantin Khomoutov
On Tue, 26 Jul 2011 00:18:05 -0700
Steve Ellis  wrote:

[...]
> Another point, even with your current config, if you 
> aren't doing data spooling you are probably slowing things down
> further, as well as wearing out both the tapes and heads on the drive
> with lots of shoeshining.
(I'm asking as a person having almost zero prior experience with tape
drives for backup purposes.)

Among other things, I'm doing full backups of a set of machines to a
single tape--yes, a full backup each time, no incremental/differential--
which means I supposedly have just straightforward data flows from
the FDs to the SD.  At present I have max concurrent jobs set to 1
on my tape drive resource and no data spooling turned on.
Would I benefit from enabling data spooling in this scenario?

To present some numbers, each machine's data is about 50-80G and I can
use about 200G for the spool directory which means I could do spooling
for 3-4 jobs in parallel (as described in [1]).
Would that improve tape usage pattern?

1. http://www.bacula.org/en/dev-manual/main/main/Data_Spooling.html



Re: [Bacula-users] Performance

2011-07-26 Thread Rickifer Barros
I disabled the Compression and my speed rate boosted from 8.2 MB/s to 40.8
MB/s, but I'm still using Bacula encryption.

On Mon, Jul 25, 2011 at 10:14 PM, James Harper <
james.har...@bendigoit.com.au> wrote:

> > 2011/7/25 Rickifer Barros :
> > > Hello Guys...
> > >
> > > This weekend I did a backup with a size of 41.92 GB that took 1 hour
> and 24
> > > minutes with a rate of 8.27 MB/s.
> > >
> > > My Bacula Server is installed in a IBM server connected in a Tape
> Drive LTO4
> > > (120 MB/s) via SAS connection (3 Gb/s).
> > >
> > > I'm using Encryption and Compression Gzip6.
> >
> > Disable software compression. The tape drive will compress much faster
> > than the client.
> >
>
> If you can find compressible patterns in the encrypted data stream then
> you are not properly encrypting it. The only option would be to compress
> before encryption which means you can't use the compression function in
> the tape drive unless the tape drive also does the encryption (some do).
>
> Use a lower GZIP compression level to see if it gets you better speed
> without sacrificing too much performance... I suspect the speed hit is
> going to be the encryption though.
>
> James
>
>


Re: [Bacula-users] Performance

2011-07-26 Thread Steve Ellis
On 7/26/2011 5:04 AM, Konstantin Khomoutov wrote:
> On Tue, 26 Jul 2011 00:18:05 -0700
> Steve Ellis  wrote:
>
> [...]
>> Another point, even with your current config, if you
>> aren't doing data spooling you are probably slowing things down
>> further, as well as wearing out both the tapes and heads on the drive
>> with lots of shoeshining.
> (I'm asking as a person having almost zero prior experience with tape
> drives for backup purposes.)
>
> Among other things, I'm doing full backups of a set of machines to a
> single tape--yes, full backup each time, no incremental/differential
> which means I supposedly have just straightforward data flows from
> FDs to the SD.  At present time I have max concurrent jobs set to 1
> on my tape drive resource and no data spooling turned on.
> Would I benefit from enabling data spooling in this scenario?
>
> To present some numbers, each machine's data is about 50-80G and I can
> use about 200G for the spool directory which means I could do spooling
> for 3-4 jobs in parallel (as described in [1]).
> Would that improve tape usage pattern?
>
> 1. http://www.bacula.org/en/dev-manual/main/main/Data_Spooling.html
>
>
OK, perhaps I'm not the best person to ask, but here's what I do know:

Even with only 1 job at a time, if you aren't able to deliver data to 
the drive at its minimum streaming data rate (for LTO4, probably at 
least 40MB/sec--possibly varies by manufacturer), then the tape 
mechanism will have to stop, go back a bit, wait for more data, then 
start up again--all of this takes time, and increases wear on the tapes 
and drive heads.  If you enable data spooling when you can't keep up 
with the drive anyway, even with a fairly modest spool size of 10-20G 
per job, I believe you will find that your backups will at least not be 
slower, and may well proceed faster, even with the overhead of spooling 
(assuming that your spool disk(s) are able to send data to the drive 
fast enough to hit near the maximum rate the drive can accept).  If you 
are using concurrent jobs, there is a further benefit:  the data for all 
jobs won't be completely shuffled on the tape.  If I recall, data 
spooling in bacula implicitly turns on attribute spooling, which can 
also help, I believe, if there are lots of small files in your backup.

You don't have to spool an entire job in order to take advantage of 
spooling--and with multiple concurrent jobs, while one is despooling 
others can be spooling (have to watch out for whether your spool area 
can keep up with all the writes and reads, though).
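
A sketch of the relevant directives, using the sizes from the question above
(directive names are standard; the values and path are illustrative):

   # bacula-sd.conf
   Device {
     Name = LTO4
     ...
     Spool Directory = /spool/bacula
     Maximum Spool Size = 200G      # total across all concurrent jobs
     Maximum Job Spool Size = 20G   # despool in chunks well before job end
   }

   # bacula-dir.conf, per Job (or in JobDefs):
   #   Spool Data = yes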

I'm still on LTO3, but I believe that some people advocate RAID0 for 
spool disks for LTO4.  I'm using an otherwise completely idle single 
drive for spooling 3 concurrent jobs and as far as I've noticed, I'm 
able to stream data to the drive at a rate it is happy with (again to LTO3).

I hope this helps,

-se



Re: [Bacula-users] Performance

2011-07-27 Thread Konstantin Khomoutov
On Tue, 26 Jul 2011 06:18:25 -0700
Steve Ellis  wrote:

> >> Another point, even with your current config, if you
> >> aren't doing data spooling you are probably slowing things down
> >> further, as well as wearing out both the tapes and heads on the
> >> drive with lots of shoeshining.
> > (I'm asking as a person having almost zero prior experience with
> > tape drives for backup purposes.)
> >
> > Among other things, I'm doing full backups of a set of machines to a
> > single tape--yes, full backup each time, no incremental/differential
> > which means I supposedly have just straightforward data flows from
> > FDs to the SD.  At present time I have max concurrent jobs set to 1
> > on my tape drive resource and no data spooling turned on.
> > Would I benefit from enabling data spooling in this scenario?
[...]
> OK, perhaps I'm not the best person to ask, but here's what I do know:
> 
> Even with only 1 job at a time, if you aren't able to deliver data to 
> the drive at its minimum streaming data rate (for LTO4, probably at 
> least 40MB/sec--possibly varies by manufacturer), then the tape 
> mechanism will have to stop, go back a bit, wait for more data, then 
> start up again--all of this takes time, and increases wear on the
> tapes and drive heads.
[...]
> I hope this helps,

Apart from giving me knowledge of these mechanics, your message
made me sit down and take some measurements.  I discovered that my LTO-4
drive is actually faster to write (up to 10 times) than my local
filesystems are to read (and I was absolutely sure it was the other way
round).  We're now considering implementing a RAID-0 for data spooling.
Hence, thanks!



[Bacula-users] performance problem

2011-08-03 Thread Jeff Shanholtz
I currently have 3 clients doing a full backup (simultaneously). According
to "status client", one is getting 300 kB/s (this one is my director and
storage server machine), one is getting 225 kB/s, and one is getting 50 kB/s.
I've disabled AV on-access scanning for the bacula-fd.exe process. I have
software compression enabled, but none of the 3 systems seems bogged down, so
I think the bottleneck is not due to that option (although I'm tempted to
turn on NTFS compression on the backup drive and disable software
compression in the future).

 

For the most part I don't mind too much that the backups are so slow, because
I'm quite happy to see the client machines remain quite snappy.
The main concern, particularly for full backups (which at this rate will
take upwards of 3 days to complete), is the possibility of a system going
offline and thus killing the backup (or will it pick up where it left off,
as long as I have FD Connect Timeout configured to be longer than a system
would typically be offline for, e.g. 12 hours?).
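
For reference, a sketch of where that timeout lives (a standard Director
resource directive; the value is just the one mentioned above):

   # bacula-dir.conf
   Director {
     ...
     FD Connect Timeout = 12 hours
   }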

 

So what else could be coming into play with my poor performance? Are
simultaneous backups problematic performance-wise? I've watched the I/O
activity of bacula-sd.exe and it certainly doesn't seem to be maxed out. It
is using an external USB 2.0 hard drive. Could the difference between USB 2.0
and eSATA be the key? I can connect it via eSATA if I really need to. It seems
like USB 2.0 should allow substantially more than the roughly 600 kB/s
overall speed I'm getting, though.

 

I'm running all Windows Bacula binaries, version 3.0.3. I'm not sure posting
config files is necessary at this point, although I'm happy to do so if
needed.

--
BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA
The must-attend event for mobile developers. Connect with experts. 
Get tools for creating Super Apps. See the latest technologies.
Sessions, hands-on labs, demos & much more. Register early & save!
http://p.sf.net/sfu/rim-blackberry-1___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Problem

2007-09-18 Thread John Drescher
> I have bacula running (version 2.0.2) and in principle everything works
> fine. But now (reading some mails from the list) I ask myself why the
> backup-speed is that slow. On average it's about 1500 kb/s.
>
Is this an incremental or Differential backup?

John

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Problem

2007-09-18 Thread Bill Moran
In response to "Rainer Hackel" <[EMAIL PROTECTED]>:

> Hi all!
> 
> I have bacula running (version 2.0.2) and in principle everything works
> fine. But now (reading some mails from the list) I ask myself why the
> backup-speed is that slow. On average it's about 1500 kb/s.
> 
> The software is running on fedora. The computer has a fast CPU and 2GB of
> RAM. No network backups, just local disk. The backup drive is an HP LTO-1.
> 
> What backup speed could I expect? How could I find the bottleneck?

How fast are your disks/tapes?  Are you backing up to tape or disk?
Try some dd tests to see how fast you can transfer data raw.  Use
tar going from disk to tape to see how fast that runs, and/or use
dd going from disk to disk.
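
Something along these lines (device names and paths are examples,
substitute your own):

# raw read speed of the source disk
dd if=/dev/sda of=/dev/null bs=1M count=2048

# raw write speed of the tape (note: /dev/zero compresses perfectly, so
# this flatters drives with hardware compression turned on)
dd if=/dev/zero of=/dev/nst0 bs=64k count=32768

# disk-to-tape through tar, closer to a real backup workload
time tar cf /dev/nst0 /home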

Frequently, in my experience, otherwise "fast" computers have slow
(but reliable) hard drives in them.  Since RAM is so cheap, you
usually don't notice this until you're moving _lots_ of data around.

From there, you have to take into account that the DBMS has to write
the catalog records, so you could run some tests to see how fast it
can write new records to see if that's slowing you down.

In my experience, CPU/memory are usually not the bottleneck when
backups are running.

-- 
Bill Moran
http://www.potentialtech.com

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Problem

2007-09-18 Thread Michel Meyers
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello,

Rainer Hackel wrote:
> I have bacula running (version 2.0.2) and in principle everything works
> fine.

I feel obliged to warn you about that version:
http://www.bacula.org/downloads/bug-395.txt

You should upgrade to 2.2.4 as soon as possible. On a sidenote, 2.2.4
has some performance improvements in the way it inserts data into SQL
(batch inserts) but will not work on very ancient MySQL versions for
example.

Greetings,
  Michel
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)

iD8DBQFG7/wq2Vs+MkscAyURApRHAKCKwTlA/rSdYMow8zBSB9GN03TgtACfbHu6
DiqG4c+wOFWl4MwWm5sEsEQ=
=1YrL
-END PGP SIGNATURE-

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Problem

2007-09-18 Thread David Blewett
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

John Drescher wrote:
>> I have bacula running (version 2.0.2) and in principle everything works
>> fine. But now (reading some mails from the list) I ask myself why the
>> backup-speed is that slow. On average it's about 1500 kb/s.

We are using an HP StorageWorks Ultrium 215 (LTO1) here. Using btape's
test, I usually get 7500KB/s. When backing up local disks or some of our
faster boxes over the network, I usually get 10MB/s.
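
For anyone who hasn't run it, the invocation is roughly the following
(config path and device are whatever your bacula-sd.conf defines):

btape -c /etc/bacula/bacula-sd.conf /dev/nst0
# then, at the btape prompt, type:
test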

David Blewett
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFG7/geZmlc6wNjtLYRAlsOAJ92+2hmhZcngMs7sUo7/hDGw6LP2gCgrUST
VOstFpQH3c5QhY8kF1P1nH0=
=S5lc
-END PGP SIGNATURE-

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2005-04-25 Thread Alan Brown
On Mon, 25 Apr 2005, Andreas Kopecki wrote:
> Test-Server: Dual PIII 1GHz on a GRAU Tape Silo running Bacula Postgresql
> 1.36.2 spooling on a software Raid-0 2x80GB PATA Disks.

How much ram?

AB

---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2005-04-25 Thread Andreas Kopecki
Hi!

On Monday 25 April 2005 12:24, Alan Brown wrote:
> On Mon, 25 Apr 2005, Andreas Kopecki wrote:
> > Test-Server: Dual PIII 1GHz on a GRAU Tape Silo running Bacula Postgresql
> > 1.36.2 spooling on a software Raid-0 2x80GB PATA Disks.
>
> How much ram?

2Gig, I think that should be sufficient?

-- 
Andreas KopeckiHigh Performance Computing Center (HLRS)
   Visualisation Department
Tel. ++49-711-6855789  University of Stuttgart,
   Allmandring 30a, D-70550 Stuttgart

[EMAIL PROTECTED]  http://www.hlrs.de/
-



---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2005-04-25 Thread Alan Brown
On Mon, 25 Apr 2005, Andreas Kopecki wrote:
>>> Test-Server: Dual PIII 1GHz on a GRAU Tape Silo running Bacula Postgresql
>>> 1.36.2 spooling on a software Raid-0 2x80GB PATA Disks.
>> How much ram?
> 2Gig, I think that should be sufficient?

More than enough, but you'll need to ensure postgres is actually using it.
How big is its memory footprint?

---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2005-04-25 Thread Andreas Kopecki
Hi!

On Monday 25 April 2005 12:34, Alan Brown wrote:
> On Mon, 25 Apr 2005, Andreas Kopecki wrote:
> >>> Test-Server: Dual PIII 1GHz on a GRAU Tape Silo running Bacula
> >>> Postgresql 1.36.2 spooling on a software Raid-0 2x80GB PATA Disks.
> >>
> >> How much ram?
> >
> > 2Gig, I think that should be sufficient?
>
> More than enough, but you'll ensure postgres is actually using it.
> How big is its memory footprint?
 
postgres  19292 /usr/bin/postmaster -D /var/lib/pgsql/data
postgres  10088 postgres: stats buffer process
postgres   9096 postgres: stats collector process
postgres  20648 postgres: bacula bacula [local] idle

Sizes are in VIRT; RES is a little bit lower. As I am not very experienced
with postgresql: are there any settings you recommend tuning?

-- 
Andreas KopeckiHigh Performance Computing Center (HLRS)
   Visualisation Department
Tel. ++49-711-6855789  University of Stuttgart,
   Allmandring 30a, D-70550 Stuttgart

[EMAIL PROTECTED]  http://www.hlrs.de/
-



---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2005-04-26 Thread Alan Brown
On Mon, 25 Apr 2005, Andreas Kopecki wrote:
>> More than enough, but you'll need to ensure postgres is actually using it.
>> How big is its memory footprint?
>
> postgres  19292 /usr/bin/postmaster -D /var/lib/pgsql/data
> postgres  10088 postgres: stats buffer process
> postgres   9096 postgres: stats collector process
> postgres  20648 postgres: bacula bacula [local] idle
>
> Sizes are in VIRT; RES is a little bit lower. As I am not very experienced
> with postgresql: any settings you recommend to tune?

It seems small. I am using MySQL and there has been some discussion of
tuning that here recently, but not much about Postgresql.


---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance Issues

2012-10-01 Thread lst_hoe02

Zitat von Rodrigo Abrantes Antunes :

> Hi, when restoring, listing files, backing up, purging or pruning, the mysql
> process uses 100% CPU and the machine is unusable, and such operations last
> too long. Doing some research I found that this can be related to database
> indexes, but I didn't understand well what I need to do. Here is the output
> of "show index from File":
>
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | File  |          0 | PRIMARY  |            1 | FileId      | A         |     7924758 |     NULL | NULL   |      | BTREE      |         |
> | File  |          1 | JobId    |            1 | JobId       | A         |         915 |     NULL | NULL   |      | BTREE      |         |
> | File  |          1 | JobId_2  |            1 | JobId       | A         |         915 |     NULL | NULL   |      | BTREE      |         |
> | File  |          1 | JobId_2  |            2 | PathId      | A         |      102918 |     NULL | NULL   |      | BTREE      |         |
> | File  |          1 | JobId_2  |            3 | FilenameId  | A         |     7924758 |     NULL | NULL   |      | BTREE      |         |
> +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
>
> Is this right? If not, what do I need to do to make it right? My server has
> 2 dual-core CPUs with 16gb ram and is dedicated to bacula. RAM usage is
> normal.
>
> Thanks.

Hello

Bacula uses the database for meta-data operations, which can lead to
high insert rates and scans of large tables. Databases are sensitive
to I/O performance, so you should check whether your I/O is up to the
task and whether MySQL is tuned for this kind of usage.
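
A quick sketch of how to check both (the buffer pool value is purely
illustrative, not a recommendation for your data set):

# watch disk utilization and await times while a prune or restore runs
iostat -x 5

# my.cnf: if the catalog tables are InnoDB, size the buffer pool to hold
# the hot indexes; for MyISAM the equivalent knob is key_buffer_size
innodb_buffer_pool_size = 8G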

Regards

Andreas



--
Got visibility?
Most devs has no idea what their production app looks like.
Find out how fast your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219671;13503038;y?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance issues

2007-12-13 Thread Jeronimo Zucco
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Bob Cregan wrote:
> Hi We have just started to use bacula as our backup solution and
> are suffering some performance problems. The details are
>
> Server: Debian 4.0 running bacula 2.2.4 is multihomed as it  backs
> up over two networks. Storage is Overland Arcvault24 on same host
> Database is mysql:

May be the problem is mysql configuration. Make a test with
my-medium.cnf or  my-large.cn (included in mysql-server package) in
your my.cnf

- --
Jeronimo Zucco
LPIC-1 Linux Professional Institute Certified
Núcleo de Processamento de Dados
Universidade de Caxias do Sul

http://jczucco.blogspot.com

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHYWaTWi/PuDd2cZARAtliAJ0doRFM+Lxbte+dSltJGBgzLUTR6QCcDlWd
QPzZDDY43h4K8DJWcpxHatw=
=0zLo
-END PGP SIGNATURE-


-
SF.Net email is sponsored by:
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services
for just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance problems

2006-11-15 Thread John Drescher
On 11/15/06, Manuel Staechele <[EMAIL PROTECTED]> wrote:
> Hello,
>
> i want to restore a simple file from a FULL job which is not that big,
> and it took 10 hours to build the directory-tree.
>
> job informations:
> Type | Level | JobFiles | JobBytes        | JobStatus |
> B    | F     |  913,065 |  17,818,106,395 | T
>
> i have already checked if there are all recommended indexes in the
> databases. but they are all there.

Usually this is a problem with the database but I see that
you have checked the indexes. I have a few questions. Is the pc with
the database installed recent? What version of bacula are you running?
Did the hard drive thrash continuously for the time it was building the
tree? What was your cpu load during the tree build?

> database: mysql
> bacula-director and bacula-sd are not on the same server

This should only make a difference for the actual restore and not the
build tree step.

John
-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance problems

2006-11-15 Thread Manuel Staechele
John Drescher schrieb:
> 
> 
> On 11/15/06, *Manuel Staechele* <[EMAIL PROTECTED] 
> > wrote:
> 
> Hello,
> 
> i want restore a simple file from a FULL job which is not that big.
> and it took 10 hours to build the directory-tree.
> 
> job informations:
> Type | Level | JobFiles | JobBytes| JobStatus |
> B| F |  913,065 |  17,818,106,395 | T
> 
> --
> 
> i have already checked if there are all recomented indexes in the
> databases. but they are all there.
> 
> 
> 
> Usually this is a problem with the database but I see that you have 
> checked the indexes. I have a few questions. Is the pc with the database 
> installed recent? What version of bacula are you running? Did the hard 
> drive thrash continuously for the time it was building the tree? What 
> was your cpu load during the tree build?

 > Is the pc with the database installed recent?
no, actually 2-3 years old I think

 > What version of bacula are you running?
Version: 1.38.11 (28 June 2006)

 > Did the hard drive thrash continuously for the time it was building 
the tree?
could be, I do not know exactly

 > What was your cpu load during the tree build?
it goes from an average of 5-10% up to 50-90%, so really high.
could that be the reason?

thanks for your effort

manuel


> 
> database: mysql
> bacula-director and bacula-sd are not on the same server
> 
> 
> This should only make a difference for the actual restore and not the 
> build tree step.
> 
> John
> 
> 
> 

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance problem

2011-08-04 Thread Jeff Shanholtz
FWIW the backups sped up considerably and finished after 1.5 days at an
overall transfer rate of about 1.5 MB/s. I'm really not sure what caused the
slowdown yesterday but the eventual speed up seems to imply an environmental
state on the machines that went away. I checked to see if the AV software
was doing full system scans, but it wasn't. Unless anyone has ideas on what
might have been causing the slowdown, I will consider this issue (more or
less) resolved. Thanks anyway.

 

From: Jeff Shanholtz [mailto:jeffs...@shanholtz.com] 
Sent: Wednesday, August 03, 2011 5:31 PM
To: Bacula-users@lists.sourceforge.net
Subject: [Bacula-users] performance problem

 

I currently have 3 clients doing a full backup (simultaneously). According
to "status client" one is getting 300kb/s (this one is my director and
storage server machine), one is getting 225kb/s, and one is getting 50kb/s.
I've disabled AV on-access scanning for the bacula-fd.exe process. I have
software compression enabled, but none of the 3 systems seem bogged down so
I think the bottleneck is not due to that option (although I'm tempted to
turn on ntfs compression on the backup drive and disable software
compression in the future).

 

For the most part I don't mind too much that the backups are so slow because
I'm quite happy to see the client machines continuing to be quite snappy.
The main concern, particularly for full backups (which at this rate will
take upwards of 3 days to complete), is the possibility of a system going
offline and thus killing the backup (or will it pick up where it left off,
as long as I have FD Connect Timeout configured to be longer than a system
would typically be offline for, e.g. 12 hours?).

 

So what else could be coming into play with my poor performance? Are
simultaneous backups problematic performance-wise? I've watched the I/O
activity of bacula-sd.exe and it certainly doesn't seem to be maxed out. It
is using an external USB2 hard drive. Could the difference between USB2 and
eSATA be the key? I can connect them as eSATA if I really need to. Seems
like USB2 should be allowing substantially more than the roughly 600kb/s
overall speed I'm getting though.

 

I'm running all Windows Bacula binaries, version 3.0.3. I'm not sure posting
config files is necessary at this point, although I'm happy to do so if
needed.

--
BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA
The must-attend event for mobile developers. Connect with experts. 
Get tools for creating Super Apps. See the latest technologies.
Sessions, hands-on labs, demos & much more. Register early & save!
http://p.sf.net/sfu/rim-blackberry-1___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] performance problem on windows

2005-07-13 Thread Carsten Schurig

Hi,

I installed bacula 1.36.3 to back up two Linux servers and one Windows 2K
server. It seems to work, almost: the backup from the windows machine
is very slow.

The backup of the Linux servers runs with about 800 kBytes/s (DDS-3
tapes), but the Windows server just returns about 100 kB/s, which is
much too slow to backup 15 GB!

What I don't understand at all is that in some of the test runs I did
reach these 800 kB/s for the Windows machine as well, and I can't
remember what I changed.

Do you have any idea what I could look for? Are there some settings in
the fileset definitions which reduce performance on a Windows machine
this tremendously?

Thanks,
Carsten



---
This SF.Net email is sponsored by the 'Do More With Dual!' webinar happening
July 14 at 8am PDT/11am EDT. We invite you to explore the latest in dual
core and dual graphics technology at this free one hour event hosted by HP, 
AMD, and NVIDIA.  To register visit http://www.hp.com/go/dualwebinar

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] performance problem - Windows & TLS

2011-08-23 Thread kamilfurman
Hello

After enabling TLS, I've noticed a significant performance drop.
I've made some tests for both Linux (Fedora 13) and Windows XP clients.
I've used a 250MB tar archive. One file. No compression.


BACKUP:
Windows  TLS       850 kB/s
Windows  NO_TLS   8500 kB/s
Linux    TLS      9274 kB/s
Linux    NO_TLS  10819 kB/s

RESTORE:
Windows  TLS      5645 kB/s
Windows  NO_TLS   8114 kB/s
Linux    TLS      5409 kB/s
Linux    NO_TLS   5901 kB/s

As you can see, a Windows backup with TLS enabled is 10 times slower than a
backup without TLS.
On Linux clients I didn't notice this problem. I checked it a few times,
also with another PC.
During backup the client CPU load was around 1%. So I don't think it's a
hardware problem.

I've also copied the tar archive from the Windows client to the backup
storage server (using CIFS) - 16 MB/s.

Am I missing something? Did you have similar problems?
What can cause this slowdown? Is it normal?
How can I improve performance?

Thanks in advance.
Kamil

+--
|This was sent by kamilfur...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] performance problem - Windows & TLS

2011-08-25 Thread mariusz
Hi Kamil,

2 days ago I had the same problem as you.
Open the client config file for Windows and put "Maximum Network Buffer
Size = 65536" in the FileDaemon resource :)
It will resolve the problem.
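
In my bacula-fd.conf it sits in the FileDaemon resource like this (the
Name is just an example):

FileDaemon {
  Name = client-fd
  Maximum Network Buffer Size = 65536
  ...
}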

Mariusz.

+--
|This was sent by mariusz@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
EMC VNX: the world's simplest storage, starting under $10K
The only unified storage solution that offers unified management 
Up to 160% more powerful than alternatives and 25% more efficient. 
Guaranteed. http://p.sf.net/sfu/emc-vnx-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-09-29 Thread reaper
Hello.

I saw this post recently 
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg47393.html 
and it seems I'm affected by this problem too. Bacula shows extremely low 
performance in networks with high rtt. I have sd in Germany and client in USA. 
Bacula can make backups on speeds between 5 to 10 Mbit/s which is too slow as 
backups are up to 600 GB. With simple iperf test I can achieve transfer rates 
about 100-120 Mbit/s. Why bacula cannot do the same? Compression does not 
affect transfer speed. Client has plenty of free CPU/memory.

I've made an ssh tunnel between hosts and set Address = 127.0.0.1 in the
Storage section. With this configuration I can see backups running at
speeds of about 100 Mbit/s.
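
For completeness, the tunnel is plain ssh port forwarding (the host name
is an example; 9103 is the default SD port):

# on the client, forward the local SD port to the real SD
ssh -f -N -L 9103:localhost:9103 user@sd.example.org

# then, in the Storage resource this client uses: Address = 127.0.0.1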

Bacula version is 5.0.2 from Debian Lenny backports.

+--
|This was sent by rea...@lmn.name via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-09-29 Thread reaper
sounds like your sysctl.conf needs tweaking? have you tried something like this 
on both sides? 

 kernel.msgmnb = 65536 
 kernel.msgmax = 65536 
 kernel.shmmax = 68719476736 
 kernel.shmall = 4294967296 

 # long fat pipes 
 net.core.wmem_max = 8388608 
 net.core.rmem_max = 8388608

Yes, I tried to tweak the kernel first with no effect. BTW wouldn't a badly
tuned network stack affect other programs such as iperf and ssh?

+--
|This was sent by rea...@lmn.name via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-09-29 Thread reaper
i have noticed that scp is not a good measure of throughput -- i do not know 
why. i use an openvpn tunnel between sites and loose about 20% of throughput 
due to the tunnel. check window size on distant machine (using wireshark) to 
verify that some upstream device is not changing it.

No, no, no. You're missing the point. iperf with no window tuning can scale
it to 3MB during the test while bacula can only do 128k. Values are from
Send-Q in ss -t on the client.

+--
|This was sent by rea...@lmn.name via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-09-30 Thread reaper
if i understand you correctly, bacula is only using a 128k -- way too small? 
curious -- have you played with "Maximum Network Buffer Size" ? does this help?

Yes, that's correct: bacula can only scale the window to 128k, which is why 
throughput gets limited to 10Mbit/s. With an ssh tunnel between client and 
sd it does not matter, because ssh can scale the window to 3MB and 
throughput is about 100-120Mbit/s. I've tried to set "Maximum Network 
Buffer Size" on client and sd to 8MB. No result unfortunately.
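
The arithmetic matches: TCP throughput tops out at roughly window size
divided by round-trip time. Assuming ~100 ms RTT on this path:

128 KB / 0.1 s ~ 1.3 MB/s ~ 10 Mbit/s   (bacula's 128k window)
  3 MB / 0.1 s ~  30 MB/s               (ssh's window; the link itself
                                         becomes the limit first)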

+--
|This was sent by rea...@lmn.name via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-10-01 Thread reaper
i'm going through a similar issue -- zurich, barca, copenhagen ...

mayak-cq, can you test bacula performance under your conditions? With and 
without ssh (or something similar) tunnel.

+--
|This was sent by rea...@lmn.name via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] performance over WAN links

2011-10-16 Thread James Harper
I'm revisiting a remote backup, and am troubled by the fact that Bacula
appears to be making use of only a fraction of the available
bandwidth.

iperf tells me there is around 750KBits/second of usable TCP bandwidth
in the fd->sd direction, but Bacula only reports a Bytes/sec rate of
30Kbytes/second which is quite removed from the ~70Kbytes/second I'd
expect.

A tcpdump of a 30 second snapshot of the traffic shows that it isn't
buffer overhead - there really were only around 30Kbytes/second of data
transmitted.

Any hints on how to speed this up a bit?

Thanks

James

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2d-oct
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance testing and tuning

2005-11-04 Thread Ove Risberg
Hi,

I have tried to get some decent performance out of my bacula configuration,
and after reading the bacula mailing lists, documentation and some source
code I increased it from 300KB/s to 3MB/s when backing up the root filesystem.

After reading the mail lists I found out that I was not the first bacula
user with performance problems and I do not think I will be the last.

One thing I was missing while doing this was more tools to test and tune
the performance of tape, database, network and reading files.

btape has great functions for testing whether the tape drive and autochanger
are working, but I would like it to test different parameters and suggest
changes to my configuration. I do not mind if it would take a long time to
do this, because it would take me a lot longer to do it myself.

Network performance would be much easier to test and tune if I could
tell a file daemon to send data to a storage daemon and report the
transfer rate without reading files, updating database or writing to
tape.

Reading files can be done in a similar way by telling a file daemon to
read files and report the transfer rate without sending the files to any
storage daemon.
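
Until such a tool exists, I approximate the file-reading test like this
(note that GNU tar short-circuits "tar cf /dev/null", hence the pipe):

time tar cf - /some/fileset | wc -c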

I do not know how to test and tune the database...
but someone must know ;-)

When/If these tools are available it would be possible to write a
performance tuning tool to help users to test and tune their bacula
configuration.

Is it possible to do this?



---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] performance&design&configuration challenges

2020-10-06 Thread Žiga Žvan

Hi,

I'm having some performance challenges. I would appreciate an educated 
guess from an experienced bacula user.


I'm replacing old backup software that writes to a tape drive with bacula 
writing to disk. The results are:
a) windows file server backup from a deduplicated drive (1,700,000 
files, 900 GB data, deduplicated space used 600 GB). Bacula: 12 hours, 
old software: 2.5 hours
b) linux file server backup (50,000 files, 166 GB data). Bacula: 3.5 
hours, old software: 1 hour.


I have tried to:
a) turn off compression&encryption. The result is the same: backup speed 
around 13 MB/sec.
b) change destination storage (from a new ibm storage attached over nfs, 
to a local SSD disk attached on bacula server virtual machine). It took 
2 hours 50 minutes to backup linux file server (instead of 3.5 hours). 
Sequential write test tested with linux dd command shows write speed 300 
MB/sec for IBM storage and 600 MB/sec for local SSD storage (far better 
than actual throughput).


The network bandwidth is 1 Gbit/s (1 Gbit/s on the client, 10 Gbit/s on the 
server) so I guess this is not a problem; however I have noticed that 
bacula-fd on the client side uses 100% of a CPU.


I'm using:
-bacula server version 9.6.5
-bacula client version 5.2.13 (original from centos 6 repo).

Any idea what is wrong and/or what performance I should expect?
I would also appreciate some answers to the questions below.

Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested bacula sw (9.6.5) and I must say I'm quite happy with 
the results (eg. compression, encryption, configurability). However I 
have some configuration/design questions I hope you can help me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using the dummy cloud driver that writes to local file storage. A 
volume is a directory with fileparts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on disk 
after the retention period expires. If possible, I would like to delete 
just the fileparts with expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and a 
central schedule definition for all my clients. I have noticed that my 
incremental job gets promoted to full after the monthly backup ("No prior 
Full backup Job record found"; because the monthly backup is a separate 
job, but bacula searches for full backups inside the same job). Could 
you please suggest a better configuration? If possible, I would like 
to keep the central schedule definition (if I manipulate pools in a 
schedule resource, I would need to define them per client).
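
To make question (a) concrete, the direction I have in mind is a single
backup job per client whose level and pool come from the central schedule
(a sketch based on my reading of the manual, not a tested configuration;
pool names are illustrative, and as noted above the pool overrides would
imply shared pools):

Schedule {
  Name = "CentralCycle"
  Run = Level=Incremental Pool=DailyPool   mon-fri at 23:05
  Run = Level=Full        Pool=WeeklyPool  2nd-5th sat at 23:05
  Run = Level=Full        Pool=MonthlyPool 1st sat at 23:05
}

Would something like this be the recommended approach?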


b) I would like to delete expired backups on disk (and in the catalog 
as well). At the moment I'm using one volume in a daily/weekly/monthly 
pool per client. In a volume, there are fileparts belonging to expired 
backups (eg. parts 1-23 in the output below). I have tried to solve 
this with purge/prune scripts in my BackupCatalog job (as suggested in 
the whitepapers) but the data does not get deleted. Is there any way 
to delete fileparts? Should I create separate volumes after the retention 
period? Please suggest a better configuration.


c) Do I need a restore job for each client? I would just like to 
restore backup on the same client, default to /restore folder... When 
I use bconsole restore all command, the wizard asks me all the 
questions (eg. 5- last backup for a client, which client,fileset...) 
but at the end it asks for a restore job which changes all previously 
defined things (eg. client).


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send them to bacula server, 
which writes them on one central storage system. Jobs are processed in 
sequential order (one at a time). Do you expect any significant 
performance gain if i implement autochanger in order to have jobs run 
simultaneously?


Relevant part of configuration attached below.

Looking forward to move in the production...
Kind regards,
Ziga Zvan


Volume example (fileparts 1-23 should be deleted):
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk   262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 35988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 35992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 36000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 35981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 35988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 35995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 35981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 35992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 35995 Jul 30 23:09 part.12
-rw-r--r--. 1 bacula disk 35993 Jul 30 23:10 part.13

[Bacula-users] Performance issues after upgrade

2010-03-16 Thread Wouter Verhelst
Hi,

On a bacula installation that I originally set up on a machine running
Debian "etch", everything was running smoothly.

Since etch has been EOL'ed, it was necessary to upgrade it, and we
recently did do an upgrade to lenny. This also involved an upgrade of
bacula from 1.38 (the version in etch) to 2.4.4 (the version in lenny).
The unfortunate result of that has been that a full backup job of a
server with slightly more than a tera of data, which used to take just
shy of 24 hours blew up to just over three days.

Obviously I'm not very happy with that. Investigating turned up that for
some reason data spooling was not enabled; enabling that caused a
massive gain in performance, with only one thing left.

After the backup job is finished (which now takes a bit more than 24
hours, something I consider acceptable), bacula starts spooling
attribute data. This, too, takes just over 24 hours, with no output
during the whole time this is happening.

Since the postgresql 'postmaster' is in 'D' state the whole time when
this happens, I suspect this has something to do with postgresql not
being properly configured, or similar, but I don't see what exactly is
going wrong. Is there a resource somewhere that I could look into to
help me figure out what's wrong with my postgresql setup? Or is there
something else I should look into?
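
For reference, the postgresql.conf knobs I have found so far that are said
to matter for insert-heavy loads are these (values are illustrative for a
machine of this size; I have not validated them):

shared_buffers = 512MB        # the packaged default is far smaller
work_mem = 32MB               # per-sort/hash memory
checkpoint_segments = 16      # fewer, larger checkpoints during bulk inserts
wal_buffers = 8MB

Whether this bacula build has batch attribute inserts enabled (mentioned
elsewhere on this list for 2.2.x and later) also seems worth checking.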

If this is a known issue that is fixed in later bacula versions, please
let me know; in that case an upgrade is not out of the question, though
I'd prefer not having to do that if it isn't necessary.

Thanks,

-- 
The biometric identification system at the gates of the CIA headquarters
works because there's a guard with a large gun making sure no one is
trying to fool the system.
  http://www.schneier.com/blog/archives/2009/01/biometrics.html

--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with latent networks

2011-04-10 Thread Peter Hoskin
Hi,

 

I'm using bacula to do backups of some remote servers, over the Internet
encapsulated in OpenVPN (just to make sure things are encrypted and kept off
public address space).

 

The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
another bacula-fd in Canberra Australia on 100mbit Ethernet. The
bacula-director is in Sydney Australia with ADSL2+ at full line sync. The
latency to Montreal is about 300ms while the latency to Canberra is about
30ms.

 

The problem I'm encountering: backups from the Montreal box will peak at a
transfer rate of 100KB/sec despite my ability to do 2.2MB/sec via http, ftp,
ssh, rsync, etc. from the same host.

 

The backups from Canberra come down at 1.2-2MB/sec.

 

A backup of 6GB of data will take about 12 hours from Montreal, but will be
less than an hour from Canberra.

 

So it appears the problem is network latency. Is there anything I can try to
improve backup speeds from Montreal?

 

Regards,

Peter Hoskin

 

--
Xperia(TM) PLAY
It's a major breakthrough. An authentic gaming
smartphone on the nation's most reliable network.
And it wants your games.
http://p.sf.net/sfu/verizon-sfdev___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance with many files

2011-07-06 Thread Adrian Reyer
Hi,

I am using bacula for a bit more than a month now and the database gets
slower and slower both for selecting stuff and for running backups as
such.
I am using a MySQL database, still myisam tables and I am considering
switching to InnoDB tables or postgresql.
Amongst normal fileserver data there is 450GB IMAP-Serverdata, single
small files to be backed up and after 1 month (2 full backups, weekly
differential and daily incremental) the tables look like this:
select count(*) FROM Filename;
3928838
select count(*) FROM File;
54211255
select count(*) FROM Path;
1016689
Diskspace:
# du -sk /var/lib/mysql/
8741404 /var/lib/mysql/

The backup is mostly to disk and currently uses 11TB of space, the
disk-volumes are valid vor 35 days and are copied to tape somewhere in
that period to remain available for 13 months.

The database server has 16GB of RAM and MySQL is configured to use ~8GB
of RAM. MySQL parameters:
key_buffer  = 8192M
max_allowed_packet  = 40M
join_buffer_size= 4M
thread_stack= 192K
thread_cache_size   = 8
max_connections = 200
table_cache = 1024
thread_concurrency  = 10
query_cache_limit   = 127M
query_cache_size= 127M
max_heap_table_size = 512M
tmp_table_size = 512M

The backups run with SpoolData=yes and SpoolAttribute=yes, the latter
specifically set for the backupserver itself as it serves as
rsync-target as well and has SpoolData=no.
bacula-director and -sd reside on a small server with 4GB RAM, the
database itself is on a seperate server.

It seems like performance will get worse and worse over time, and it is
only the 1st month of the 13 I'd like to keep. The problem seems not to
be disk io but MySQL running at 99% CPU for extended times, probably
while despooling attribute data.

What can I do to improve the performance?
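
If switching engines turns out to be the answer, I assume the mechanical
part is just this (the buffer pool size is a guess for this 16GB machine):

-- one-off conversion of the big catalog tables
ALTER TABLE File ENGINE=InnoDB;
ALTER TABLE Filename ENGINE=InnoDB;
ALTER TABLE Path ENGINE=InnoDB;

# and in my.cnf:
innodb_buffer_pool_size = 8G
innodb_flush_log_at_trx_commit = 2   # trade a little durability for insert speed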

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance problem on windows

2005-07-13 Thread Jonas Björklund
Hello,

On Wed, 13 Jul 2005, Carsten Schurig wrote:

 > The backup of the Linux servers runs with about 800 kBytes/s (DDS-3
 > tapes), but the Windows server just returns about 100 kB/s, which is
 > much too slow to backup 15 GB!

Have you tried spooling?

http://www.bacula.org/rel-manual/Data_Spooling.html


---
This SF.Net email is sponsored by the 'Do More With Dual!' webinar happening
July 14 at 8am PDT/11am EDT. We invite you to explore the latest in dual
core and dual graphics technology at this free one hour event hosted by HP, 
AMD, and NVIDIA.  To register visit http://www.hp.com/go/dualwebinar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance problem on windows

2005-07-13 Thread Dominic Marks
On Wednesday 13 July 2005 12:14, Jonas Björklund wrote:
> Hello,
>
> On Wed, 13 Jul 2005, Carsten Schurig wrote:
>  > The backup of the Linux servers runs with about 800 kBytes/s
>  > (DDS-3 tapes), but the Windows server just returns about 100 kB/s,
>  > which is much too slow to backup 15 GB!
>
> Have you tried spooling?
>
> http://www.bacula.org/rel-manual/Data_Spooling.html
>

The problem is that the client system is not sending data fast
enough, so I don't see how spooling will help. I also have this
problem: some Windows machines can manage ~1MB/s. One laptop
in particular running WindowsXP can do no better than 50KB/s when 
backing up, which is obviously no good to anyone.

How can we debug the performance problems of the Windows FD?

> ---
> This SF.Net email is sponsored by the 'Do More With Dual!' webinar
> happening July 14 at 8am PDT/11am EDT. We invite you to explore the
> latest in dual core and dual graphics technology at this free one
> hour event hosted by HP, AMD, and NVIDIA.  To register visit
> http://www.hp.com/go/dualwebinar
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

Cheers,
-- 
Dominic Marks


---
This SF.Net email is sponsored by the 'Do More With Dual!' webinar happening
July 14 at 8am PDT/11am EDT. We invite you to explore the latest in dual
core and dual graphics technology at this free one hour event hosted by HP,
AMD, and NVIDIA.  To register visit http://www.hp.com/go/dualwebinar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Performance on a Sun Solaris

2006-11-06 Thread Berner Martin
Hi
Does anybody run Bacula (1.38.9) on Sun Solaris (8, 9, 10)?
I'm interested in how you back up a consistent state of the system.

For example, I use fssnap for /, /var and /export/home, which are on a ufs filesystem.
But I think it's slower backing up from the snapshot than from the real files.
But I think it's slower backing up from the snapshot then from the real File.

I think the Backup is very slow in general.
bacula-dir and bacula-sd are running on a dedicated Sun on witch also the 
Cattalo-Database (PostgresSQL) are running.

backing up a Solaris 8 running Oracle-DB takes my 6 and a half hours for 261GB 
that's about 11.4MB/s
Details:
03-Nov 13:32 merkur-dir: Bacula 1.38.9 (02May06): 03-Nov-2006 13:32:37
  JobId:  3930
  Job:saturn.2006-11-02_21.00.02
  Backup Level:   Full
  Client: "saturn-fd" sparc-sun-solaris2.8,solaris,5.8
  FileSet:"SaturnFull" 2006-01-13 20:05:32
  Pool:   "Backup"
  Storage:"Tandberg"
  Scheduled time: 02-Nov-2006 21:00:01
  Start time: 03-Nov-2006 07:01:12
  End time:   03-Nov-2006 13:32:37
  Elapsed time:   6 hours 31 mins 25 secs
  Priority:   21
  FD Files Written:   446,213
  SD Files Written:   446,213
  FD Bytes Written:   261,207,880,851 (261.2 GB)
  SD Bytes Written:   261,283,897,816 (261.2 GB)
  Rate:   11122.3 KB/s
  Software Compression:   None
  Volume name(s): BACKUP-2005-12-05_45|BACKUP-2006-11-03_3930
  Volume Session Id:  1846
  Volume Session Time:1148901284
  Last Volume Bytes:  159,174,938,006 (159.1 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

On the other hand, backing up an application server running Solaris 10 takes me 1 
hour 19 minutes for about 19GB. That's about 4MB/s.
Details:
03-Nov 16:45 merkur-dir: Bacula 1.38.9 (02May06): 03-Nov-2006 16:45:58
  JobId:  3939
  Job:neptun.2006-11-03_15.25.10
  Backup Level:   Full
  Client: "neptun-fd" sparc-sun-solaris2.10,solaris,5.10
  FileSet:"NeptunFull" 2005-12-07 21:04:30
  Pool:   "Backup"
  Storage:"Tandberg"
  Scheduled time: 03-Nov-2006 15:25:01
  Start time: 03-Nov-2006 15:26:22
  End time:   03-Nov-2006 16:45:58
  Elapsed time:   1 hour 19 mins 36 secs
  Priority:   40
  FD Files Written:   441,549
  SD Files Written:   441,549
  FD Bytes Written:   18,816,988,341 (18.81 GB)
  SD Bytes Written:   18,896,666,176 (18.89 GB)
  Rate:   3939.9 KB/s
  Software Compression:   None
  Volume name(s): BACKUP-2006-11-03_3930
  Volume Session Id:  1848
  Volume Session Time:1148901284
  Last Volume Bytes:  229,313,864,627 (229.3 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

So where is the difference? What's going wrong that it takes so long?

I'm grateful for any hint.

Thanks for your reply

Berner Martin


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-09-29 Thread mayak-cq
On Thu, 2011-09-29 at 04:20 -0700, reaper wrote:

> Hello.
> 
> I saw this post recently 
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg47393.html 
> and it seems I'm affected by this problem too. Bacula shows extremely low 
> performance in networks with high rtt. I have sd in Germany and client in 
> USA. Bacula can make backups on speeds between 5 to 10 Mbit/s which is too 
> slow as backups are up to 600 GB. With simple iperf test I can achieve 
> transfer rates about 100-120 Mbit/s. Why bacula cannot do the same? 
> Compression does not affect transfer speed. Client has plenty of free 
> CPU/memory.
> 
> I've made ssh tunnel between hosts and set Address = 127.0.0.1 in Storage 
> section. With this configuration I can see backups are running at speeds 
> about 100 Mbit/s.
> 
> Bacula version is 5.0.2 from Debian Lenny backports.


hi reaper,

sounds like your sysctl.conf needs tweaking? have you tried something
like this on both sides?

kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296

# long fat pipes
net.core.wmem_max = 8388608
net.core.rmem_max = 8388608


cheers

m
--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-09-29 Thread mayak-cq
On Thu, 2011-09-29 at 21:12 -0700, reaper wrote:

> sounds like your sysctl.conf needs tweaking? have you tried something like 
> this on both sides? 
> 
>  kernel.msgmnb = 65536 
>  kernel.msgmax = 65536 
>  kernel.shmmax = 68719476736 
>  kernel.shmall = 4294967296 
> 
>  # long fat pipes 
>  net.core.wmem_max = 8388608 
>  net.core.rmem_max = 8388608
> 
> Yes, I tried to tweak kernel first with no effect. BTW wouldn't badly tuned 
> network stack affect other programs such as iperf and ssh?

hi repear,

i'm going through a similar issue -- zurich, barca, copenhagen ...

iperf's results will differ dramatically depending on the tcp window
size -- here's what i'm testing with:

server side (distant): /usr/bin/iperf -s -w 4096KB
client side (near): iperf -w4096KB -c distant.Server.IP.addr

(remember that iperf doubles the -w argument)

i have noticed that scp is not a good measure of throughput -- i do not
know why. i use an openvpn tunnel between sites and lose about 20% of
throughput due to the tunnel. check the window size on the distant machine
(using wireshark) to verify that some upstream device is not changing
it.

here's a good site that discusses window size -- you can also google "long
fat networks" for a detailed discussion:
http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
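as a back-of-the-envelope example (illustrative numbers): the window you need
is roughly bandwidth x RTT, so a 100 mbit/s path at 23 ms RTT wants about
12.5 MB/s * 0.023 s = ~287 KB in flight -- far more than an unscaled 64 KB
window can carry.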

cheers

m
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-09-30 Thread mayak-cq
On Thu, 2011-09-29 at 23:43 -0700, reaper wrote:

> i have noticed that scp is not a good measure of throughput -- i do not know 
> why. i use an openvpn tunnel between sites and lose about 20% of throughput 
> due to the tunnel. check the window size on the distant machine (using wireshark) to 
> verify that some upstream device is not changing it.
> 
> No, no, no. You're missing the point. iperf with no window tuning can scale it 
> to 3MB during the test while bacula can only do 128k. Values are from Send-Q in 
> ss -t on the client.

hi reaper,

if i understand you correctly, bacula is only using a 128k window -- way too
small? curious -- have you played with "Maximum Network Buffer Size"?
does this help?
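for the archives, the directive goes in the FileDaemon resource on the client
and the Storage resource in the sd config -- a sketch, resource names and the
65536 value are just examples:

    # bacula-fd.conf
    FileDaemon {
      Name = client-fd
      Maximum Network Buffer Size = 65536
    }

    # bacula-sd.conf
    Storage {
      Name = backup-sd
      Maximum Network Buffer Size = 65536
    }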

cheers

m

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-09-30 Thread mayak-cq
On Fri, 2011-09-30 at 02:18 -0700, reaper wrote:

> if i understand you correctly, bacula is only using a 128k window -- way too
> small? curious -- have you played with "Maximum Network Buffer Size"?
> does this help?
> 
> Yes, that's correct, bacula can only scale the window to 128k; that's why
> throughput gets limited to 10Mbit/s. With an ssh tunnel between the client
> and sd it does not matter because ssh can scale the window to 3MB and
> throughput is about 100-120Mbit/s. I've tried to set "Maximum Network
> Buffer Size" on the client and sd to 8MB. No result unfortunately.


hey reaper,

i think that you should file a bug report -- that bacula doesn't use the
native window size (or at least have an option to do so) is a tad
strange. in the meantime, using a tunnel that uses a larger window size
is the clear solution.
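a sketch of that workaround, for the archives (hostnames are illustrative;
9103 is the stock sd port):

    # on the client, carry the fd->sd connection inside ssh:
    ssh -f -N -L 9103:localhost:9103 user@sd-host.example.com

    # bacula-dir.conf -- point the Storage resource at the tunnel, so the
    # fd connects to 127.0.0.1 and ssh provides the large window:
    Storage {
      Name = remote-sd
      Address = 127.0.0.1
      SD Port = 9103
    }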

cheers

m
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-10-01 Thread mayak-cq
On Sat, 2011-10-01 at 07:57 -0700, reaper wrote:

> i'm going through a similar issue -- zurich, barca, copenhagen ...
> 
> mayak-cq, can you test bacula performance under your conditions? With and 
> without an ssh (or something similar) tunnel.


hi reaper,

sure -- i can do that tomorrow.

are you using iptraf to measure your throughput? another technique?

thanks

m
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-10-02 Thread Radosław Korzeniewski
Hello,

2011/9/30 reaper 

> sounds like your sysctl.conf needs tweaking? have you tried something like
> this on both sides?
>
>  kernel.msgmnb = 65536
>  kernel.msgmax = 65536
>  kernel.shmmax = 68719476736
>  kernel.shmall = 4294967296
>
>
These are the IPC (inter-process communication) kernel parameters; they have
nothing to do with network tuning, and AFAIK Bacula doesn't use IPC. I'm not
surprised that there was no effect. :)

-- 
Radosław Korzeniewski
rados...@korzeniewski.net
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance with latent networks

2011-10-03 Thread mayak-cq
On Sun, 2011-10-02 at 11:45 +0200, Radosław Korzeniewski wrote:

> Hello,
> 
> 2011/9/30 reaper 
> 
> sounds like your sysctl.conf needs tweaking? have you tried
> something like this on both sides?
> 
>  kernel.msgmnb = 65536
>  kernel.msgmax = 65536
>  kernel.shmmax = 68719476736
>  kernel.shmall = 4294967296
> 
> 
> 

These are the IPC (inter-process communication) kernel parameters; they have
nothing to do with network tuning, and AFAIK Bacula doesn't use IPC. I'm
not surprised that there was no effect. :)

hi radoslaw, hi reaper,

yes -- i have just sniffed some traffic and done a backup over a
100mbit network -- here are the results

zurich running bacula-dir and bacula-sd (backup server)
copenhagen running bacula-fd (backup client)

here's the 3-way handshake (bolding is mine)

No.  Time      Source   Destination   Protocol  Info
  1  0.00      zurich   copenhagen    TCP       33252 > bacula-fd [SYN] Seq=0 Win=5840 Len=0 MSS=1460 SACK_PERM=1 TSV=2129448587 TSER=0 WS=6

Frame 1: 76 bytes on wire (608 bits), 76 bytes captured (608 bits)
Linux cooked capture
Internet Protocol, Src: zurich (zurich), Dst: copenhagen (copenhagen)
Transmission Control Protocol, Src Port: 33252 (33252), Dst Port:
bacula-fd (9102), Seq: 0, Len: 0
Source port: 33252 (33252)
Destination port: bacula-fd (9102)
[Stream index: 0]
Sequence number: 0 (relative sequence number)
Header length: 40 bytes
Flags: 0x02 (SYN)
Window size: 5840
Checksum: 0xa51b [validation disabled]
Options: (20 bytes)
Maximum segment size: 1460 bytes
TCP SACK Permitted Option: True
Timestamps: TSval 2129448587, TSecr 0
NOP
Window scale: 6 (multiply by 64)

No.  Time      Source       Destination  Protocol  Info
  2  0.36      copenhagen   zurich       TCP       bacula-fd > 33252 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 SACK_PERM=1 TSV=1494684643 TSER=2129448587 WS=8

Frame 2: 76 bytes on wire (608 bits), 76 bytes captured (608 bits)
Linux cooked capture
Internet Protocol, Src: copenhagen (copenhagen), Dst: zurich (zurich)
Transmission Control Protocol, Src Port: bacula-fd (9102), Dst Port:
33252 (33252), Seq: 0, Ack: 1, Len: 0
Source port: bacula-fd (9102)
Destination port: 33252 (33252)
[Stream index: 0]
Sequence number: 0 (relative sequence number)
Acknowledgement number: 1 (relative ack number)
Header length: 40 bytes
Flags: 0x12 (SYN, ACK)
Window size: 5792
Checksum: 0xa123 [validation disabled]
Options: (20 bytes)
Maximum segment size: 1460 bytes
TCP SACK Permitted Option: True
Timestamps: TSval 1494684643, TSecr 2129448587
NOP
Window scale: 8 (multiply by 256)
[SEQ/ACK analysis]

No.  Time      Source   Destination  Protocol  Info
  3  0.023054  zurich   copenhagen   TCP       33252 > bacula-fd [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=2129448610 TSER=1494684643

Frame 3: 68 bytes on wire (544 bits), 68 bytes captured (544 bits)
Linux cooked capture
Internet Protocol, Src: zurich (zurich), Dst: copenhagen (copenhagen)
Transmission Control Protocol, Src Port: 33252 (33252), Dst Port:
bacula-fd (9102), Seq: 1, Ack: 1, Len: 0
Source port: 33252 (33252)
Destination port: bacula-fd (9102)
[Stream index: 0]
Sequence number: 1 (relative sequence number)
Acknowledgement number: 1 (relative ack number)
Header length: 32 bytes
Flags: 0x10 (ACK)
Window size: 5888 (scaled)
Checksum: 0xe61d [validation disabled]
Options: (12 bytes)
NOP
NOP
Timestamps: TSval 2129448610, TSecr 1494684643
[SEQ/ACK analysis]



zurich and copenhagen are 22.589 ms apart on a shared 100mbit connection
-- using the bandwidth delay product:

theoretical
  bandwidth     delay        productBits   bitsPerByte   bytesInWindow
  500 000 000 * .022589  =   11 294 500  / 8           =  1 411 813

actual
  windowSizeBytes   windowScale   bytesInWindow   direction              delay       throughputBytesPerSecond   bitsPerSecond
  5840            * 64          = 373 760         zurich -> copenhagen   / .022589 = 16 546 107                * 8 = 132 368 856
  5888            * 256         = 1 507 328       copenhagen -> zurich   / .022589 = 66 728 408                * 8 = 533 827 264



i backed up about 16GB of data uncompressed from copenhagen to zurich at
around 60mbits/s.



i am lost as far as bacula's window calculation -- it thinks that i have
a 500mbit connection?


cheers

m





Re: [Bacula-users] Performance with latent networks

2011-10-03 Thread mayak-cq
On Mon, 2011-10-03 at 10:34 +0200, mayak-cq wrote:




> 
> zurich and copenhagen are 22.589 ms apart on a shared 100mbit
> connection -- using the bandwidth delay product:
> 
> theoretical
>   bandwidth     delay        productBits   bitsPerByte   bytesInWindow
>   500 000 000 * .022589  =   11 294 500  / 8           =  1 411 813

--^^^
this should be 100 000 000 -- therefore

theoretical
  bandwidth     delay        productBits   bitsPerByte   bytesInWindow
  100 000 000 * .022589  =   2 258 900   / 8           =  282 363

sorry for the confusion ...

m

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance over WAN links

2011-10-16 Thread James Harper
Disregard. It's flying along now at the expected speeds. I blame sun
spots.

James

> -Original Message-
> From: James Harper [mailto:james.har...@bendigoit.com.au]
> Sent: Monday, 17 October 2011 4:14 PM
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] performance over WAN links
> 
> I'm revisiting a remote backup, and am troubled by the fact that Bacula
> appears to be making use of only a fraction of the available
> bandwidth.
> 
> iperf tells me there is around 750KBits/second of usable TCP bandwidth
> in the fd->sd direction, but Bacula only reports a Bytes/sec rate of
> 30Kbytes/second which is quite removed from the ~70Kbytes/second I'd
> expect.
> 
> A tcpdump of a 30 second snapshot of the traffic shows that it isn't
> buffer overhead - there really were only around 30Kbytes/second of data
> transmitted.
> 
> Any hints on how to speed this up a bit?
> 
> Thanks
> 
> James
> 
>

> --
> All the data continuously generated in your IT infrastructure contains a
> definitive record of customers, application performance, security
> threats, fraudulent activity and more. Splunk takes this data and
makes
> sense of it. Business sense. IT sense. Common sense.
> http://p.sf.net/sfu/splunk-d2d-oct
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2d-oct
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-04 Thread Kern Sibbald
Hello,

On Friday 04 November 2005 12:31, Ove Risberg wrote:
> Hi,
>
> I have tried to get some decent performance in my bacula configuration
> and after reading bacula mail lists, documentation and some source code
> I increased it from 300KB/s to 3MB/s when backing up the root filesystem.
>
> After reading the mail lists I found out that I was not the first bacula
> user with performance problems and I do not think I will be the last.
>
> One thing I was missing while doing this was more tools to test and tune
> the performance of tape, database, network and reading files.
>
> btape has great functions for testing if the tape drive and autochanger
> is working but I would like it to test different parameters and suggest
> changes to my configuration. I do not mind if it would take a long time to
> do this because it would take me a lot longer to do it myself.
>
> Network performance would be much easier to test and tune if I could
> tell a file daemon to send data to a storage daemon and report the
> transfer rate without reading files, updating database or writing to
> tape.
>
> Reading files can be done in a similar way by telling a file daemon to
> read files and report the transfer rate without sending the files to any
> storage daemon.
>
> I do not know how to test and tune the database...
> but someone must know ;-)
>
> When/If these tools are available it would be possible to write a
> performance tuning tool to help users to test and tune their bacula
> configuration.
>
> Is it possible to do this?

Yes, of course, this is possible.  However, there is a question of priorities 
and manpower. For the moment, manpower is rather limited.

You might start by describing what you did, how/why you decided to change what 
you did, and what the results of each change were.  Already, this would be a 
big help to others.

Bacula does have some of the capabilities that you described. Some are 
controlled by directives, and others are controlled by recompiling with 
#defines changed in src/version.h.  However, the defines are a bit out of 
date and may not work.  To make it all work and to put it together in a 
coherent way that users can try it, is a rather big project.

-- 
Best regards,

Kern

  (">
  /\
  V_V


---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-04 Thread Ryan Novosielski
What did you ultimately change on your site that might be of interest to 
others on the list? I'd be interested to know. I'm not going to get 3 
MB/s, since I have a drive that writes slower than that, but any ways to 
boost performance would be interesting.


 _  _ _  _ ___  _  _  _
|Y#| |  | |\/| |  \ |\ |  | | Ryan Novosielski - User Support Spec. III
|$&| |__| |  | |__/ | \| _| | [EMAIL PROTECTED] - 973/972.0922 (2-0922)
\__/ Univ. of Med. and Dent.| IST/AST - NJMS Medical Science Bldg - C630

On Fri, 4 Nov 2005, Ove Risberg wrote:


Hi,

I have tried to get some decent performance in my bacula configuration
and after reading bacula mail lists, documentation and some source code
I increased it from 300KB/s to 3MB/s when backing up the root filesystem.

After reading the mail lists I found out that I was not the first bacula
user with performance problems and I do not think I will be the last.

One thing I was missing while doing this was more tools to test and tune
the performance of tape, database, network and reading files.

btape has great functions for testing if the tape drive and autochanger
is working, but I would like it to test different parameters and suggest
changes to my configuration. I do not mind if it would take a long time to
do this, because it would take me a lot longer to do it myself.

Network performance would be much easier to test and tune if I could
tell a file daemon to send data to a storage daemon and report the
transfer rate without reading files, updating database or writing to
tape.

Reading files can be done in a similar way by telling a file daemon to
read files and report the transfer rate without sending the files to any
storage daemon.

I do not know how to test and tune the database...
but someone must know ;-)

When/If these tools are available it would be possible to write a
performance tuning tool to help users to test and tune their bacula
configuration.

Is it possible to do this?



---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-04 Thread Ove Risberg
I changed the MaximumNetworkBufferSize to 65536 in the file and storage
daemon configuration.

Please reply with your results.

/Ove

On Fri, 2005-11-04 at 16:29, Ryan Novosielski wrote:
> What did you ultimately change on your site that might be of interest to 
> others on the list? I'd be interested to know. I'm not going to get 3 
> MB/s, since I have a drive that writes slower than that, but any ways to 
> boost performance would be interesting.
> 
>  _  _ _  _ ___  _  _  _
> |Y#| |  | |\/| |  \ |\ |  | | Ryan Novosielski - User Support Spec. III
> |$&| |__| |  | |__/ | \| _| | [EMAIL PROTECTED] - 973/972.0922 (2-0922)
> \__/ Univ. of Med. and Dent.| IST/AST - NJMS Medical Science Bldg - C630
> 
> On Fri, 4 Nov 2005, Ove Risberg wrote:
> 
> > Hi,
> >
> > I have tried to get some decent performance in my bacula configuration
> > and after reading bacula mail lists, documentation and some source code
> > I increased it from 300KB/s to 3MB/s when backing up the root filesystem.
> >
> > After reading the mail lists I found out that I was not the first bacula
> > user with performance problems and I do not think I will be the last.
> >
> > One thing I was missing while doing this was more tools to test and tune
> > the performance of tape, database, network and reading files.
> >
> > btape has great functions for testing if the tape drive and autochanger
> > is working but I would like it to test different parameters and suggest
> > changes to my configuration. I do not mind if it would take a long time to 
> > do this because it would take me a lot longer to do it myself.
> >
> > Network performance would be much easier to test and tune if I could
> > tell a file daemon to send data to a storage daemon and report the
> > transfer rate without reading files, updating database or writing to
> > tape.
> >
> > Reading files can be done in a similar way by telling a file daemon to
> > read files and report the transfer rate without sending the files to any
> > storage daemon.
> >
> > I do not know how to test and tune the database...
> > but someone must know ;-)
> >
> > When/If these tools are available it would be possible to write a
> > performance tuning tool to help users to test and tune their bacula
> > configuration.
> >
> > Is it possible to do this?
> >
> >
> >
> > ---
> > SF.Net email is sponsored by:
> > Tame your development challenges with Apache's Geronimo App Server. Download
> > it for free - -and be entered to win a 42" plasma tv or your very own
> > Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >
> 
> 
> ---
> SF.Net email is sponsored by:
> Tame your development challenges with Apache's Geronimo App Server. Download
> it for free - -and be entered to win a 42" plasma tv or your very own
> Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-04 Thread Ove Risberg
Hi, 

On Fri, 2005-11-04 at 14:14, Kern Sibbald wrote:
> Hello,
> 
> On Friday 04 November 2005 12:31, Ove Risberg wrote:
> > Hi,
> >
> > I have tried to get some decent performance in my bacula configuration
> > and after reading bacula mail lists, documentation and some source code
> > I increased it from 300KB/s to 3MB/s when backing up the root filesystem.
> >
> > After reading the mail lists I found out that I was not the first bacula
> > user with performance problems and I do not think I will be the last.
> >
> > One thing I was missing while doing this was more tools to test and tune
> > the performance of tape, database, network and reading files.
> >
> > btape has great functions for testing if the tape drive and autochanger
> > is working but I would like it to test different parameters and suggest
> > changes to my configuration. I do not mind if it would take a long time to
> > do this because it would take me a lot longer to do it myself.
> >
> > Network performance would be much easier to test and tune if I could
> > tell a file daemon to send data to a storage daemon and report the
> > transfer rate without reading files, updating database or writing to
> > tape.
> >
> > Reading files can be done in a similar way by telling a file daemon to
> > read files and report the transfer rate without sending the files to any
> > storage daemon.
> >
> > I do not know how to test and tune the database...
> > but someone must know ;-)
> >
> > When/If these tools are available it would be possible to write a
> > performance tuning tool to help users to test and tune their bacula
> > configuration.
> >
> > Is it possible to do this?
> 
> Yes, of course, this is possible.  However, there is a question of priorities 
> and manpower. For the moment, manpower is rather limited.
> 
> You might start by describing what you did, how/why you decided to change 
> what 
> you did, and what the results of each change were.  Already, this would be a 
> big help to others.

I will try to describe what I did...

I first found some network problems: the bacula server was using 100MBit
half-duplex and the switch was using 100MBit full-duplex, and this is not
good.

After fixing this the performance was not better... so I tried to scp a
large file in both directions at the same time to see if there was
something wrong with the network, but I got many MB/s in both directions
at the same time, so the network is not the problem.

Someone told me that LTO drives want large block sizes to perform well,
so I increased MaximumBlockSize to 512K.
btape complained about the block size so I decreased it to 256K.
Why is the default value 63K and not 64K?
The performance was not better after fixing this.
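For reference, the block size is set in the storage daemon's Device resource;
a sketch (device name and path are examples), with btape re-run afterwards as
a sanity check:

    # bacula-sd.conf
    Device {
      Name = LTO-Drive
      Archive Device = /dev/nst0
      Maximum Block Size = 262144   # 256K -- btape rejected 512K here
    }

    # re-test the drive against the new block size:
    btape -c /etc/bacula/bacula-sd.conf /dev/nst0
    # then run "test" at the btape prompt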

Then I found several mails in the bacula-users mail list about
increasing MaximumNetworkBufferSize in the file and storage daemon
configuration.
I first increased it to 65536 and now I got about 3MB/s ;-)
Then I increased it to 512K, the same value our Netbackup server uses,
but the performance was not better and I was not able to restore all
files. The storage daemon died and when trying to read the backup files
(I made these backup tests to files) with bls I could only list the first
files in the backup and then bls aborted with some error message.

So I changed MaximumNetworkBufferSize to 65535 again.

> 
> Bacula does have some of the capabilities that you described. Some are 
> controlled by directives, and others are controlled by recompiling with 
> #defines changed in src/version.h.  However, the defines are a bit out of 
> date and may not work.  To make it all work and to put it together in a 
> coherent way that users can try it, is a rather big project.

Do you have any notes, documentation or test scripts using the directives
or defines in version.h?

I will try to do some testing if I get some hints on how to use these
features and if I get some useful results I will share it with you all
on the list.



---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


FW: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Ribi Roland

Hi,

We have Solaris 10 servers, 3x Sun V240; one of them is the backup server
with the storage daemon and director, and the others are backed up to the tape at
the backup server.

The backup server has a 1.3GHz CPU and 1GB RAM, a U320 SCSI controller (on
board) and a second U320 controller with the Tandberg SDLT320 tape drive.
Network speed is 1Gbit. We also tested some transfers with "Pipe Magic
Tools" and get more than 30MB/s on average. The disks I tested with
bonnie++, and the write/read is more than 30MB/s.

We compiled bacula to work with PostgreSQL.

All your tips didn't change anything; I have only a throughput of around
300-350KB/s. I also added the spooling parameter.

With spooling it takes around 10-20min to spool the data and write it down to
tape at 2MB/s. But saving the attributes (writing to the catalog) needs
2h or more!

I tried many combinations and recompiled bacula also with Postgres 8.1,
Postgres 8.0, and with CFLAGS=-O2/CXXFLAGS=-O2.

I compiled it with gcc 3.4.4, which I compiled myself, because on Solaris 10
there isn't any binary distribution.

I think that there are 2 problems: one is the performance of Bacula or
Postgres itself, and the other is some problem with the tape. I didn't get more
than 2.6MB/s with ufsdump from Solaris.

Can anybody help?

> -Original Message-
> From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> Sent: Friday, November 04, 2005 1:59 PM
> To: Ribi Roland
> Subject: RE: [Bacula-users] Performance testing and tuning
> 
> 
> Hi,
> 
> I changed the MaximumNetworkBufferSize to 65536 in the file 
> and storage
> daemon configuration and increased the MaximumBlockSize to 
> 262144 in the
> storage daemon configuration.
> 
> If you change the MaximumBlockSize you must run the btape 
> tests again so
> you are sure the value is valid for your tape drive.
> 
> I think the MaximumNetworkBufferSize was the parameter that 
> gave me the
> performance increase.
> 
> What do you think of adding more performance testing tools to bacula?
> 
> What system do you have (hardware, os, tape, disk)?
> 
> Please reply with your results.
> 
> /Ove
> 
> On Fri, 2005-11-04 at 13:02, Ribi Roland wrote:
> > How did you increase the performance?
> > 
> > I have the same slow system (around 300KB/s); what did you change in the
> > configuration that helped to increase the performance?
> > 
> > It would help me...
> > 
> > Thank You
> > 
> > > -----Original Message-
> > > From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> > > Sent: Friday, November 04, 2005 12:31 PM
> > > To: bacula-users
> > > Subject: [Bacula-users] Performance testing and tuning
> > > 
> > > 
> > > Hi,
> > > 
> > > I have tried to get some decent performance in my bacula 
> configuration
> > > and after reading bacula mail lists, documentation and some 
> > > source code
> > > I increased it from 300KB/s to 3MB/s when backing up the root 
> > > filesystem.
> > > 
> > > After reading the mail lists I found out that I was not the 
> > > first bacula
> > > user with performance problems and I do not think I will 
> be the last.
> > > 
> > > One thing I was missing while doing this was more tools to 
> > > test and tune
> > > the performance of tape, database, network and reading files.
> > > 
> > > btape has great functions for testing if the tape drive and 
> > > autochanger
> > > is working but I would like it to test different parameters 
> > > and suggest
> > > changes to my configuration. I do not mind if it would take a 
> > > long time to
> > > do this because it would take me a lot longer to do it myself.
> > > 
> > > Network performance would be much easier to test and tune 
> if I could
> > > tell a file daemon to send data to a storage daemon and report the
> > > transfer rate without reading files, updating database or 
> writing to
> > > tape.
> > > 
> > > Reading files can be done in a similar way by telling a 
> file daemon to
> > > read files and report the transfer rate without sending the 
> > > files to any
> > > storage daemon.
> > > 
> > > I do not know how to test and tune the database...
> > > but someone must know ;-)
> > > 
> > > When/If these tools are available it would be possible to write a
> > > performance tuning tool to help users to test and tune 
> their bacula
> > > configuration.
> > > 
> > > Is it possible to do this?
> > > 
> > >

RE: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Ribi Roland
I found the solution...

rtfm! 

for solaris --enable-smartalloc is important... :/


> -Original Message-
> From: Ribi Roland [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 08, 2005 9:33 AM
> To: 'bacula-users@lists.sourceforge.net'
> Subject: FW: [Bacula-users] Performance testing and tuning
> 
> 
> 
> Hi,
> 
> We have Solaris 10 Servers, 3x Sun V240, one of them is the 
> backup-server
> with the storage deamon and director the others are backed up 
> to the tape at
> the backup-server.
> 
> The backup-server has a 1.3GHz CPU and 1Gb RAM, U320 
> SCSI-Kontroller (on
> board) and a second U320 controller with the Tandberg SDLT320 
> tapedrive.
> Networkspeed is at 1Gbit. We alos testet some transfers with 
> "Pipe Magic
> Tools" and get more then 30Mb/s in the average. The disks I 
> testet with
> bonnie++ and the write/reed is more then 30Mb/s.
> 
> We compiled bacula to work with postgre-sql.
> 
> All your tips did'nt change anything, I have only a throuput 
> of arround
> 300-350Kb/s. I also added the spooling-parameter.
> 
> With spooling it takes arround 10-20min to spool data and 
> write them down to
> tape at 2MB/s. But the save of the attributes (writing to the 
> catalog) needs
> 2h or more!
> 
> I tried many combinations and recompiled bacula also with 
> Postgres 8.1,
> Postgres 8.0, with CFLAGS=-O2/CXXFLAGS=-O2
> 
> I compiled it with gcc 3.4.4, which I compiled myself, 
> because on solaris 10
> there is'nt any binary distribution.
> 
> I think that there are 2 Problems, one is the performance of Bacula or
> Postgres itself  and also some problems with the tape. I 
> did'nt get more
> then 2.6Mb/s with the ufsdump from solaris.
> 
> Anybody can help?
> 
> > -Original Message-
> > From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> > Sent: Friday, November 04, 2005 1:59 PM
> > To: Ribi Roland
> > Subject: RE: [Bacula-users] Performance testing and tuning
> > 
> > 
> > Hi,
> > 
> > I changed the MaximumNetworkBufferSize to 65536 in the file 
> > and storage
> > daemon configuration and increased the MaximumBlockSize to 
> > 262144 in the
> > storage daemon configuration.
> > 
> > If you change the MaximumBlockSize you must run the btape 
> > tests again so
> > you are sure the value is valid for your tape drive.
> > 
> > I think the MaximumNetworkBufferSize was the parameter that 
> > gave me the
> > performance increase.
> > 
> > What do you think of adding more performance testing tools 
> to bacula?
> > 
> > What system do you have (hardware, os, tape, disk)?
> > 
> > Please reply with your results.
> > 
> > /Ove
> > 
> > On Fri, 2005-11-04 at 13:02, Ribi Roland wrote:
> > > How did you increase the performance?
> > > 
> > > I have the same slow system (around 300KB/s); what did you change in the
> > > configuration that helped to increase the performance?
> > > 
> > > It would help me...
> > > 
> > > Thank You
> > > 
> > > > -Original Message-
> > > > From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> > > > Sent: Friday, November 04, 2005 12:31 PM
> > > > To: bacula-users
> > > > Subject: [Bacula-users] Performance testing and tuning
> > > > 
> > > > 
> > > > Hi,
> > > > 
> > > > I have tried to get some decent performance in my bacula 
> > configuration
> > > > and after reading bacula mail lists, documentation and some 
> > > > source code
> > > > I increased it from 300KB/s to 3MB/s when backing up the root 
> > > > filesystem.
> > > > 
> > > > After reading the mail lists I found out that I was not the 
> > > > first bacula
> > > > user with performance problems and I do not think I will 
> > be the last.
> > > > 
> > > > One thing I was missing while doing this was more tools to 
> > > > test and tune
> > > > the performance of tape, database, network and reading files.
> > > > 
> > > > btape has great functions for testing if the tape drive and 
> > > > autochanger
> > > > is working but I would like it to test different parameters 
> > > > and suggest
> > > > changes to my configuration. I do not mind if it would take a 
> > > > long time to
> > > > do this because it wo

RE: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Risberg Ove
Hi,

What was your transfer rate before and after compiling bacula 
with --enable-smartalloc?

/Ove

On Tue, 2005-11-08 at 11:10 +0100, Ribi Roland wrote:
> I found the solution...
> 
> rtfm! 
> 
> for solaris --enable-smartalloc is important... :/
> 
> 
> > -Original Message-
> > From: Ribi Roland [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, November 08, 2005 9:33 AM
> > To: 'bacula-users@lists.sourceforge.net'
> > Subject: FW: [Bacula-users] Performance testing and tuning
> > 
> > 
> > 
> > Hi,
> > 
> > We have Solaris 10 Servers, 3x Sun V240, one of them is the 
> > backup-server
> > with the storage deamon and director the others are backed up 
> > to the tape at
> > the backup-server.
> > 
> > The backup-server has a 1.3GHz CPU and 1Gb RAM, U320 
> > SCSI-Kontroller (on
> > board) and a second U320 controller with the Tandberg SDLT320 
> > tapedrive.
> > Networkspeed is at 1Gbit. We alos testet some transfers with 
> > "Pipe Magic
> > Tools" and get more then 30Mb/s in the average. The disks I 
> > testet with
> > bonnie++ and the write/reed is more then 30Mb/s.
> > 
> > We compiled bacula to work with postgre-sql.
> > 
> > All your tips did'nt change anything, I have only a throuput 
> > of arround
> > 300-350Kb/s. I also added the spooling-parameter.
> > 
> > With spooling it takes arround 10-20min to spool data and 
> > write them down to
> > tape at 2MB/s. But the save of the attributes (writing to the 
> > catalog) needs
> > 2h or more!
> > 
> > I tried many combinations and recompiled bacula also with 
> > Postgres 8.1,
> > Postgres 8.0, with CFLAGS=-O2/CXXFLAGS=-O2
> > 
> > I compiled it with gcc 3.4.4, which I compiled myself, 
> > because on solaris 10
> > there is'nt any binary distribution.
> > 
> > I think that there are 2 Problems, one is the performance of Bacula or
> > Postgres itself  and also some problems with the tape. I 
> > did'nt get more
> > then 2.6Mb/s with the ufsdump from solaris.
> > 
> > Anybody can help?
> > 
> > > -Original Message-
> > > From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> > > Sent: Friday, November 04, 2005 1:59 PM
> > > To: Ribi Roland
> > > Subject: RE: [Bacula-users] Performance testing and tuning
> > > 
> > > 
> > > Hi,
> > > 
> > > I changed the MaximumNetworkBufferSize to 65536 in the file 
> > > and storage
> > > daemon configuration and increased the MaximumBlockSize to 
> > > 262144 in the
> > > storage daemon configuration.
> > > 
> > > If you change the MaximumBlockSize you must run the btape 
> > > tests again so
> > > you are sure the value is valid for your tape drive.
> > > 
> > > I think the MaximumNetworkBufferSize was the parameter that 
> > > gave me the
> > > performance increase.
> > > 
> > > What do you think of adding more performance testing tools 
> > to bacula?
> > > 
> > > What system do you have (hardware, os, tape, disk)?
> > > 
> > > Please reply with your results.
> > > 
> > > /Ove
> > > 
> > > On Fri, 2005-11-04 at 13:02, Ribi Roland wrote:
> > > > How did you increase the performance?
> > > > 
> > > > I have the same slow system (around 300KB/s); what did you change in the
> > > > configuration that helped to increase the performance?
> > > > 
> > > > It would help me...
> > > > 
> > > > Thank You
> > > > 
> > > > > -Original Message-
> > > > > From: Ove Risberg [mailto:[EMAIL PROTECTED] 
> > > > > Sent: Friday, November 04, 2005 12:31 PM
> > > > > To: bacula-users
> > > > > Subject: [Bacula-users] Performance testing and tuning
> > > > > 
> > > > > 
> > > > > Hi,
> > > > > 
> > > > > I have tried to get some decent performance in my bacula 
> > > configuration
> > > > > and after reading bacula mail lists, documentation and some 
> > > > > source code
> > > > > I increased it from 300KB/s to 3MB/s when backing up the root 
> > > > > filesystem.
> > > > > 
> > > > > After reading the mail lists I found out that I was not the 
> > > > > first bacula

Re: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Kern Sibbald
On Tuesday 08 November 2005 11:10, Ribi Roland wrote:
> I found the solution...
>
> rtfm!
>
> for solaris --enable-smartalloc is important... :/

I would be curious to know what problem this solved.  I doubt that Bacula will 
compile without --enable-smartalloc simply because I have never tried it.  If 
you can get it to compile without --enable-smartalloc, it probably will run 
faster and use less memory.  However, there will be no buffer overrun 
checking ...

>
> > -Original Message-
> > From: Ribi Roland [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, November 08, 2005 9:33 AM
> > To: 'bacula-users@lists.sourceforge.net'
> > Subject: FW: [Bacula-users] Performance testing and tuning
> >
> >
> >
> > Hi,
> >
> > We have Solaris 10 Servers, 3x Sun V240, one of them is the
> > backup-server
> > with the storage deamon and director the others are backed up
> > to the tape at
> > the backup-server.
> >
> > The backup-server has a 1.3GHz CPU and 1Gb RAM, U320
> > SCSI-Kontroller (on
> > board) and a second U320 controller with the Tandberg SDLT320
> > tapedrive.
> > Networkspeed is at 1Gbit. We alos testet some transfers with
> > "Pipe Magic
> > Tools" and get more then 30Mb/s in the average. The disks I
> > testet with
> > bonnie++ and the write/reed is more then 30Mb/s.
> >
> > We compiled bacula to work with postgre-sql.
> >
> > All your tips did'nt change anything, I have only a throuput
> > of arround
> > 300-350Kb/s. I also added the spooling-parameter.
> >
> > With spooling it takes arround 10-20min to spool data and
> > write them down to
> > tape at 2MB/s. But the save of the attributes (writing to the
> > catalog) needs
> > 2h or more!
> >
> > I tried many combinations and recompiled bacula also with
> > Postgres 8.1,
> > Postgres 8.0, with CFLAGS=-O2/CXXFLAGS=-O2
> >
> > I compiled it with gcc 3.4.4, which I compiled myself,
> > because on solaris 10
> > there is'nt any binary distribution.
> >
> > I think that there are 2 Problems, one is the performance of Bacula or
> > Postgres itself  and also some problems with the tape. I
> > did'nt get more
> > then 2.6Mb/s with the ufsdump from solaris.
> >
> > Anybody can help?
> >
> > > -Original Message-
> > > From: Ove Risberg [mailto:[EMAIL PROTECTED]
> > > Sent: Friday, November 04, 2005 1:59 PM
> > > To: Ribi Roland
> > > Subject: RE: [Bacula-users] Performance testing and tuning
> > >
> > >
> > > Hi,
> > >
> > > I changed the MaximumNetworkBufferSize to 65536 in the file
> > > and storage
> > > daemon configuration and increased the MaximumBlockSize to
> > > 262144 in the
> > > storage daemon configuration.
> > >
> > > If you change the MaximumBlockSize you must run the btape
> > > tests again so
> > > you are sure the value is valid for your tape drive.
> > >
> > > I think the MaximumNetworkBufferSize was the parameter that
> > > gave me the
> > > performance increase.
> > >
> > > What do you think of adding more performance testing tools
> >
> > to bacula?
> >
> > > What system do you have (hardware, os, tape, disk)?
> > >
> > > Please reply with your results.
> > >
> > > /Ove
> > >
> > > On Fri, 2005-11-04 at 13:02, Ribi Roland wrote:
> > > > How did you increase the performance?
> > > >
> > > > I have the same slow system (around 300KB/s); what did you change in the
> > > > configuration that helped to increase the performance?
> > > >
> > > > It would help me...
> > > >
> > > > Thank You
> > > >
> > > > > -Original Message-
> > > > > From: Ove Risberg [mailto:[EMAIL PROTECTED]
> > > > > Sent: Friday, November 04, 2005 12:31 PM
> > > > > To: bacula-users
> > > > > Subject: [Bacula-users] Performance testing and tuning
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > I have tried to get some decent performance in my bacula
> > >
> > > configuration
> > >
> > > > > and after reading bacula mail lists, documentation and some
> > > > > source code
> > > > > I increased it from 300KB/s to 3MB

RE: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Ribi Roland
It didn't resolve anything...

I also disabled gzip compression at the same time.

If I enable gzip I have the same low speed (with or w/o
--enable-smartalloc).

At the moment bacula runs fast enough. The only problem to solve is
OS/hardware related. My tape runs at another machine with 17-20MB/s, and at
the new Solaris 10 server with a SCSI-U320 controller I get only 2.6MB/s...


> -Original Message-
> From: Kern Sibbald [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 08, 2005 2:24 PM
> To: bacula-users@lists.sourceforge.net
> Cc: Ribi Roland
> Subject: Re: [Bacula-users] Performance testing and tuning
> 
> 
> On Tuesday 08 November 2005 11:10, Ribi Roland wrote:
> > I found the solution...
> >
> > rtfm!
> >
> > for solaris --enable-smartalloc is important... :/
> 
> I would be curious to know what problem this solved.  I doubt 
> that Bacula will 
> compile without --enable-smartalloc simply because I have 
> never tried it.  If 
> you can get it to compile without --enable-smartalloc, it 
> probably will run 
> faster and use less memory.  However, there will be no buffer overrun 
> checking ...
> 
> >
> > > -Original Message-
> > > From: Ribi Roland [mailto:[EMAIL PROTECTED]
> > > Sent: Tuesday, November 08, 2005 9:33 AM
> > > To: 'bacula-users@lists.sourceforge.net'
> > > Subject: FW: [Bacula-users] Performance testing and tuning
> > >
> > >
> > >
> > > Hi,
> > >
> > > We have Solaris 10 Servers, 3x Sun V240, one of them is the
> > > backup-server
> > > with the storage deamon and director the others are backed up
> > > to the tape at
> > > the backup-server.
> > >
> > > The backup-server has a 1.3GHz CPU and 1Gb RAM, U320
> > > SCSI-Kontroller (on
> > > board) and a second U320 controller with the Tandberg SDLT320
> > > tapedrive.
> > > Networkspeed is at 1Gbit. We alos testet some transfers with
> > > "Pipe Magic
> > > Tools" and get more then 30Mb/s in the average. The disks I
> > > testet with
> > > bonnie++ and the write/reed is more then 30Mb/s.
> > >
> > > We compiled bacula to work with postgre-sql.
> > >
> > > All your tips did'nt change anything, I have only a throuput
> > > of arround
> > > 300-350Kb/s. I also added the spooling-parameter.
> > >
> > > With spooling it takes arround 10-20min to spool data and
> > > write them down to
> > > tape at 2MB/s. But the save of the attributes (writing to the
> > > catalog) needs
> > > 2h or more!
> > >
> > > I tried many combinations and recompiled bacula also with
> > > Postgres 8.1,
> > > Postgres 8.0, with CFLAGS=-O2/CXXFLAGS=-O2
> > >
> > > I compiled it with gcc 3.4.4, which I compiled myself,
> > > because on solaris 10
> > > there is'nt any binary distribution.
> > >
> > > I think that there are 2 Problems, one is the performance 
> of Bacula or
> > > Postgres itself  and also some problems with the tape. I
> > > did'nt get more
> > > then 2.6Mb/s with the ufsdump from solaris.
> > >
> > > Anybody can help?
> > >
> > > > -Original Message-
> > > > From: Ove Risberg [mailto:[EMAIL PROTECTED]
> > > > Sent: Friday, November 04, 2005 1:59 PM
> > > > To: Ribi Roland
> > > > Subject: RE: [Bacula-users] Performance testing and tuning
> > > >
> > > >
> > > > Hi,
> > > >
> > > > I changed the MaximumNetworkBufferSize to 65536 in the file
> > > > and storage
> > > > daemon configuration and increased the MaximumBlockSize to
> > > > 262144 in the
> > > > storage daemon configuration.
> > > >
> > > > If you change the MaximumBlockSize you must run the btape
> > > > tests again so
> > > > you are sure the value is valid for your tape drive.
> > > >
> > > > I think the MaximumNetworkBufferSize was the parameter that
> > > > gave me the
> > > > performance increase.
> > > >
> > > > What do you think of adding more performance testing tools
> > >
> > > to bacula?
> > >
> > > > What system do you have (hardware, os, tape, disk)?
> > > >
> > > > Please reply with your results.
> > > >
> > > > /Ove
> > > >
> > > > On Fri, 2005-11-04

RE: [Bacula-users] Performance testing and tuning

2005-11-08 Thread John Stoffel

Ribi> I also disabled gzip compression at the same time. 

I'd disable gzip anyway; let the drive with its dedicated compression
do the work.  Also, make sure that on your Sun boxes you have the
Gigabit ethernet cards (if not using the onboard ones) in the 66MHz
PCI slots, and not the 33MHz ones.  You'll see a big performance
increase there.  

Ribi> If I enable gzip I have the same low speed (with or w/o
Ribi> --enable-smartalloc).

Turn it off, let the drive do the work.

Ribi> At the moment bacula runs fast enough. The only problem to
Ribi> solve is OS/hardware related. My tape runs at another machine
Ribi> with 17-20MB/s and at the new Solaris 10 server with a
Ribi> SCSI-U320 controller I get only 2.6MB/s...

Sounds like a cable problem to me.  Or that the system where it runs
quickly enough has an older/slower controller which is negotiating
properly with the drive.  

See if you can go into the BIOS of the controller with the tape drive
and turn down the speed.  You don't care about SCSI 320 speed; try the
160 or even 80MB/sec setting if you can.

Also, make sure the firmware on your drive is up to date.  

And of course, make sure you're not mixing differential with regular
or LVD with HVD on the controller and tape drive.  It will make a huge
difference.

Can you check the output of 'lusinfo -v' on both systems with the same
tape drive? 

Good luck!
John


---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


RE: [Bacula-users] Performance testing and tuning

2005-11-08 Thread Ribi Roland
> Ribi> If I enable gzip I have the same low speed (with or w/o
> Ribi> --enable-smartalloc).
> 
> Turn it off, let the drive do the work.

Yes, that is how I configured it now.

> Ribi> At the moment bacula runs fast enough. The only problem to
> Ribi> solve is OS/hardware related. My tape runs at another machine
> Ribi> with 17-20MB/s and at the new Solaris 10 server with a
> Ribi> SCSI-U320 controller I get only 2.6MB/s...
> 
> Sounds like a cable problem to me.  Or that the system where it runs
> quickly enough is an older/slower controller which is negotiating
> properly to the drive.  
> 
> See if you can go into the BIOS of the controller with the tape drive
> and turn down the speed.  You don't care about SCSI 320 speed, try to
> 160 or even 80mbytes/sec setting if you can.
> 

At the moment it looks like the termination of the SCSI bus is not rated for
U320 SCSI.

I will change the terminator and test again. 

At the onboard interface (U160 SCSI) the tape gets 19MB/s.


---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-10 Thread Kern Sibbald
Hello,

For anyone interested in performance tuning, I have added documentation to 
src/version.h in the current CVS for all the performance #defines that are 
available.  Also, you can get valuable information by disabling Catalog 
Updates using the "Use Catalog = no" directive in your pool.
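For example (a sketch -- the pool name is illustrative, and this is only for 
measurement, since jobs written this way cannot be restored by browsing the 
catalog):

    # bacula-dir.conf
    Pool {
      Name = TuningTest
      Pool Type = Backup
      Use Catalog = no    # skip catalog updates to isolate database overhead
    }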

On Tuesday 08 November 2005 22:50, John Stoffel wrote:
> Ribi> I also disabled gzip compression at the same time.
>
> I'd disable gzip anyway; let the drive with its dedicated compression
> do the work.  Also, make sure that on your Sun boxes you have the
> Gigabit ethernet cards (if not using the onboard ones) in the 66mhz
> PCI slots, and not the 33mhz ones.  You'll see a big performance
> increase there.
>
> Ribi> If I enable gzip I have the same low speed (with or w/o
> Ribi> --enable-smartalloc).
>
> Turn it off, let the drive do the work.
>
> Ribi> At the moment bacula runs fast enough. The only problem to
> Ribi> solve is OS/hardware related. My tape runs at another machine
> Ribi> with 17-20MB/s and at the new Solaris 10 server with a
> Ribi> SCSI-U320 controller I get only 2.6MB/s...
>
> Sounds like a cable problem to me.  Or that the system where it runs
> quickly enough is an older/slower controller which is negotiating
> properly to the drive.
>
> See if you can go into the BIOS of the controller with the tape drive
> and turn down the speed.  You don't care about SCSI 320 speed, try to
> 160 or even 80mbytes/sec setting if you can.
>
> Also, make sure the firmware on your drive is up to date.
>
> And of course, make sure you're not mixing differential with regular
> or LVD with HVD on the controller and tape drive.  It will make a huge
> difference.
>
> Can you check the output of 'lusinfo -v' on both systems with the same
> tape drive?
>
> Good luck!
> John
>
>
> ---
> SF.Net email is sponsored by:
> Tame your development challenges with Apache's Geronimo App Server.
> Download it for free - -and be entered to win a 42" plasma tv or your very
> own Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

-- 
Best regards,

Kern

  (">
  /\
  V_V


---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geronimo App Server. Download
it for free - -and be entered to win a 42" plasma tv or your very own
Sony(tm)PSP.  Click here to play: http://sourceforge.net/geronimo.php
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Performance testing and tuning

2005-11-15 Thread Kern Sibbald
On Friday 04 November 2005 19:53, Ove Risberg wrote:
> Hi,
>
> On Fri, 2005-11-04 at 14:14, Kern Sibbald wrote:
> > Hello,
> >
> > On Friday 04 November 2005 12:31, Ove Risberg wrote:
> > > Hi,
> > >
> > > I have tried to get some decent performance in my bacula configuration
> > > and after reading bacula mail lists, documentation and some source code
> > > I increased it from 300KB/s to 3MB/s when backing up the root
> > > filesystem.
> > >
> > > After reading the mail lists I found out that I was not the first
> > > bacula user with performance problems and I do not think I will be the
> > > last.
> > >
> > > One thing I was missing while doing this was more tools to test and
> > > tune the performance of tape, database, network and reading files.
> > >
> > > btape has great functions for testing if the tape drive and autochanger
> > > is working but I would like it to test different parameters and suggest
> > > changes to my configuration. I do not mind if it would take a long time to
> > > do this because it would take me a lot longer to do it myself.
> > >
> > > Network performance would be much easier to test and tune if I could
> > > tell a file daemon to send data to a storage daemon and report the
> > > transfer rate without reading files, updating database or writing to
> > > tape.
> > >
> > > Reading files can be done in a similar way by telling a file daemon to
> > > read files and report the transfer rate without sending the files to
> > > any storage daemon.
> > >
> > > I do not know how to test and tune the database...
> > > but someone must know ;-)
> > >
> > > When/If these tools are available it would be possible to write a
> > > performance tuning tool to help users to test and tune their bacula
> > > configuration.
> > >
> > > Is it possible to do this?
> >
> > Yes, of course, this is possible.  However, there is a question of
> > priorities and manpower. For the moment, manpower is rather limited.
> >
> > You might start by describing what you did, how/why you decided to change
> > what you did, and what the results of each change were.  Already, this
> > would be a big help to others.
>
> I will try to describe what I did...
>
> I first found some network problems: the bacula server was using 100MBit
> half-duplex and the switch was using 100MBit full-duplex and this is not
> good.
>
> After fixing this the performance was not better... so I tried to scp a
> large file in both directions at the same time to see if there is
> something wrong with the network but I got many MB/s in both directions
> at the same time so the network is not the problem.
>
> Someone told me that LTO drives want large block sizes to perform well
> so I increased MaximumBlockSize to 512K.
> btape complained about the block size so I decreased it to 256K.
> Why is the default value 63K and not 64K?

Because 64K is not supported by all tape drives and will on older drives cause 
two records to be written instead of one.  It also can cause severe memory 
fragmentation.  See the Bacula archives for more details on this. The guys 
who understand these things *much* better than I do gave a lot of input.

> The performance was not better after fixing this.
>
> Then I found several mails in the bacula-users mail list about
> increasing MaximumNetworkBufferSize in the file and storage daemon
> configuration.
> I first increased it to 65536 and now I got about 3MB/s ;-)
> Then I increased it to 512K, the same value our Netbackup server uses,
> but the performance was not better and I was not able to restore all
> files. The storage daemon died and when trying to read the backup files
> (I made these backup tests to files) with bls I could only list the first
> files in the backup and then bls aborted with some error message.
>
> So I changed MaximumNetworkBufferSize to 65535 again.
>
> > Bacula does have some of the capabilities that you described. Some are
> > controlled by directives, and others are controlled by recompiling with
> > #defines changed in src/version.h.  However, the defines are a bit out of
> > date and may not work.  To make it all work and to put it together in a
> > coherent way that users can try it, is a rather big project.
>
> Do you have any notes, documentation or test scripts using the directives
> or defines in version.h?
>
> I will try to do some testing if I get some hints on how to use these
> features and if I get some useful results I will share it with you all
> on the list.

-- 
Best regards,

Kern

  (">
  /\
  V_V
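For anyone retracing the MaximumNetworkBufferSize experiments above: the directive
lives in the FileDaemon resource of bacula-fd.conf and the Storage resource of
bacula-sd.conf, and both ends should agree. A minimal sketch with the 64K value
that proved stable here (names and paths are placeholders):

FileDaemon {                             # bacula-fd.conf on the client
  Name = client1-fd                      # hypothetical client name
  Working Directory = /var/bacula/working
  Pid Directory = /var/run
  Maximum Network Buffer Size = 65536    # 64K worked; 512K broke restores above
}

Storage {                                # bacula-sd.conf on the server
  Name = server-sd                       # hypothetical storage daemon name
  Working Directory = /var/bacula/working
  Pid Directory = /var/run
  Maximum Network Buffer Size = 65536    # keep in step with the FD value
}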




[Bacula-users] performance decaying with huge backup

2005-11-16 Thread matthew_buckland
Hello list,

I've been running bacula very happily now for nearly a year and am so far very 
pleased. However the amount of data that is now being backed up with this 
system is huge. The problem I have is that for the first 3-4GB of backup, I'm 
getting a nice 6-7Mbytes/s, which is great! Then beyond that it just 
continuously decreases. When I get to the fifth tape the transfer rate is 
about 700kbytes/s. My setup is as follows:

Bacula director, storage and file daemon version 1.36.2, running on FreeBSD 
version 5.3-RELEASE-p5. Database is PostgreSQL version 7.4.6 running on 
Debian Linux, kernel 2.6.8.

I'm trying to back up 380GB from a firewire disk onto 40/70 DLT tapes, with the 
drive attached to a PCI SCSI controller (all on the same machine as the 
director). Am I insane?!? If so, please say so. If not, read on.

The load average during the backup is almost 1, and on the database server it is 
almost nothing.

Does anybody have any clues for improving this performance, or should I just 
split the job into smaller chunks?

Thanks in advance for any help, even if it's just to advise me that I'm 
insane.


Matt
-- 
Matthew Buckland, Network / Support Analyst
Wordbank Limited
33 Charlotte Street, London W1T 1RR
Direct line: +44 (0) 20 7903 8847
Fax: +44 (0) 20 7903 





Re: [Bacula-users] performance&design&configuration challenges

2020-10-05 Thread Josh Fisher


On 10/5/20 9:20 AM, Žiga Žvan wrote:


Hi,
I'm having some performance challenges. I would appreciate an educated 
guess from an experienced bacula user.


I'm replacing old backup software that writes to a tape drive with bacula 
writing to disk. The results are:
a) windows file server backup from a deduplicated drive (1,700,000 
files, 900 GB data, deduplicated space used 600 GB). *Bacula: 12 
hours, old software: 2.5 hours*
b) linux file server backup (50,000 files, 166 GB data). *Bacula: 3.5 
hours, old software: 1 hour*.


I have tried to:
a) turn off compression&encryption. The result is the same: backup 
speed around 13 MB/sec.
b) change destination storage (from a new ibm storage attached over 
nfs, to a local SSD disk attached on bacula server virtual machine). 
It took 2 hours 50 minutes to back up the linux file server (instead of 
3.5 hours). A sequential write test with the linux dd command shows a 
write speed of 300 MB/sec for the IBM storage and 600 MB/sec for the 
local SSD storage (far better than the actual throughput).




There are directives to enable/disable spooling of both data and the 
attributes (metadata) being written to the catalog database. When using 
disk volumes, you probably want to disable data spooling and enable 
attribute spooling. Attribute spooling prevents a database write after 
each file backed up and instead does the database writes as a batch at 
the end of the job. Data spooling would rarely if ever be needed when 
writing to disk media.


With attribute spooling enabled, you can make a rough guess as to 
whether DB performance is the problem by judging how long the job stays 
in the 'attribute despooling' state. The status dir command in bconsole 
shows the job state.
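A minimal sketch of those two directives in a JobDefs resource, assuming disk 
volumes (the names are placeholders):

JobDefs {
  Name = "disk-backup-defaults"    # hypothetical defaults for disk-backed jobs
  Type = Backup
  Spool Data = no                  # data spooling buys little when writing to disk volumes
  Spool Attributes = yes           # batch the catalog inserts at the end of the job
  Messages = Standard
  Priority = 10
}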



The network bandwidth is 1 Gbit/s (1 Gbit/s on the client, 10 Gbit/s on the 
server) so I guess this is not a problem; however I have noticed that 
bacula-fd on the client side uses 100% of CPU.


I'm using:
-bacula server version 9.6.5
-bacula client version 5.2.13 (original from centos 6 repo).

Any idea what is wrong and/or what performance should I expect?
I would also appreciate some answers on the questions below (I think 
this email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested bacula sw (9.6.5) and I must say I'm quite happy with 
the results (eg. compression, encryption, configurability). However 
I have some configuration/design questions I hope you can help me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using the dummy cloud driver that writes to local file storage. 
A volume is a directory with fileparts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on 
disk after the retention period expires. If possible, I would like to 
delete just the fileparts with expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and 
a central schedule definition for all my clients. I have noticed that 
my incremental job gets promoted to full after the monthly backup ("No 
prior Full backup Job record found"; because the monthly backup is a 
separate job, but bacula searches for full backups inside the same 
job). Could you please suggest a better configuration? If possible, I 
would like to keep the central schedule definition (if I manipulate 
pools in a schedule resource, I would need to define them per client).


b) I would like to delete expired backups on disk (and in the catalog 
as well). At the moment I'm using one volume in a 
daily/weekly/monthly pool per client. In a volume, there are 
fileparts belonging to expired backups (eg. part1-23 in the output 
below). I have tried to solve this with purge/prune scripts in my 
BackupCatalog job (as suggested in the whitepapers) but the data does 
not get deleted. Is there any way to delete fileparts? Should I 
create separate volumes after the retention period? Please suggest a 
better configuration.


c) Do I need a restore job for each client? I would just like to 
restore a backup on the same client, defaulting to the /restore folder... 
When I use the bconsole restore all command, the wizard asks me all the 
questions (eg. 5: last backup for a client, which client, fileset...) 
but at the end it asks for a restore job which changes all previously 
defined things (eg. client).


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send it to the bacula server, 
which writes it to one central storage system. Jobs are processed 
in sequential order (one at a time). Do you expect any significant 
performance gain if I implement an autochanger so that jobs can run 
simultaneously?
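For what it is worth, concurrency does not strictly require an autochanger: 
raising Maximum Concurrent Jobs at each level lets jobs overlap, with the caveat 
that jobs sharing one device interleave their blocks within the volume. A sketch, 
reusing the FSTestBackup name from this thread and placeholders elsewhere:

Director {                         # bacula-dir.conf
  Name = bacula-dir
  Maximum Concurrent Jobs = 4      # allow several jobs to run at once
  # ... remaining directives unchanged
}

Storage {                          # bacula-dir.conf storage definition
  Name = FSTestBackup
  Maximum Concurrent Jobs = 4      # must be raised here as well
  # ... address, password, device, media type as before
}

Device {                           # bacula-sd.conf
  Name = FileStorage               # hypothetical disk device
  Media Type = File
  Archive Device = /storage        # placeholder path
  Maximum Concurrent Jobs = 4      # concurrent jobs interleave in the volume
}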


Relevant part of configuration attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example *(fileparts 1-23 should be deleted)*:*
[root@bacula c

Re: [Bacula-users] performance&design&configuration challenges

2020-10-06 Thread Žiga Žvan
I believe that I have my spooling attributes set correctly on the jobdefs 
(see below): Spool Attributes = yes; Spool Data defaults to no. Any 
other ideas for the performance problems?

Regard,
Ziga



JobDefs {
  Name = "bazar2-job"
  Type = Backup
  Level = Incremental
  Client = bazar2.kranj.cetrtapot.si-fd # Client name: must match the Name in bacula-fd.conf on the client side

  FileSet = "bazar2-fileset"
  Schedule = "WeeklyCycle" #schedule : see in bacula-dir.conf
  Storage = FSTestBackup
#  Storage = FSOciCloudStandard
  Messages = Standard
  Pool = bazar2-daily-pool
  Spool Attributes = yes   # Better for backup to disk
  Max Full Interval = 15 days # Ensure that full backup exist
  Priority = 10
  Write Bootstrap = "/opt/bacula/working/%c.bsr"
}

status dir not showing files transferred:

 JobId  Type Level Files Bytes  Name  Status
==
   714  Back Full  0 0  bazar2-monthly-backup is running


On 06.10.2020 09:14, Žiga Žvan wrote:


Thanks Josh for your reply and sorry for my previous duplicate email.

I will try to disable data spooling and report back the results.

What about manipulating retention? Currently I have different jobs for 
weekly full and monthly full backup (see below), but that triggers a 
full backup instead of an incremental on Monday (because I use a 
different job resource). Is there a better way to have a monthly backup 
with longer retention?


Kind regards,
Ziga

#For all clients

Schedule {
  Name = "MonthlyFull"
  Run = Full 1st fri at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Full sun-sat at 23:10
}

#Job for each client

Job {
  Name = "oradev02-backup"
  JobDefs = "oradev02-job"
  Full Backup Pool = oradev02-weekly-pool
  Incremental Backup Pool = oradev02-daily-pool
}

Job {
  Name = "oradev02-monthly-backup"
  JobDefs = "oradev02-job"
  Pool = oradev02-monthly-pool
  Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf 
(monthly pool with longer retention)

}
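One commonly suggested arrangement that avoids the separate monthly job (and with 
it the unwanted promotion to Full) is a single job whose schedule carries Level 
and Pool overrides on its Run lines. A sketch reusing the pool names from the 
config above:

Schedule {
  Name = "WeeklyCycleWithMonthly"    # hypothetical replacement for WeeklyCycle + MonthlyFull
  Run = Level=Full Pool=oradev02-monthly-pool 1st fri at 23:05
  Run = Level=Full Pool=oradev02-weekly-pool 2nd-5th fri at 23:05
  Run = Level=Incremental Pool=oradev02-daily-pool sat-thu at 23:05
}

With all three levels under one Job name, the Monday incremental finds its prior 
Full and is not promoted, while retention still differs per pool.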




Re: [Bacula-users] performance&design&configuration challenges

2020-10-06 Thread Heitor Faria
Hello Ziga,

Your client is probably too old for the 9.6.x Director.
Even CentOS 6 is old, most likely at end of life.
Other than that you can try some tuning: 
http://www.bacula.lat/tuning-better-performance-and-treatment-of-backup-bottlenecks/?lang=en

Rgds.
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]


Re: [Bacula-users] performance&design&configuration challenges

2020-10-06 Thread Josh Fisher


On 10/6/20 3:45 AM, Žiga Žvan wrote:
I believe that I have my spooling attributes set correctly on the jobdefs 
(see below): Spool Attributes = yes; Spool Data defaults to no. Any 
other ideas for the performance problems?

Regard,
Ziga



The client version is very old. First try updating the client to 9.6.x.

For testing purposes, create another storage device on local disk and 
write a full backup to that. If it is much faster to local disk storage 
than it is to the s3 driver, then there may be an issue with how the s3 
driver is compiled, version of s3 driver, etc.


Otherwise, with attribute spooling enabled, the status of the job as 
given by the status dir command in bconsole will change to "despooling 
attributes" or something like that when the client has finished sending 
data. That is the period at the end of the job when the spooled 
attributes are being written to the catalog database. If despooling is 
taking a long time, then database performance might be the bottleneck.







Re: [Bacula-users] performance&design&configuration challenges

2020-10-06 Thread Žiga Žvan

Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero 
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:

-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottleneck.
b) testing file copy from linux centos 6 server to bacula server with 
rsync (eg. rsync --info=progress2 source destination)

-writing to local storage: 82 MB/sec
-writing to IBM storage: 85 MB/sec
I guess this is ok for a 1 Gbit/s network link.
c) using bacula:
-linux centos 6 file server: 13 MB/sec to IBM storage, 16 MB/sec to 
local SSD storage (client version 5.2.13).
-windows file server: around 18 MB/sec - there could be some additional 
problem, because I perform the backup from a deduplicated drive (client 
version 9.6.5)
d) I have tried to manipulate encryption/compression settings, but I 
believe there is no significant difference


I think that the bacula rate (15 MB/sec) is quite slow compared to the 
file copy results (85 MB/sec) from the same client/server. It should be 
better... Do you agree?


I have implemented an autochanger in order to perform backups from both 
servers at the same time. We shall see the results tomorrow.
I have not changed the version of the client on the linux server yet. My 
windows server already uses the new client version, so that was not my 
first idea... Will try this tomorrow if needed.


What about retention?
I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

At the moment I use a different job/schedule for the monthly backup, but 
that triggers a full backup also on the Monday after the monthly backup 
(I would like to run an incremental then). Is there a better way? 
Relevant parts of conf below...


Regards,
Ziga

JobDefs {
Name = "bazar2-job"
Schedule = "WeeklyCycle"
...
}

Job {
  Name = "bazar2-backup"
  JobDefs = "bazar2-job"
  Full Backup Pool = bazar2-weekly-pool
  Incremental Backup Pool = bazar2-daily-pool
}

Job {
  Name = "bazar2-monthly-backup"
  Level = Full
  JobDefs = "bazar2-job"
  Pool = bazar2-monthly-pool
  Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf (monthly 
pool with longer retention)

}





Example output:

06-Oct 12:19 bacula-dir JobId 714: Bacula bacula-dir 9.6.5 (11Jun20):
  Build OS:   x86_64-redhat-linux-gnu-bacula redhat (Core)
  JobId:  714
  Job:bazar2-monthly-backup.2020-10-06_09.33.25_03
  Backup Level:   Full
  Client: "bazar2.kranj.cetrtapot.si-fd" 5.2.13 (19Jan13) 
x86_64-redhat-linux-gnu,redhat,(Core)
  FileSet:"bazar2-fileset" 2020-09-30 15:40:26
  Pool:   "bazar2-monthly-pool" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"FSTestBackup" (From Job resource)
  Scheduled time: 06-Oct-2020 09:33:15
  Start time: 06-Oct-2020 09:33:28
  End time:   06-Oct-2020 12:19:19
  Elapsed time:   2 hours 45 mins 51 secs
  Priority:   10
  FD Files Written:   53,682
  SD Files Written:   53,682
  FD Bytes Written:   168,149,175,433 (168.1 GB)
  SD Bytes Written:   168,158,044,149 (168.1 GB)
  Rate:   16897.7 KB/s
  Software Compression:   36.6% 1.6:1
  Comm Line Compression:  None
  Snapshot/VSS:   no
  Encryption: no
  Accurate:   no
  Volume name(s): bazar2-monthly-vol-0300
  Volume Session Id:  11
  Volume Session Time:1601893281
  Last Volume Bytes:  337,370,601,852 (337.3 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK



Re: [Bacula-users] performance&design&configuration challenges

2020-10-07 Thread Joe GREER
Ziga,

It is sad to hear you're having issues with Bacula. Some of your concerns
have been here since 2005. The only things you can do to speed things up
are to spool the whole job to very fast disk (SSD), break up your large
job (number of files), make sure your database is on very fast disk (SSD),
and have a person that is very familiar with Postgres look at your DB to
see if it needs some tweaking.
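As a sketch of the first suggestion: the spool location and cap live in the 
Device resource of bacula-sd.conf, and data spooling is switched on per job. The 
SSD path and sizes below are placeholders:

Device {
  Name = "LTO-Drive"               # hypothetical tape device
  Media Type = LTO
  Archive Device = /dev/nst0
  Spool Directory = /ssd/spool     # scratch space on fast SSD
  Maximum Spool Size = 200G        # cap total spool usage on that disk
}

Job {
  Name = "big-fileserver-backup"   # hypothetical job
  Spool Data = yes                 # stream client data to the SSD first, then to tape
  Spool Attributes = yes
  # ... rest of the job as usual
}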

Here is a post from Andreas Koch back in 2017 with similar issues, where
at the time VERY powerful hardware was getting poor performance:

"It appears that such trickery might be unnecessary if the Bacula FD could
perform something similar (hiding the latency of individual meta-data
operations) on-the-fly, e.g. by executing in a multi-threaded fashion. This
has been proposed as Item 15 in the Bacula `Projects' list since November
2005 but does not appear to have been implemented yet (?)."

https://sourceforge.net/p/bacula/mailman/message/36021244/

Thanks,
Joe







>>> Žiga Žvan  10/6/2020 9:56 AM >>>
Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero 
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:
-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottleneck.
b) testing file copy from linux centos 6 server to bacula server with 
rsync (eg. rsync --info=progress2 source destination)
-writing to local storage: 82 MB/sec
-writing to IBM storage: 85 MB/sec
I guess this is ok for a 1 GB network link.
c) using bacula:
-linux centos 6 file server: 13 MB/sec on IBM storage, 16 MB/sec on 
local SSD storage (version of client 5.2.13).
-windows file server:  around 18 MB/sec - there could be some
additional 
problem, because I perform a backup from a deduplicated drive (version

of client 9.6.5)
d) I have tried to manipulate encryption/compression settings, but I 
believe there is no significant difference

I think that  bacula rate (15 MB/sec) in quite slow comparing to file 
copy results (85 MB/sec) from the same client/server. It should be 
better... Do you agree?

I have implemented autochanger in order to perform backup from both 
servers at the same time. We shall see the results tomorrow.
I have not changed the version of the client on linux server yet. My 
windows server uses new client version, so that was not my first
idea... 
Will try this tomorrow if needed.

What about retention?
I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

At the moment I use different job/schedule for monthly backup, but that

triggers full backup also on Monday after monthly backup (I would like

to run incremental then). Is there a better way? Relevant parts of conf

below...

Regards,
Ziga

JobDefs {
Name = "bazar2-job"
Schedule = "WeeklyCycle"
...
}
Job {
   Name = "bazar2-backup"
   JobDefs = "bazar2-job"
   Full Backup Pool = bazar2-weekly-pool
   Incremental Backup Pool = bazar2-daily-pool
}
Job {
   Name = "bazar2-monthly-backup"
   Level = Full
   JobDefs = "bazar2-job"
   Pool = bazar2-monthly-pool
   Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf
(monthly 
pool with longer retention)
}




Example output:

06-Oct 12:19 bacula-dir JobId 714: Bacula bacula-dir 9.6.5 (11Jun20):
   Build OS: x86_64-redhat-linux-gnu-bacula redhat
(Core)
   JobId:   714
   Job:  
bazar2-monthly-backup.2020-10-06_09.33.25_03
   Backup Level: Full
   Client: "bazar2.kranj.cetrtapot.si-fd" 5.2.13
(19Jan13) x86_64-redhat-linux-gnu,redhat,(Core)
   FileSet:   "bazar2-fileset" 2020-09-30 15:40:26
   Pool: "bazar2-monthly-pool" (From Job
resource)
   Catalog:   "MyCatalog" (From Client resource)
   Storage:   "FSTestBackup" (From Job resource)
   Scheduled time: 06-Oct-2020 09:33:15
   Start time: 06-Oct-2020 09:33:28
   End time: 06-Oct-2020 

Re: [Bacula-users] performance&design&configuration challenges

2020-10-07 Thread Žiga Žvan
Thanks Joe for this info. It looks like it is a client issue, as described 
in that post (many small files; operations like stat() and fstat() consume 
100% cpu on the client).


I think that implementing the autochanger solves my problems (multiple 
clients will write at the same time and utilize the bandwidth).
I have installed and tested version 9.6.5 of the bacula client. It does 
not show any better performance (still 3.5 hours for 166 GB), however the 
bconsole status dir command now shows progress (files and bytes 
written). With the old client, this reporting did not work (it showed 0 
files until the end of the backup).


Regards,
Ziga


Re: [Bacula-users] Performance with latent networks

2011-04-11 Thread Gavin McCullagh
Hi,

On Mon, 11 Apr 2011, Peter Hoskin wrote:

> I'm using bacula to do backups of some remote servers, over the Internet
> encapsulated in OpenVPN (just to make sure things are encrypted and kept off
> public address space).
> 
> The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
> another bacula-fd in Canberra Australia on 100mbit Ethernet. The
> bacula-director is in Sydney Australia with ADSL2+ at full line sync. The
> latency to Montreal is about 300ms while the latency to Canberra is about
> 30ms.

The issue with transfer rates, I imagine, is between the bacula-fd and
bacula-sd.  Do we presume the -sd is in Sydney?  You don't say what speed
the Sydney ADSL2+ link is (though apparently it can manage at least
2.2MByte/sec).

Is that 300ms over the VPN?  If you run a long ping, is there any
noticeable packet loss?

> The problem I'm encountering: backups from the Montreal box will peak at a
> transfer rate of 100kb/sec despite my ability to do 2.2mb/sec via http, ftp,
> ssh, rsync, etc. from the same host.

Presumably these tests are between the -fd and -sd hosts along the same VPN.

Broadly AIUI, one bulk TCP transfer should be the same as another and the
two should be affected by latency in the same way.  This suggests to me
that there's something else going on.  Maybe the bacula-fd host is busy or
the bacula-fd itself is doing encryption or compression which is slowing
down its send rate?  Are these incremental backups or full?  Are you using
"accurate" backups (which need data to be sent to the fd from the dir)?

Have you checked the disk and cpu load on the bacula fd and sd during these
backups?

> So it appears the problem is network latency. Is there anything I can try to
> improve backup speeds from Montreal?

I may be wrong, but I'm not convinced it's just latency.
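One concrete check on the latency theory: a single TCP stream is bounded by
window size over round-trip time,

  throughput_max ~ window / RTT
  65536 B / 0.3 s ~ 213 KB/s     (untuned 64 KB window)
  2.2 MB/s * 0.3 s ~ 660 KB      (window the faster transfers imply)

so if window scaling is working for scp/rsync but a small buffer is being forced
on the Bacula sockets, it could produce exactly this pattern. Comparing the
advertised windows in a packet capture would settle it.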

Gavin






Re: [Bacula-users] Performance with latent networks

2011-04-11 Thread Peter Hoskin
Hi,

 

> The issue, I imagine with transfer rates is between the bacula-fd and
> bacula-sd.

Correct

> Do we presume the -sd is in Sydney?  You don't say what speed
> the Sydney ADSL2+ link is (though apparently it can manage at least
> 2.2MByte/sec).

24mbit down, 1mbit up

> Is that 300ms over the VPN?  If you run a long ping, is there any
> noticeable packet loss?

With ping -s 65500 -c 1000 I got:

1000 packets transmitted, 1000 received, 0% packet loss, time 999884ms
rtt min/avg/max/mdev = 421.944/1202.740/4353.202/578.943 ms, pipe 5

So there seems to be no real packet loss even with larger packets.

> Maybe the bacula-fd host is busy or
> the bacula-fd itself is doing encryption or compression which is slowing
> down its send rate?  Are these incremental backups or full?  Are you using
> "accurate" backups (which need data to be sent to the fd from the dir)?
>
> Have you checked the disk and cpu load on the bacula fd and sd during these
> backups?

I've checked both hosts. There is plenty of memory, CPU & bandwidth sitting
idle at both ends. The loadavg is below 0.5.

The hosts in Montreal actually have more spare resources than the Canberra
host which transfers oh so much better.

I'm not using accurate backups. But I do have GZIP turned on.

> > So it appears the problem is network latency. Is there anything I can try to
> > improve backup speeds from Montreal?



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Phil Stracchino
On 07/06/11 08:04, Adrian Reyer wrote:
> Hi,
> 
> I am using bacula for a bit more than a month now and the database gets
> slower and slower both for selecting stuff and for running backups as
> such.
> I am using a MySQL database, still myisam tables and I am considering
> switching to InnoDB tables or postgresql.


Just for the record:

Unless you are using merge tables (which, since the advent of table
partitioning, you shouldn't be) or full-text indexes, there is NO USE
CASE for MySQL for which the correct answer to "What storage engine
should I use for my tables?" is MyISAM.[1]  At this point, wherever
possible, EVERYONE should be using InnoDB.

(Also, preferably everyone should be using MySQL 5.5.  However, RHEL -
for example - isn't even shipping MySQL 5.1 yet, let alone 5.5.  They'll
probably start shipping MySQL 5.5 along about the time MySQL hits 6.5.)

There are many reasons for this, including performance, crash recovery,
and referential integrity (InnoDB offers full ACID guarantees, MyISAM
does not).  MyISAM was designed to run acceptably well on servers with
32MB or less RAM, and it not only *does not*, it CANNOT make effective
use of more than a small fraction of the memory available on modern-day
commodity hardware.  MyISAM cannot re-apply an interrupted transaction,
cannot roll back a failed transaction, and it is not robust in the face
of events like disk full conditions or unexpected power outages.


You will (still) hear a lot of FUD from people who frankly don't
understand the issues, about how InnoDB locks are slower than MyISAM
locks.  This is, *technically*, true.  However, it completely fails to
take into account that not only are InnoDB locks row level while MyISAM
locks are table level - meaning that many *write* transactions can
execute simultaneously on the same InnoDB table as long as they update
different rows, while NOTHING can execute simultaneously to any write
transaction on a MyISAM table - but, thanks to multi-view consistency,
InnoDB can execute most queries without needing to lock anything at all.
 The real performance situation is this:  With an identical transaction
load and identical data on identical hardware, on a 100% read query
load, which is the *best possible* performance case for MyISAM, InnoDB
still outperforms MyISAM by 60% or more.  On a query load that is 75%
reads, 25% writes, InnoDB outperforms MyISAM by over 400%.

So, yes.  Convert all of your tables to InnoDB.  Also, if you can,
update to MySQL 5.5 if you're not already using it.  (Properly
configured, InnoDB in MySQL 5.5 on Linux has a 150% performance increase
over InnoDB 5.1, and on Windows, 5.5 InnoDB performs 1500% better than
5.1 InnoDB, according to Oracle's benchmarks.)  Throw as much memory at
the InnoDB buffer pool as you can spare, pare down MyISAM buffers that
you're not using, and if you're using 5.5, look at the new
innodb_buffer_pool_instances variable.  You can get a basic check of
your MySQL configuration using MySQLtuner (free download from
http://mysqltuner.com/mysqltuner.pl; requires Perl, DBI.pm, and DBD::mysql.)
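As a sketch of that advice: the variable names below are real MySQL options, but
the sizes are illustrative and must be fitted to the machine's RAM.

# my.cnf fragment (MySQL 5.5)
[mysqld]
innodb_buffer_pool_size      = 4G     # as much as you can spare
innodb_buffer_pool_instances = 4      # 5.5+: split the pool to cut mutex contention
innodb_log_file_size         = 256M   # larger redo logs help bursts of catalog inserts
key_buffer_size              = 16M    # shrink MyISAM buffers you no longer use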



[1]  At this time, MySQL *itself* still requires MyISAM for the grant
tables.  Word from inside Oracle says that fixing this and enabling the
grant tables to also be stored in InnoDB is work in progress, and that
once this is accomplished, the entire MyISAM storage engine will
probably be deprecated.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Adrian Reyer
On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
> should I use for my tables?" is MyISAM.[1]  At this point, wherever
> possible, EVERYONE should be using InnoDB.

I will, if the current backup ever finishes - for a start on MySQL 5.1,
though (Debian squeeze). I am aware InnoDB has more stable performance
according to the posts I have found in various bacula-mysql related
threads. Your post gives me some hope that I can get away with converting
the table format instead of migrating to postgres, simply because I have
nicer backup scripts for mysql than for postgres.

> your MySQL configuration using MySQLtuner (free download from
> http://mysqltuner.com/mysqltuner.pl; requires Perl, DBI.pm, and DBD::mysql.)

I am using that one and tuning-primer.sh from http://www.day32.com/MySQL/

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Phil Stracchino
On 07/06/11 10:41, Adrian Reyer wrote:
> On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
>> should I use for my tables?" is MyISAM.[1]  At this point, wherever
>> possible, EVERYONE should be using InnoDB.
> 
> I will, if the current backup ever finishes - for a start on MySQL 5.1,
> though (Debian squeeze). I am aware InnoDB has more stable performance
> according to the posts I have found in various bacula-mysql related
> threads. Your post gives me some hope that I can get away with converting
> the table format instead of migrating to postgres, simply because I have
> nicer backup scripts for mysql than for postgres.


Oh, sure.  It's dead simple.

for table in $(mysql -N --batch -e 'select
concat(table_schema,'.',table_name) from information_schema.tables where
engine='MyISAM' and table_schema not in
('information_schema','mysql')'); do mysql -N --batch -e "alter table
$table engine=InnoDB" ; done


Keep in mind that on MySQL 5.1, you should preferably be using the
InnoDB plugin rather than the built-in InnoDB engine.  The plugin InnoDB
engine is newer and performs better.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Performance with many files

2011-07-07 Thread Adrian Reyer
On Wed, Jul 06, 2011 at 11:08:44AM -0400, Phil Stracchino wrote:
> for table in $(mysql -N --batch -e 'select
> concat(table_schema,'.',table_name) from information_schema.tables where
> engine='MyISAM' and table_schema not in
> ('information_schema','mysql')'); do mysql -N --batch -e "alter table
> $table engine=InnoDB" ; done

actually the outer ' quotes in the first mysql command need to be replaced
by ", or the inner ' quotes need to be escaped.
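With the outer quotes switched to double quotes, the loop reads:

for table in $(mysql -N --batch -e "select concat(table_schema,'.',table_name) from information_schema.tables where engine='MyISAM' and table_schema not in ('information_schema','mysql')"); do
    mysql -N --batch -e "alter table $table engine=InnoDB"
done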
However, for some reason mysql 5.1 with the compiled-in innodb worked on
the tables for a long time but never actually converted them to innodb. So
I just did a classic mysqldump, changed MyISAM to InnoDB in the dump, and
loaded it again. Speed improved many times over. My incremental backup
finished after just 10 minutes, while it took 2h earlier.

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



Re: [Bacula-users] Performance with many files

2011-08-11 Thread Adrian Reyer
On Fri, Jul 08, 2011 at 08:30:17AM +0200, Adrian Reyer wrote:
> Speed improved many many times. My incremental backup finished after
> just 10 minutes while it took 2h earlier.

This had been the benefit of using InnoDB over MyISAM. However, at 12GB
RAM and 8900 File entries (12GB file on disk) it became slow again,
and I took the step of converting to PostgreSQL.
While I only gave 8GB of memory to PostgreSQL, it is quite a bit faster
so far. A full backup that took <1 day a month ago, with fewer entries in
the database, was up to >3 days on MySQL this month. With PostgreSQL it is
down to <1 day again.
The hardware is the same system with 16GB RAM it has been before, serving
at the same time as iSCSI storage for the bacula-sd residing on another
box. The import read the dump at a constant 2MB/s, which I regarded as
somewhat slow, but I think the 'constant' is the important part here. I
did the migration using the Bacula manual and
http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/
http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/

Just to throw in some numbers that might help others.

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart


