Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
Hello,

If you have Attribute Spooling turned off, you will essentially insert records into the database one at a time. The database is locked around each insert, so even though multiple threads are running, they cannot run at the same time. If each job backs up a lot of files, the DB insertion of attributes will be very slow. If you have Attribute Spooling turned on, multiple simultaneous threads will be used to insert attributes (metadata) into the database, and that will be done in big batches. Provided the Jobs are not referencing the same data (most likely), there will be no global locks, so multiple DB inserts can run at the same time and they can be quite efficient. For best performance the DB must be tuned, as the default MySQL values are not optimal for Bacula.

Best regards,
Kern

On 15-03-24 11:18 AM, Phil Stracchino wrote:
> On 03/24/15 11:47, Robert Heinzmann wrote:
>> Hello,
>>
>> we have a rather large Bacula setup - ~600 Clients, database for catalog
>> is 19GB large. File table is ~65 million records.
>>
>> Right now, bacula director can not run backups as scheduled and delays
>> the jobs for up to 2 hours, as it seems the catalog is not keeping up.
>>
>> SD has not hit its storage IO limit, as I/O Wait is almost zero (24x
>> 10k spindles in Hardware Raid, doing ~50MB/s avg).
>>
>> Also we have defined 20 parallel backup drives doing backups, so drives
>> is not a blocker I guess.
>>
>> It seems that the bacula catalog is CPU bound and that only a single
>> core is used (of 4).
>>
>> How can I parallelize database access with Bacula Director to speed up
>> the backup process?
> If you have queries being executed serially by only a single connection,
> then they cannot be parallelized across multiple cores, because MySQL
> can use only a single core on a single query (trying to parallelize
> below the query level doesn't work out well).
> I don't know whether the Bacula director's DB accesses *could* be
> parallelized, but I suspect many of them cannot.

--
Dive into the World of Parallel Programming The Go Parallel Website, sponsored by Intel and developed in partnership with Slashdot Media, is your hub for all things parallel software development, from weekly thought leadership blogs to news, videos, case studies, tutorials and more. Take a look and join the conversation now. http://goparallel.sourceforge.net/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
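[Editor's note: the Attribute Spooling that Kern describes is switched on per Job (or in a JobDefs resource); a minimal sketch, with the job name as a placeholder:]

```
Job {
  Name = "example-backup"   # placeholder name
  # Spool file attributes at the SD during the backup and insert them
  # into the catalog in one batch when the job finishes, instead of
  # doing one catalog insert per file while the backup runs.
  SpoolAttributes = yes
}
```

Note that Bacula also enables attribute spooling implicitly when data spooling (SpoolData = yes) is in use.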
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
On 03/24/15 11:47, Robert Heinzmann wrote:
> Hello,
>
> we have a rather large Bacula setup - ~600 Clients, database for catalog
> is 19GB large. File table is ~65 million records.
>
> Right now, bacula director can not run backups as scheduled and delays
> the jobs for up to 2 hours, as it seems the catalog is not keeping up.
>
> SD has not hit its storage IO limit, as I/O Wait is almost zero (24x
> 10k spindles in Hardware Raid, doing ~50MB/s avg).
>
> Also we have defined 20 parallel backup drives doing backups, so drives
> is not a blocker I guess.
>
> It seems that the bacula catalog is CPU bound and that only a single
> core is used (of 4).
>
> How can I parallelize database access with Bacula Director to speed up
> the backup process?

If you have queries being executed serially by only a single connection, then they cannot be parallelized across multiple cores, because MySQL can use only a single core on a single query (trying to parallelize below the query level doesn't work out well). I don't know whether the Bacula director's DB accesses *could* be parallelized, but I suspect many of them cannot.

--
Phil Stracchino
Babylon Communications
ph...@caerllewys.net
p...@co.ordinate.org
Landline: 603.293.8485
Re: [Bacula-users] Volumes purged but not recycled
On Tue, Mar 24, 2015 at 1:04 PM, Michael Schwager <mschwa...@mochotrading.com> wrote:
> The question then is: How do I get a Purged, Recycle-able tape to get
> written to starting at the beginning? Do I need to delete it and re-add it?

Oops - I figured it out. The order of operations that was performed was:

* Purge (Recycle = 0)
* Recycle = 1 (I did this, but Bacula did nothing to the tape)
* status=Append (I did this)

...At that point, we had a purged, appendable volume and, like you say, Bacula appended to it. Today it was already Recycle=1, and the data backed up last night was basically useless, so I re-purged it. Bacula took it immediately and I can see that it's writing to the tape from the beginning.

Thanks for the help.

*- Mike Schwager*
* Linux Network Engineer, Mocho Trading LLC*
* 312-646-4783 Phone / 312-637-0011 Cell / 312-957-9804 Fax*

--
This message is for the named person(s) use only. It may contain confidential, proprietary or legally privileged information. No confidentiality or privilege is waived or lost by any mistransmission. If you receive this message in error, please immediately delete it and all copies of it from your system, destroy any hard copies of it and notify the sender. You must not, directly or indirectly use, disclose, distribute, print, or copy any part of this message if you are not the intended recipient. Mocho Trading LLC reserves the right to monitor all e-mail communications through its networks. Any views expressed in this message are those of the individual sender, except where the message states otherwise and the sender is authorized to state them to be the views of any such entity.
Re: [Bacula-users] Volumes purged but not recycled
Hi Michael,

Bacula will only recycle a volume when there is no appendable volume in the pool. If automatic volume labeling is configured, it will try to label a new one. If neither of these happens, then it will try to recycle one. And when Bacula recycles a volume, it overwrites the contents of that volume (disk or tape) from the beginning. In the case you described here, you set the status to Append, so Bacula started to write at the end of the volume. Basically, Bacula will try to recycle volumes in Purged, Recycle, Used or Full status according to the recycling algorithm. If you had marked the volume Purged, then Bacula would have recycled it.

Best regards,
Ana

On Tue, Mar 24, 2015 at 3:04 PM, Michael Schwager <mschwa...@mochotrading.com> wrote:
> On Mon, Mar 23, 2015 at 10:43 AM, Ana Emília M. Arruda <
> emiliaarr...@gmail.com> wrote:
>
>> When the volumes 16 to 26 were created, did you have "Recycle = yes"
>> configured in this pool? It seems that these volumes were created without
>> this option configured in your pool. Every change you make in your pool
>> configuration must be propagated to the existing volumes in the catalog:
>> update -> pool from resource. And to update all the existing volumes to the
>> new configuration: update volume -> all volumes from pool (you can use
>> from all pools if you change more than one pool).
>
> Thanks, Ana. That would explain my one issue. I'm sure I did not have it
> configured in the pool. I have updated all the volumes. I thought "Recycle"
> was an indicator of state and not an indicator of capability. Now I
> understand.
>
> But I had marked the volume with MediaID 16 with Recycle=Yes, and then I
> marked it with Append. Then Bacula appended a few files to it, but it ran
> out of room almost immediately. I guess that makes sense.
>
> The question then is: How do I get a Purged, Recycle-able tape to get
> written to starting at the beginning? Do I need to delete it and re-add it?
>
> *- Mike Schwager*
> * Linux Network Engineer, Mocho Trading LLC*
> * 312-646-4783 Phone / 312-637-0011 Cell / 312-957-9804 Fax*
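[Editor's note: the order that worked for Mike — mark the volume recyclable, then purge it — can be done with two bconsole commands; a sketch, where Vol0016 is a placeholder volume name and the exact update syntax may vary slightly by Bacula version:]

```
* update volume=Vol0016 recycle=yes
* purge volume=Vol0016
```

With the volume Purged and Recycle set, the next job needing an appendable volume in that pool recycles it and writes from the start of the tape; no delete/re-add is needed.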
Re: [Bacula-users] Bacula 5.2.13 ClientRunAfterJob
Thank you for the replies. Seems that an external script is the way to go.

Thank you,

On Sat, Mar 21, 2015 at 10:22 PM, Davide Franco wrote:
> Hi Peter,
>
> As Kern mentioned, I'd suggest a shell script instead of the whole command
> in the Job definition.
> I do this on my jobs and it works like a charm.
>
> Hope it will help you to solve your problem.
>
> Best regards
>
> Davide
>
> On Sat, Mar 21, 2015 at 8:28 PM, Kern Sibbald wrote:
>
>> Hello,
>>
>> Bacula does not run a shell when it executes a "command" as you are
>> doing. Consequently, shell characters such as * will be treated as
>> themselves rather than as shell characters (in this case a wild card),
>> because such interpretation is done by the shell. So the solution is
>> either to prefix the command with a call to a shell (lots of escaping to
>> do) or, *much* simpler, to run the command in a shell script that you
>> execute. If I am not mistaken, there are nice examples in the manual.
>> Obviously (I hope), if you can be more specific and avoid wild cards you
>> will not need a shell command prefix or a script. I am not patient
>> enough to work out all the escaping to pass shell characters correctly
>> to the shell, so I always use a script when doing complicated stuff.
>>
>> Best regards,
>> Kern
>>
>> On 15-03-19 06:54 AM, Peter Wood wrote:
>>
>> Thank you for the reply.
>>
>> Yes, I've done all the debug steps I can think of.
>>
>> Permissions is a good point though. The file permissions are 600
>> root:root. Only root can remove the file.
>>
>> The bacula-fd process runs as root, so I'm expecting it will be able to
>> remove the file, right?
>>
>> SELinux is disabled.
>>
>> On Thu, Mar 19, 2015 at 4:39 AM, Ana Emília M. Arruda <
>> emiliaarr...@gmail.com> wrote:
>>
>>> Hi Peter,
>>>
>>> Have you checked whether the job fails or not? The Client Run After Job
>>> does not run if the job fails. You can use the below if you want the
>>> script to run whether the job fails or not:
>>>
>>> Run Script {
>>>   RunsWhen = After
>>>   RunsOnFailure = yes
>>>   Command = "/bin/rm -f /backup/daily/mysql-Slave*"
>>> }
>>>
>>> Have you tried to run the command at the command line? Have you
>>> checked permissions? Have you checked any messages in Bacula's log
>>> file? Is the mysqldump generating the file at /backup/daily/?
>>>
>>> Best regards,
>>> Ana
>>>
>>> On Wed, Mar 18, 2015 at 11:54 PM, Peter Wood wrote:
>>>
>>>> In Bacula 5.2.13 I use Client Run Before Job = "mysqldump " to dump
>>>> the database before the backup starts. After it is complete I use
>>>> Client Run After Job = "/bin/rm -f /backup/daily/mysql-Slave*"
>>>> with the intent to remove the backup file due to lack of free space to
>>>> keep more than one backup. The backup report shows that it ran the
>>>> command:
>>>> 18-Mar 09:39 db1-fd JobId 12980: shell command: run ClientAfterJob "/bin/rm -f /backup/daily/mysql-Slave*"
>>>> Unfortunately the file is not removed. Any idea? Is it the use of the
>>>> wildcard?
>>>> Thanks,
>>>> -- Peter
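[Editor's note: Kern's script approach can be sketched as a one-line wrapper; the script path and dump directory are placeholders for whatever the Job actually uses:]

```shell
#!/bin/sh
# Invoked via: Client Run After Job = "/usr/local/bin/rm-mysqldump.sh"
# (hypothetical script path). Because a shell runs this file, the *
# wildcard is expanded here, which Bacula's bare "command" form never does.
DUMP_DIR="${DUMP_DIR:-/backup/daily}"
rm -f "$DUMP_DIR"/mysql-Slave*
```

Since the script runs as the bacula-fd user (root here), it can remove the root-owned 600 dump files.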
Re: [Bacula-users] Volumes purged but not recycled
On Mon, Mar 23, 2015 at 10:43 AM, Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
> When the volumes 16 to 26 were created, did you have "Recycle = yes"
> configured in this pool? It seems that these volumes were created without
> this option configured in your pool. Every change you make in your pool
> configuration must be propagated to the existing volumes in the catalog:
> update -> pool from resource. And to update all the existing volumes to the
> new configuration: update volume -> all volumes from pool (you can use
> from all pools if you change more than one pool).

Thanks, Ana. That would explain my one issue. I'm sure I did not have it configured in the pool. I have updated all the volumes. I thought "Recycle" was an indicator of state and not an indicator of capability. Now I understand.

But I had marked the volume with MediaID 16 with Recycle=Yes, and then I marked it with Append. Then Bacula appended a few files to it, but it ran out of room almost immediately. I guess that makes sense.

The question then is: How do I get a Purged, Recycle-able tape to get written to starting at the beginning? Do I need to delete it and re-add it?

*- Mike Schwager*
* Linux Network Engineer, Mocho Trading LLC*
* 312-646-4783 Phone / 312-637-0011 Cell / 312-957-9804 Fax*
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
IMHO there are several MySQL tunings that can be made, but one that I tested, which improved write times by a factor of more than 1000, is relaxing synchronous transaction flushing:

innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT

Regards,
==
Heitor Medrado de Faria - LPIC-III | ITIL-F
April 1-13 - New Bacula telepresence training: http://www.bacula.com.br/?p=2174
61 8268-4220
Site: www.bacula.com.br | Facebook: heitor.faria | Gtalk: heitorfa...@gmail.com
===

- Original Message -
> From: "Josh Fisher"
> To: bacula-users@lists.sourceforge.net
> Sent: Tuesday, March 24, 2015 14:16:06
> Subject: Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
>
> On 3/24/2015 12:57 PM, Robert Heinzmann wrote:
> > One more question for clarification:
> >
> > Attribute spooling only solves "insert and update issues" and not select
> > issues, right?
> >
> > Our monitoring shows that we only have 10-20 inserts/s for the DB, but
> > the selects are capped at ~370 selects/second during peak times.
>
> Look into enabling the innodb_file_per_table setting. InnoDB uses 4
> threads as i/o threads in addition to the connection manager thread and
> one thread per connection. Each table in a separate file means that
> indices on those tables are also separated and may allow for more
> parallel i/o. But still, attribute spooling is needed to optimize DB
> transactions.
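[Editor's note: the two settings above go in the [mysqld] section of my.cnf; a sketch, with the usual caveat that innodb_flush_log_at_trx_commit = 0 trades durability for speed — up to the last second of committed transactions can be lost on a crash:]

```
[mysqld]
# Flush the InnoDB log to disk once per second instead of at every COMMIT.
innodb_flush_log_at_trx_commit = 0
# Write data files with O_DIRECT, bypassing the OS page cache.
innodb_flush_method = O_DIRECT
```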
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
On 3/24/2015 12:57 PM, Robert Heinzmann wrote:
> One more question for clarification:
>
> Attribute spooling only solves "insert and update issues" and not select
> issues, right?
>
> Our monitoring shows that we only have 10-20 inserts/s for the DB, but
> the selects are capped at ~370 selects/second during peak times.

Look into enabling the innodb_file_per_table setting. InnoDB uses 4 threads as i/o threads in addition to the connection manager thread and one thread per connection. Each table in a separate file means that indices on those tables are also separated, which may allow for more parallel i/o. But still, attribute spooling is needed to optimize DB transactions.
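[Editor's note: a sketch of the setting Josh mentions. It only takes effect for tables created after the change, so an existing Bacula catalog would need its tables rebuilt (e.g. with ALTER TABLE ... ENGINE=InnoDB) to move into per-table files:]

```
[mysqld]
innodb_file_per_table = 1
```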
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
> One more question for clarification:
>
> Attribute spooling only solves "insert and update issues" and not select
> issues, right?

Correct.

> Our monitoring shows that we only have 10-20 inserts/s for the DB, but
> the selects are capped at ~370 selects/second during peak times.

I would expect very few selects during a backup and very many inserts. Every single file that is backed up should be an insert. With attribute spooling this changes into a batch insert after the job has finished.

John
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
One more question for clarification:

Attribute spooling only solves "insert and update issues" and not select issues, right?

Our monitoring shows that we only have 10-20 inserts/s for the DB, but the selects are capped at ~370 selects/second during peak times.

Regards,
Robert
Re: [Bacula-users] storage authentication for new client / job
Hi Gary,

Your Bacula storage daemon is running an older version than your new client:

storage: zeppo-sd Version: 5.0.0 (26 January 2010) x86_64-redhat-linux-gnu redhat
client: eddienew-fd Version: 7.0.5 (28 July 2014) x86_64-redhat-linux-gnu redhat One)

It is not recommended to have a client running a later version than the director/storage. Would a client version downgrade be possible?

Best regards,
Ana

On Tue, Mar 24, 2015 at 12:25 PM, Gary Stainburn wrote:
> Hi folks.
>
> I've set up a new server 'eddienew' which is going to replace 'eddie'.
>
> I've copied /etc/bacula/clients/eddie.conf to eddienew.conf and edited
> accordingly. The results are below. All I have done is change every
> instance of 'eddie' to 'eddienew' and, for completeness, changed the
> passwords.
>
> The bacula-fd.conf on eddienew was copied and edited likewise.
>
> I can successfully do:
> status client=eddienew-fd
> status client=eddie-fd
> status storage=zeppo-servers
> run job=eddie
>
> However, when I try run job=eddienew it fails with authorization errors
> on the storage.
>
> *** eddienew.conf
> Job {
>   Name = "eddienew"
>   JobDefs = "LinuxJob"
>   enabled = yes
>   Client = eddienew-fd
>   Schedule = "Week2"
>   Storage = zeppo-servers
>   pool = Servers
>   Messages = Standard
>   Write Bootstrap = "/var/bacula/eddienew-bootstrap"
> }
> Client {
>   Name = eddienew-fd
>   Address = 10.1.1.226
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = ""
>   File Retention = 90 days    # 90 days
>   Job Retention = 6 months    # six months
>   AutoPrune = yes             # Prune expired Jobs/Files
> }
> ***
>
> *** Session Log
> *status client=eddie-fd
> Connecting to Client eddie-fd at 10.1.1.115:9102
>
> eddie-fd Version: 2.4.2 (26 July 2008) i686-pc-linux-gnu redhat
> Daemon started 03-Mar-15 20:39, 22 Jobs run since started.
> Heap: heap=958,464 smbytes=81,388 max_bytes=488,803 bufs=72 max_bufs=187
> Sizeof: boffset_t=8 size_t=4 debug=0 trace=0
>
> Running Jobs:
> Director connected at: 24-Mar-15 15:18
> No Jobs running.
>
> Terminated Jobs:
> JobId  Level  Files  Bytes  Status  Finished  Name
> ==
> 110658 Incr483345.8 M OK 16-Mar-15 19:15 eddie
> 110768 Incr442683.3 M OK 17-Mar-15 19:25 eddie
> 110891 Incr456275.8 M OK 18-Mar-15 19:13 eddie
> 111006 Incr418345.8 M OK 19-Mar-15 19:15 eddie
> 52 Incr420210.6 M OK 20-Mar-15 16:39 eddie
> 67 Diff 1,030744.8 M OK 20-Mar-15 19:34 eddie
> 111278 Incr 4551.97 M OK 21-Mar-15 19:07 eddie
> 111388 Incr 3930.25 M OK 22-Mar-15 19:06 eddie
> 111506 Incr611727.6 M OK 23-Mar-15 19:26 eddie
> 111611 Incr329308.9 M OK 24-Mar-15 15:02 eddie
>
> *status client=eddienew-fd
> Connecting to Client eddienew-fd at 10.1.1.226:9102
>
> eddienew-fd Version: 7.0.5 (28 July 2014) x86_64-redhat-linux-gnu redhat One)
> Daemon started 24-Mar-15 15:04. Jobs: run=0 running=0.
> Heap: heap=135,168 smbytes=149,513 max_bytes=153,951 bufs=54 max_bufs=110
> Sizes: boffset_t=8 size_t=8 debug=0 trace=0 mode=0,0 bwlimit=0kB/s
>
> Running Jobs:
> Director connected at: 24-Mar-15 15:18
> No Jobs running.
>
> Terminated Jobs:
>
> *status storage=zeppo-servers
> Connecting to Storage daemon zeppo-servers at zeppo.ringways.co.uk:9103
>
> zeppo-sd Version: 5.0.0 (26 January 2010) x86_64-redhat-linux-gnu redhat
> Daemon started 24-Mar-15 14:57, 3 Jobs run since started.
> Heap: heap=135,168 smbytes=92,913 max_bytes=227,995 bufs=90 max_bufs=111
> Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8
>
> Running Jobs:
> No Jobs running.
>
> Jobs waiting to reserve a drive:
>
> Terminated Jobs:
> JobId  Level  Files  Bytes  Status  Finished  Name
> ===
> 111498 Full 0 0 Other 24-Mar-15 13:58 eddienew
> 111498 Full 0 0 Other 24-Mar-15 14:08 eddienew
> 111567 Incr 0 0 Other 24-Mar-15 14:16 wsales6
> 111498 Full 0 0 Other 24-Mar-15 14:18 eddienew
> 111498 Full 0 0 Other 24-Mar-15 14:29 eddienew
> 111498 Full 0 0 Other 24-Mar-15 14:39 eddienew
> 111609 Full 0 0 Cancel 24-Mar-15 14:57 eddienew
> 111610 Full 0 0 Cancel 24-Mar-15 14:58 eddienew
> 111611 Incr329309.0 M OK 24-Mar-15 15:04 eddie
> 111613 Full 0 0 Cancel 24-Mar-15 15:13 eddienew
>
> Device status:
> Device "ServiceStorage" (/var/bacula/service) is not open.
> Device "ServersStorage" (/var/bacula/servers) is not open.
>
> Used Volume status:
>
> Data spooling: 0 active jobs, 0 bytes
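[Editor's note: apart from the version mismatch Ana points out, authorization errors between the Director and SD usually come down to the password pair; a sketch of the two resources that must agree, with "sd-secret" as a placeholder and the Media Type assumed:]

```
# bacula-dir.conf (Director side)
Storage {
  Name = zeppo-servers
  Address = zeppo.ringways.co.uk
  SDPort = 9103
  Password = "sd-secret"   # must equal the SD's Director password below
  Device = ServersStorage
  Media Type = File        # assumption; match the SD's Device resource
}

# bacula-sd.conf (on zeppo)
Director {
  Name = eddie1-dir
  Password = "sd-secret"   # same value as above
}
```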
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
> How much space do I need in the working directory of the SD to do
> attribute spooling?

I expect for your dataset less than 1GB.

John
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
Hello,

> If you do not have attribute spooling enabled, enable that. Also tune
> your mysql database. The defaults in a lot of systems assume you have
> only 64MB of ram or similar.

The database is already on InnoDB and fast BBU-backed storage, and has "innodb_buffer_pool_size" > dataset size, so everything comes from memory. Also, I don't see many IOs: the catalog has < 1% IO wait during the heavy job times at night, while we have 25% (1 core) of user CPU.

How much space do I need in the working directory of the SD to do attribute spooling?

Regards,
Robert
[Bacula-users] storage authentication for new client / job
Hi folks. I've set no a new server 'eddienew' which is going to replace 'eddie' I've copied /etc/bacula/clients/eddie.conf to eddienew.conf and edited accordingly. The results are below. All I have done is change every instance of 'eddie' to 'eddienew' and for completeness changed the passwords. The bacula-fd.conf on eddienew was copied and edited likewise. I can successfully do: status client=eddienew-fd status client=eddie-fd status storage=zeppo-servers run job=eddie However, when I try run job=eddienew it fails with autorization errors on the storeage. *** eddienew.conf Job { Name = "eddienew" JobDefs = "LinuxJob" enabled = yes Client = eddienew-fd Schedule = "Week2" Storage = zeppo-servers pool = Servers Messages = Standard Write Bootstrap = "/var/bacula/eddienew-bootstrap" } Client { Name = eddienew-fd Address = 10.1.1.226 FDPort = 9102 Catalog = MyCatalog Password = "" File Retention = 90 days# 90 days Job Retention = 6 months# six months AutoPrune = yes # Prune expired Jobs/Files } *** *** Session Log *status client=eddie-fd Connecting to Client eddie-fd at 10.1.1.115:9102 eddie-fd Version: 2.4.2 (26 July 2008) i686-pc-linux-gnu redhat Daemon started 03-Mar-15 20:39, 22 Jobs run since started. Heap: heap=958,464 smbytes=81,388 max_bytes=488,803 bufs=72 max_bufs=187 Sizeof: boffset_t=8 size_t=4 debug=0 trace=0 Running Jobs: Director connected at: 24-Mar-15 15:18 No Jobs running. 
Terminated Jobs: JobId LevelFiles Bytes Status FinishedName == 110658 Incr483345.8 M OK 16-Mar-15 19:15 eddie 110768 Incr442683.3 M OK 17-Mar-15 19:25 eddie 110891 Incr456275.8 M OK 18-Mar-15 19:13 eddie 111006 Incr418345.8 M OK 19-Mar-15 19:15 eddie 52 Incr420210.6 M OK 20-Mar-15 16:39 eddie 67 Diff 1,030744.8 M OK 20-Mar-15 19:34 eddie 111278 Incr 4551.97 M OK 21-Mar-15 19:07 eddie 111388 Incr 3930.25 M OK 22-Mar-15 19:06 eddie 111506 Incr611727.6 M OK 23-Mar-15 19:26 eddie 111611 Incr329308.9 M OK 24-Mar-15 15:02 eddie *status client=eddienew-fd Connecting to Client eddienew-fd at 10.1.1.226:9102 eddienew-fd Version: 7.0.5 (28 July 2014) x86_64-redhat-linux-gnu redhat One) Daemon started 24-Mar-15 15:04. Jobs: run=0 running=0. Heap: heap=135,168 smbytes=149,513 max_bytes=153,951 bufs=54 max_bufs=110 Sizes: boffset_t=8 size_t=8 debug=0 trace=0 mode=0,0 bwlimit=0kB/s Running Jobs: Director connected at: 24-Mar-15 15:18 No Jobs running. Terminated Jobs: *status storage=zeppo-servers Connecting to Storage daemon zeppo-servers at zeppo.ringways.co.uk:9103 zeppo-sd Version: 5.0.0 (26 January 2010) x86_64-redhat-linux-gnu redhat Daemon started 24-Mar-15 14:57, 3 Jobs run since started. Heap: heap=135,168 smbytes=92,913 max_bytes=227,995 bufs=90 max_bufs=111 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 Running Jobs: No Jobs running. Jobs waiting to reserve a drive: Terminated Jobs: JobId LevelFiles Bytes Status FinishedName === 111498 Full 0 0 Other24-Mar-15 13:58 eddienew 111498 Full 0 0 Other24-Mar-15 14:08 eddienew 111567 Incr 0 0 Other24-Mar-15 14:16 wsales6 111498 Full 0 0 Other24-Mar-15 14:18 eddienew 111498 Full 0 0 Other24-Mar-15 14:29 eddienew 111498 Full 0 0 Other24-Mar-15 14:39 eddienew 111609 Full 0 0 Cancel 24-Mar-15 14:57 eddienew 111610 Full 0 0 Cancel 24-Mar-15 14:58 eddienew 111611 Incr329309.0 M OK 24-Mar-15 15:04 eddie 111613 Full 0 0 Cancel 24-Mar-15 15:13 eddienew Device status: Device "ServiceStorage" (/var/bacula/service) is not open. 
Device "ServersStorage" (/var/bacula/servers) is not open.

Used Volume status:

Data spooling: 0 active jobs, 0 bytes; 1 total jobs, 309,413,234 max bytes/job.
Attr spooling: 0 active jobs, 0 bytes; 1 total jobs, 88,237 max bytes.

*run job=eddienew
Using Catalog "MyCatalog"
Run Backup job
JobName:  eddienew
Level:    Incremental
Client:   eddienew-fd
FileSet:  Linux Full
Pool:     Servers (From Job resource)
Storage:  zeppo-servers (From Job resource)
When:     2015-03-24 15:21:14
Priority: 7
OK to run? (yes/mod/no): yes
Job queued. JobId=111615
24-Mar 15:21 eddie1-dir JobId 111615: No prior Full backup Job record found.
24-Mar 15:21 eddie1-dir JobId 111615: No prior or suitable Full backup found in catalog. Doing FULL backup.
24-Mar 15:21 eddie1-dir JobId 111615: shell command: run BeforeJob "/etc
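For what it's worth, authorization errors against the storage daemon usually mean the password strings on the two sides of a connection don't match. Since status commands work from the console, the Director-to-SD pairing to double-check is the one below. This is a minimal sketch with placeholder names and passwords, not taken from the actual configs in this thread:

```
# bacula-dir.conf (Director side) -- hypothetical values
Storage {
  Name = zeppo-servers
  Address = zeppo.ringways.co.uk
  SD Port = 9103
  Password = "sd-secret"      # must match the SD's Director resource below
  Device = ServersStorage
  Media Type = File
}

# bacula-sd.conf (Storage daemon side)
Director {
  Name = eddie1-dir
  Password = "sd-secret"      # same string as in the Storage resource above
}
```

The same pairing exists between the Director's Client resource and the FD's Director resource; a password changed on only one side produces the same symptom.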
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
> we have a rather large Bacula setup - ~600 Clients, database for catalog
> is 19 GB. File table is ~65 million records.
>
> Right now, the bacula director cannot run backups as scheduled and delays
> the jobs for up to 2 hours, as it seems the catalog is not keeping up.
>
> CPU usage on SD is: 25% (0.5 core)
> CPU usage on Dir is: 10% (0.2 core)
> CPU usage on Catalog is: 25% (1 core)
>
> The SD has not hit its storage I/O limit, as I/O wait is almost zero (24x
> 10k spindles in hardware RAID, doing ~50 MB/s avg).
>
> We have also defined 20 parallel backup drives doing backups, so the
> drives are not the bottleneck, I guess.
>
> It seems that the Bacula catalog is CPU bound and that only a single core
> is used (of 4).
>
> How can I parallelize database access with the Bacula Director to speed
> up the backup process?

If you do not have attribute spooling enabled, enable that. Also tune your MySQL database: the defaults on a lot of systems assume you have only 64 MB of RAM or similar.

John

------------------------------------------------------------------------------
Dive into the World of Parallel Programming The Go Parallel Website, sponsored by Intel and developed in partnership with Slashdot Media, is your hub for all things parallel software development, from weekly thought leadership blogs to news, videos, case studies, tutorials and more. Take a look and join the conversation now. http://goparallel.sourceforge.net/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
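Attribute spooling is enabled per Job (or in a shared JobDefs) in the Director configuration. A minimal sketch, with a placeholder resource name:

```
# bacula-dir.conf -- spool file metadata to a local file during the
# job and insert it into the catalog in one large batch at job end,
# instead of one row-at-a-time inserts while the backup runs.
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Spool Attributes = yes    # batch catalog inserts per job
}
```

With spooling on, several jobs' batch inserts can proceed concurrently (provided they don't contend on the same rows), which is where the extra cores help.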
[Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
Hello,

we have a rather large Bacula setup - ~600 Clients; the catalog database is 19 GB. The File table is ~65 million records.

Right now, the bacula director cannot run backups as scheduled and delays the jobs for up to 2 hours, as it seems the catalog is not keeping up.

CPU usage on SD is: 25% (0.5 core)
CPU usage on Dir is: 10% (0.2 core)
CPU usage on Catalog is: 25% (1 core)

The SD has not hit its storage I/O limit, as I/O wait is almost zero (24x 10k spindles in hardware RAID, doing ~50 MB/s avg).

We have also defined 20 parallel backup drives doing backups, so the drives are not the bottleneck, I guess.

It seems that the Bacula catalog is CPU bound and that only a single core is used (of 4).

How can I parallelize database access with the Bacula Director to speed up the backup process?

Regards,
Robert
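The replies in this thread point at MySQL tuning as the other half of the fix. As a rough starting point for an InnoDB catalog of this size, a my.cnf fragment might look like the following. The values are illustrative assumptions for a dedicated 4-core box with a few GB of RAM, not figures recommended anywhere in the thread:

```
# /etc/my.cnf -- illustrative values for a ~19 GB Bacula catalog
[mysqld]
innodb_buffer_pool_size        = 4G    # cache a useful fraction of the File table and its indexes
innodb_log_file_size           = 512M  # larger redo log smooths big batch inserts
innodb_flush_log_at_trx_commit = 2     # trade strict durability for insert throughput
max_allowed_packet             = 64M   # batched attribute inserts can be large
```

Note that raising the buffer pool or log file size does not make a single query use more than one core; it reduces the per-insert cost so the serial portions finish sooner.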