Re: [Bacula-users] very slow virtualfull job
On Thu, Apr 25, 2013 at 11:41:37PM +0000, James Harper wrote:
> > > What could have gone wrong with my mysql to make this happen? I've
> > > tried rebooting it.
> >
> > You very likely use MySQL with MyISAM tables. This is a very bad
> > combination for bacula. It will be better with InnoDB tables and a
> > correctly tuned MySQL for this many inserts. However, postgres can be
> > tuned the same way and has the additional benefit of being able to use
> > parts of indices. As every insert has to update the tables and the
> > indices, you have far fewer writes with postgres. I went the way
> > MySQL 4GB MyISAM -> 12GB MyISAM -> 12GB InnoDB -> Postgres 12GB myself.
>
> How difficult is it to convert an existing installation over to
> postgresql? I've been meaning to do this for a while and it may be
> faster than trying to resolve the issue...

It is not that hard. If I remember right, the easiest way is something
like this:

1. create a new postgres bacula db
2. dump the table contents from mysql
3. modify the dump to suit postgres
4. insert the dump into postgres

Steps 2-4 I did in one line, issuing something like
mysqldump ... | sed ... | psql. The main thing I had to do with 'sed'
was to replace the different zero timestamps: you get
0000-00-00 00:00:00 with mysql, and postgres expects
1970-01-01 00:00:00 instead, if I remember correctly. I think I used
http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/
back then as a hint, but it needed even further adjustments, such as the
sequences.
http://www.bacula.org/manuals/en/catalog/catalog/Installi_Configur_PostgreS.html
is the base for the above post.

Regards,
Adrian

--
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax: +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting Support - USt-ID: DE 227 816 626 Stuttgart

--
Try New Relic Now We'll Send You this Cool Shirt
New Relic is the only SaaS-based application performance monitoring service
that delivers powerful full stack analytics. Optimize and monitor your
browser, app, servers with just a few lines of code. Try New Relic and get
this awesome Nerd Life shirt! http://p.sf.net/sfu/newrelic_d2d_apr
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
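[Editor's note] The steps above can be sketched as a single pipeline. Below is a runnable sketch of just the sed rewrite the post describes; the table name and values are made up for illustration, and the real run would look like `mysqldump bacula | sed -f fixes.sed | psql bacula` against your own catalog, possibly with further adjustments such as resetting sequences.

```shell
# MySQL dumps "zero" timestamps as 0000-00-00 00:00:00, which PostgreSQL
# rejects; rewrite them to the epoch before the dump reaches psql.
sample='INSERT INTO Job VALUES (1,"0000-00-00 00:00:00","backup");'
echo "$sample" | sed 's/0000-00-00 00:00:00/1970-01-01 00:00:00/g'
# -> INSERT INTO Job VALUES (1,"1970-01-01 00:00:00","backup");
```

In the real pipeline the same substitution runs over the whole dump, so it is worth checking first that no legitimate data contains that literal string.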
Re: [Bacula-users] very slow virtualfull job
On Thu, Apr 25, 2013 at 11:41:37PM +0000, James Harper wrote:
> > What could have gone wrong with my mysql to make this happen? I've
> > tried rebooting it.
>
> You very likely use MySQL with MyISAM tables. This is a very bad
> combination for bacula. It will be better with InnoDB tables and a
> correctly tuned MySQL for this many inserts. However, postgres can be
> tuned the same way and has the additional benefit of being able to use
> parts of indices. As every insert has to update the tables and the
> indices, you have far fewer writes with postgres. I went the way
> MySQL 4GB MyISAM -> 12GB MyISAM -> 12GB InnoDB -> Postgres 12GB myself.

I converted to innodb last time this happened. It didn't fix it, but the
problem went away by itself shortly after, so I never got a chance to
investigate further.

James
[Bacula-users] question about job resource
Hello,

I am wondering about the relationship between the default job and the
general jobs for the other clients that are configured. It was my
impression that the default job sets the basic specifications, that each
job that is configured inherits the default parameters, and that anything
specified in a client job is an override of the default job?

The reason I ask is that I have in my default job:

JobDefs {
  Name = DefaultJob
  Type = Backup
  Level = Incremental
  Client = d0lppb021
  FileSet = "Standard Linux OS"
  Schedule = DailyCycle
  Storage = File
  Messages = Standard
  Pool = FullBackups
  Priority = 10
  Write Bootstrap = "/var/spool/bacula/%c.bsr"
  Enabled = No
  Reschedule On Error = yes
  Reschedule Interval = 30 minutes
  Reschedule Times = 4
  Spool Data = no
  Spool Attributes = yes
  Allow Higher Duplicates = no
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes
}

And then all my client jobs basically look like this:

Job {
  Name = "D0LPHB040 System Backups"
  Type = Backup
  Level = Full
  Client = d0lphb040
  FileSet = "Legacy Platform wData Backups"
  Schedule = DailyCycle
  Storage = File
  Pool = FullBackups
  Messages = Standard
  Priority = 30
}

However, I see that things like Reschedule Interval and Reschedule Times
are not being honored, as today the job started at 7:45am (the client was
down) and by 7:53 the job was terminated as failed. So.. are settings
supposed to be inherited, or am I supposed to be putting all lines in the
jobs for all clients?

Thanks,

This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to
bind SKOPOS to any order or other contract unless pursuant to explicit
written agreement or government initiative expressly permitting the use of
e-mail for such purpose.
[Bacula-users] Autochanger waiting on max Storage jobs Issues
Marcin,

Thanks for the help. That's exactly what the problem was, and now I'm
running multiple jobs using the same pool.

+--
|This was sent by jbu...@themxgroup.com via Backup Central.
+--
Re: [Bacula-users] question about job resource
Hi Jonathan,

You have to put a "JobDefs = JobDefs-Resource-Name" directive in each Job
definition. For example, for your above job definition:

Job {
  Name = "D0LPHB040 System Backups"
  JobDefs = DefaultJob
  Type = Backup
  Level = Full
  Client = d0lphb040
  FileSet = "Legacy Platform wData Backups"
  Schedule = DailyCycle
  Storage = File
  Pool = FullBackups
  Messages = Standard
  Priority = 30
}

Regards,
Ana

On Fri, Apr 26, 2013 at 10:08 AM, Jonathan Horne jho...@skopos.us wrote:
> [...] I see that things like reschedule interval and reschedule times
> are not being honored, as today the job started at 7:45am (the client
> was down) and by 7:53 the job was terminated as failed. So.. are
> settings supposed to be inherited, or am I supposed to be putting all
> lines in the jobs for all clients?
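[Editor's note] A minimal sketch of the resulting semantics (the resource and job names below are illustrative, not from the thread): a Job inherits every directive it does not set itself from the JobDefs resource it names, any directive it does set overrides the JobDefs value, and without the JobDefs line nothing is inherited at all.

```
JobDefs {
  Name = DefaultJob
  Level = Incremental              # inherited unless the Job overrides it
  Reschedule On Error = yes        # inherited by every Job naming this JobDefs
  Reschedule Interval = 30 minutes
  Reschedule Times = 4
}

Job {
  Name = "example-client backup"   # hypothetical Job
  JobDefs = DefaultJob             # without this line, nothing is inherited
  Level = Full                     # overrides the Incremental from JobDefs
}
```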
[Bacula-users] Automated reports
Hi,

I'm looking for a way to get daily and weekly automated reports. I've
found a few different projects:

- send_bacula_backup_report-0.4: sends a list of all jobs in the past
  x days
- bacula-reports: works by consolidating emails; I'll know if it works
  tomorrow.
- breport: Java based, but it doesn't seem to be supported and I haven't
  been able to make it work properly

Does anyone have any better suggestions? We'd like these to run
automatically from a cron job.

Thanks in advance.

JBB
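[Editor's note] Absent a dedicated reporting tool, a plain cron entry that pipes a bconsole query into mail is a common minimal fallback. A sketch, assuming default paths and a hypothetical recipient address:

```
# /etc/cron.d/bacula-report -- hypothetical entry; adjust paths, schedule
# and address to your site. "list jobs" prints the catalog's job table.
30 7 * * *  root  echo "list jobs" | /usr/sbin/bconsole -c /etc/bacula/bconsole.conf | mail -s "Bacula daily report" admin@example.com
```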
Re: [Bacula-users] Bacula-users Digest, Vol 84, Issue 10
Hi,

Regarding my long-running efforts to restore a lost catalog from
mid-February of this year, I believe I've finally succeeded (at least
most of the way).

I was recreating a catalog using bscan by scanning 35 LTO-5 tapes. It
took about two weeks, and near the end I had an i/o error on the drive.
I cleaned the drive and started again at the beginning of the tape where
the i/o error happened. This seems to have worked, as I now have a
catalog that bconsole can use, and it looks reasonably intact. I can't
say authoritatively b/c I just recently took over this system and hadn't
looked much at the backup system before I lost the catalog. I've been
able to restore the data I needed (at least most of it), so this is
great. There's a possible glitch, but I'm going to post about that
separately.

Thanks for all the help from the list along the way!

-M

On 2013-04-15 at 04:14, Michael Stauffer wrote:
> Bacula 3.0.1
>
> Hi, I'm tantalizingly near the end of a bscan reconstruction of a
> catalog over 35 tapes. You may remember my posts starting mid February
> - it took a while, but it also turns out both drives in my device had
> failed, so they've been replaced. It's taken 11 days so far to get to
> tape 30 out of 35, and ... an i/o error has occurred.
>
> bscan: bscan.c:650 Could not find Job for SessId=9 SessTime=1358880703 record.
> 14-Apr 17:47 bscan JobId 0: Error: block.c:1004 Read error on fd=3 at
> file:blk 1202:13403 on device LTO5-0 (/dev/st0). ERR=Input/output error.
> Bacula status: file=1202 block=13403
> Device status: ONLINE IM_REP_EN file=1202 block=-1
>
> The bscan process is still running. Currently 'mt -f /dev/st0 status'
> is returning "/dev/st0: Device or resource busy". Assuming I can get
> the tape 30 in the drive working again, how do I proceed? Do I return
> to the bscan process and hit 'enter' to see if it will start again from
> tape 30? Do I kill the bscan process and start again from tape 30?
> Can I assume that whatever it finds on tape will be added to whatever's
> been added to the database so far? Or do I have to start from
> scratch?! I hope not!!!
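[Editor's note] For reference, the kind of invocation used for this sort of rescan looks roughly like the following; the config path, volume names, and device are examples, and the flags are from the bscan man page (-s stores records in the catalog, -m updates media information, -p proceeds in spite of I/O errors, -v is verbose):

```
bscan -s -m -p -v -c /etc/bacula/bacula-sd.conf -V 'Vol0030|Vol0031' /dev/nst0
```

Records found on the scanned volumes are added to the existing catalog, which is why restarting from the failed tape rather than from tape 1 can work.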
[Bacula-users] error during restore
Hi,

I've run a restore (twice, same result) on a catalog I recently recreated
from tapes using bscan. Mostly it's worked; I see virtually all of the
data restored. It might actually be all, I have no way of knowing. At the
end of the restore I get this error:

25-Apr 04:45 bacula-dir JobId 69: Error: Bacula bacula-dir 3.0.1 (30Apr09): 25-Apr-2013 04:45:39
  Build OS:               i686-redhat-linux-gnu redhat
  JobId:                  69
  Job:                    RestoreFiles.2013-04-24_16.39.28_28
  Restore Client:         ernie-fd
  Start time:             24-Apr-2013 16:39:30
  End time:               25-Apr-2013 04:45:39
  Files Expected:         2,218,711
  Files Restored:         2,109,418
  Bytes Restored:         334,750,797,515
  Rate:                   7683.2 KB/s
  FD Errors:              1
  FD termination status:  Error
  SD termination status:  OK
  Termination:            *** Restore Error ***

So it seems to be saying that most of the files were restored, but about
5% were not. This is the only error in /var/lib/bacula/log, and there are
no errors in the bacula server's /var/log/messages. On the 'ernie-fd'
server, there are no errors in /var/log/messages, and there's no
/var/lib/bacula/log.

Anyone have any insights? When I try to run the restore again, after
choosing the job to restore, it immediately has the file list created,
like it's cached it somewhere. Whereas the first time I did it, it took a
long time to create the file list. Can I remove the file list cache and
have it recreated again to see if that helps?

Here's the output from bconsole's "restore all" before running the actual
restore, in case it's relevant:

Automatically selected FileSet: home_local_fs
+-------+-------+-----------+-----------------+---------------------+------------+
| JobId | Level | JobFiles  | JobBytes        | StartTime           | VolumeName |
+-------+-------+-----------+-----------------+---------------------+------------+
|     7 | F     | 2,241,998 | 347,743,234,389 | 2012-12-28 21:07:02 | L50009     |
|     7 | F     | 2,241,998 | 347,743,234,389 | 2012-12-28 21:07:02 | L50010     |
+-------+-------+-----------+-----------------+---------------------+------------+
You have selected the following JobId: 7

Building directory tree for JobId(s) 7 ...
Bootstrap records written to /var/lib/bacula/bacula-dir.restore.3.bsr

The job will require the following
   Volume(s)    Storage(s)      SD Device(s)
   ===========================================
   L50009       i500-changer    i500-changer
   L50010       i500-changer    i500-changer

2,218,711 files selected to be restored.

Run Restore job
JobName:         RestoreFiles
Bootstrap:       /var/lib/bacula/bacula-dir.restore.3.bsr
Where:           /scratchy/bacula-restores
Replace:         always
FileSet:         Full Set
Backup Client:   ernie-fd
Restore Client:  ernie-fd
Storage:         i500-changer
When:            2013-04-21 16:49:50
Catalog:         MyCatalog
Priority:        10
Plugin Options:  *None*
OK to run? (yes/mod/no):

Thanks,
M
[Bacula-users] Job failed, how to recycle only volumes touched by this job?
Hi Bacula Users!

A backup job I was running failed due to a power error, and I now have
several hundred file volumes in a pool of hard drives which have this
partial/failed backup job written to them. How can I manually tell bacula
that it can discard this partial backup and start overwriting these
volumes again, without it overwriting other volumes in the same pool
(which are storing a successfully completed full backup)?

Thanks!
Leon

--
陆智诚 | Leon White
绿色和平 | Greenpeace East Asia
+86 186 0692 9781 | Skype: strophy
行动，带来改变。 Positive change through action.
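[Editor's note] One approach to sketch in bconsole (the jobid and volume name below are examples): find which volumes the failed job wrote to, then purge just those volumes so only they become eligible for overwriting.

```
*list jobmedia jobid=123        # which volumes did the failed job touch?
*purge volume=FileVol-0042      # drop its catalog records; repeat per volume
*update volume=FileVol-0042 volstatus=Recycle   # make it reusable right away
```

Note that purge removes the records of every job on a volume, so check the jobmedia listing first to confirm none of these volumes is shared with the good full backup.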