Re: [Bacula-users] Problem with install bacula
Hi,

05.09.2007 03:34, Ryan Novosielski wrote:

Manuel Ostendorf wrote: Hello, have you an idea to solve my problem? I still have my problem.

Unfortunately, I can't help. But...

On 8/30/07, Manuel Ostendorf [EMAIL PROTECTED] wrote:

Hello, I cannot install bacula. When I run make after ./configure --with-mysql, I get errors:

== Error in /home/Ponte/bacula ==
/bin/sh: line 1: cd: scripts: Datei oder Verzeichnis nicht gefunden
== Entering directory /home/Ponte/bacula
make[253]: Entering directory `/home/Ponte/bacula'
/bin/sh: line 1: cd: src: Datei oder Verzeichnis nicht gefunden
== Entering directory /home/Ponte/bacula
make[254]: Entering directory `/home/Ponte/bacula'
/bin/sh: line 1: cd: src: Datei oder Verzeichnis nicht gefunden
(the same "cd: src" error repeats for make[255] through make[257])

config.out:

Configuration on Thu Aug 30 18:20:53 CEST 2007:
  Host:                    i686-pc-linux-gnu -- suse 10.2
  Bacula version:          2.2.0 (08 August 2007)
  Source code location:    .
  Install binaries:        /sbin
  Install config files:    /etc/bacula
  Scripts directory:       /etc/bacula
  Working directory:       /var/bacula/working
  PID directory:           /var/run
  Subsys directory:        /var/lock/subsys
  Man directory:           /usr/share/man
  Data directory:          /usr/share
  C Compiler:              gcc 4.1.2
  C++ Compiler:            /usr/bin/g++ 4.1.2
  Compiler flags:          -g -O2 -Wall -fno-strict-aliasing -fno-exceptions -fno-rtti
  Linker flags:
  Libraries:               -lpthread
  Statically Linked Tools: yes
  Statically Linked FD:    no
  Statically Linked SD:    no
  Statically Linked DIR:   no
  Statically Linked CONS:  no
  Database type:           MySQL
  Database lib:            -L/usr/lib/mysql -lmysqlclient_r -lz
  Database name:           bacula
  Database user:           bacula
  Job Output Email:        [EMAIL PROTECTED]
  Traceback Email:         [EMAIL PROTECTED]
  SMTP Host Address:       localhost
  Director Port:           9101
  File daemon Port:        9102
  Storage daemon Port:     9103
  Director User:
  Director Group:
  Storage Daemon User:
  Storage Daemon Group:
  File Daemon User:
  File Daemon Group:
  SQL binaries Directory:  /usr/bin
  Large file support:      yes
  Bacula conio support:    yes -lncurses
  readline support:        no
  TCP Wrappers support:    no
  TLS support:             no
  Encryption support:      no
  ZLIB support:            yes
  enable-smartalloc:       yes
  bat support:             no
  enable-gnome:            no
  enable-bwx-console:      no
  enable-tray-monitor:
  client-only:             no
  build-dird:              yes
  build-stored:            yes
  ACL support:             yes
  Python support:          no
  Batch insert enabled:    yes

Can you tell me what the problem is? How can I solve it? Thanks, Manuel Ostendorf

I don't speak that language, so I can't begin to figure out what that means. If you could tell us, probably others might know also.

"Datei oder Verzeichnis nicht gefunden" is "file or directory not found". Without actually trying to build Bacula recently, and without a closer look at the Makefile, I suppose this could be a permissions problem. What I think happens is that make, or rather the commands it calls, can't change into the directories it needs to do its work in. Check that the directories actually exist and can be accessed by whoever runs make.
Also, it might be useful to provide some additional information: how did you load the sources, how did you unpack them, and what does the main Bacula directory look like? File ownership and permissions would be especially interesting. (It's interesting that ./configure seems to run ok, but make doesn't, though...)

Arno

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
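Arno's suggestion can be sketched as a small shell check: walk the build tree and report any subdirectory that is missing or inaccessible to the current user, which is exactly what the "cd: scripts" and "cd: src" failures above suggest. The path /home/Ponte/bacula comes from the error output; everything else is a generic sketch, not part of Bacula's own tooling.

```shell
# Hedged sketch: report build-tree subdirectories that make could not cd into.
check_build_dirs() {
    base=$1
    for d in scripts src; do
        if [ ! -d "$base/$d" ]; then
            echo "MISSING: $base/$d"
        elif [ ! -r "$base/$d" ] || [ ! -x "$base/$d" ]; then
            echo "NO ACCESS: $base/$d (inspect ownership with ls -ld)"
        else
            echo "OK: $base/$d"
        fi
    done
}

# Path taken from the original error output:
check_build_dirs /home/Ponte/bacula
```

If this prints MISSING, the tarball probably didn't unpack completely; if it prints NO ACCESS, fix ownership or run make as the user who unpacked the sources.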
Re: [Bacula-users] Frequent Intervention emails
Hello,

05.09.2007 07:00, Support wrote:

Dear All, does anyone have an idea as to where I can change a setting to reduce the frequency of the "Bacula: Intervention needed ..." messages that occur every 2 minutes?

Normally it sends mail immediately, then after one hour, two hours, four hours, and so on...

This happened when I forgot to change tapes in an autoloader, and over 5 hours I got 150+ emails requesting I mount another volume. I would like an email every 15 or 20 minutes. I think this may be autoloader/autochanger specific, as some time back I had a similar problem with a DLT unit and it sent an email every 20 minutes. My suspicion is the fix or hack is in wait.c in /stored.

Is it possible you have a heartbeat interval set in the SD?

Arno

Thanks, Stephen Carr

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de

-
This SF.net email is sponsored by: Splunk Inc. Still grepping through log files to find problems? Stop. Now Search log events and configuration files using AJAX and a browser. Download your FREE copy of Splunk now http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] Data Encryption with Winbacula 2.0.3 [solved]
Hi all, sorry, but I didn't know about the openssl tool for Windows. I installed that tool, made the same configuration as on Linux, and... it works!

Daniel

Hi, is it possible to work with data encryption on a bacula-fd for Windows XP? I couldn't find any entries in the list or in the documentation. I am testing a Bacula Server 2.0.3 on Debian Etch; my clients are Linux and Windows. The data encryption on Linux seems to work well. Thanks for your answers!

Daniel
[Bacula-users] sorting bconsole's 'list media'?
My output in bconsole for 'list media' appears sorted by MediaId. Is there a setting or something where I can have the default sort by VolumeName?

Mike
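As far as I know there is no Director setting for the sort order of 'list media'; it comes back in MediaId order. One workaround is to query the catalog directly from bconsole's sqlquery prompt. The table and column names below are from the standard Bacula catalog schema; treat this as a sketch, not a documented bconsole feature.

```shell
# Prepare a catalog query that lists volumes sorted by name rather than
# by MediaId. Paste it at bconsole's "sqlquery" prompt.
query='SELECT VolumeName, MediaId, VolStatus, VolBytes
         FROM Media
     ORDER BY VolumeName;'
printf '%s\n' "$query"
# In bconsole:  sqlquery  ->  paste the SELECT above  ->  quit
```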
Re: [Bacula-users] Frequent Intervention emails
Arno Lehmann wrote:

Hello, 05.09.2007 07:00, Support wrote: Dear All, does anyone have an idea as to where I can change a setting to reduce the frequency of the "Bacula: Intervention needed ..." messages that occur every 2 minutes?

Normally it sends mail immediately, then after one hour, two hours, four hours, and so on...

That is what mine did this morning. Now, 1h, 2h, and 4h.

brian
[Bacula-users] Bug with Maximum Volume Jobs on bacula 2.0.3?!
I've noticed some strange behavior with the Maximum Volume Jobs parameter. I set it to 3 at volume initialization. For the following backups on this volume, I changed the parameter to 5 (because I have 3 new servers to back up). After a while I get this error:

05-sep 15:18 vel-bck-1-sd: papi-mgc-002-backup.2007-09-05_00.01.05 Error: Re-read of last block OK, but block numbers differ. Last block=62775 Current block=62777.

and the volume becomes Full (it's not really full, but Bacula closes it). The only way to get past this problem is to delete the volume in the catalog and erase the tape completely (with mt); mt -f /dev/tape weof isn't sufficient.

Why this behavior? I think the Maximum Volume Jobs information is also stored on the tape (in the volume itself), so the tape needs to be erased to work again.

Has anyone else noticed this? Thanks

Noran
[Bacula-users] restore: device is blocked waiting for media
Hi, I need to restore a few files from an on-disk backup. I've run the restore procedure, marking the files, and then the system's status appears to be BLOCKED, waiting for media (?) as if it were unlabeled (?).

Running Jobs:
Reading: Full Restore job restore_mySelf JobId=1050 Volume=mammuth_uff_g_job-2007-08-30--23-47
    pool=Default device=mammuth_device (/backup/mammuth)
    Files=0 Bytes=0 Bytes/sec=0
    FDReadSeqNo=32 in_msg=31 out_msg=5 fd=6
Device mammuth_device (/backup/mammuth) is not open or does not exist.
    Device is BLOCKED waiting for media.

and the file from which to restore exists and looks right (its size matches). What am I doing wrong?

Thanks, Luca
[Bacula-users] Bacula again running at 5% of full speed
All,

The slowdown on RHEL has returned, even after upgrading from 1.38.8 to 2.0.3:

JobID  FileSet  St  T  L  EndTime       Bytes     Rate          Elapsed
2838   raid1    Cn  B  F  08-Aug 13:25  332.1 GB  2476.8 KB/s   1 day 13 hours 14 mins 59 secs
2843   system   OK  B  F  16-Aug 12:57  3.992 GB  11036.4 KB/s  6 mins 1 sec
2844   raid1    OK  B  F  16-Aug 21:27  1.495 TB  54848.7 KB/s  7 hours 34 mins 18 secs
2897   system   OK  B  F  02-Sep 23:41  3.583 GB  2274.0 KB/s   26 mins 12 secs
2900   raid1    Cn  B  F  05-Sep 13:34  341.7 GB  2477.1 KB/s   1 day 14 hours 19 mins 20 secs

I don't know what has changed since job 2844 ran normally, but this month's full backup (job 2900) is back to running very slowly (2477.1 KB/s is 4.5% of 54848.7 KB/s). This never happened with RHEL 3 and its version of Postgresql.

Current system information:
O/S: Red Hat Enterprise Linux Server release 5 (Tikanga)
Bacula: both 1.38.8 and 2.0.3
Postgresql: postgresql-server-8.1.9-1.el5

I would really appreciate suggestions for diagnosing this problem, particularly how to get additional information regarding what Bacula is doing when it's running slowly. While I suspect the slowdown is probably due to the Bacula/Postgresql interaction, I'm not sure how to pinpoint that as the cause of the problem. Can Bacula be compiled with debugging flags to produce additional logging information? The Postgresql server is running on another computer, so using tcpdump on the network traffic is an option as well.

Thanks.

Tod

On Thu, 2007-08-16 at 13:34 -0400, Tod Hagan wrote:

All, rather than try to figure out why 1.38.8 was running slowly after upgrading RHEL 3 to RHEL 5 and its newer version of Postgresql, I upgraded Bacula to 2.0.3. Once I got grant_postgresql_privileges running with help from the list, this message was reported:

psql:stdin:62: NOTICE: number of page slots needed (29760) exceeds max_fsm_pages (2)
HINT: Consider increasing the configuration parameter max_fsm_pages to a value over 29760.

I edited /var/lib/pgsql/data/postgresql.conf to set max_fsm_pages = 4 and restarted Postgresql. A test backup shows that speeds are now comparable to the old configuration of 1.38.8 on RHEL 3. Even better, bconsole commands such as getting the director status or doing queries using sqlquery are now appreciably faster. Thanks all for your help.

Tod

--
Tod Hagan
Information Technologist
AIRMAP/Climate Change Research Center
Institute for the Study of Earth, Oceans, and Space
University of New Hampshire
Durham, NH 03824
Phone: 603-862-3116
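The fix Tod describes can be sketched as a one-line edit, shown here as a function that operates on a path you pass in so nothing live is touched by accident. The value 40000 is purely illustrative, chosen only because PostgreSQL's hint asked for something over 29760; Tod's exact number isn't preserved in the post.

```shell
# Hedged sketch: raise max_fsm_pages in postgresql.conf past the hinted
# minimum (29760). PostgreSQL must be restarted for the change to apply.
bump_fsm_pages() {
    conf=$1
    pages=$2
    # Replace a commented-out or active max_fsm_pages line in place.
    sed -i "s/^#*max_fsm_pages[[:space:]]*=.*/max_fsm_pages = $pages/" "$conf"
}

# Example usage (path as in the post, value illustrative):
# bump_fsm_pages /var/lib/pgsql/data/postgresql.conf 40000
# service postgresql restart
```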
Re: [Bacula-users] Bug with Maximum Volume Jobs on bacula 2.0.3?!
Hi,

05.09.2007 17:31, Xeos Laenor wrote:

I've noticed some strange behavior with the Maximum Volume Jobs parameter. I set it to 3 at volume initialization. For the following backups on this volume, I changed the parameter to 5 (because I have 3 new servers to back up). After a while I get this error:

05-sep 15:18 vel-bck-1-sd: papi-mgc-002-backup.2007-09-05_00.01.05 Error: Re-read of last block OK, but block numbers differ. Last block=62775 Current block=62777.

Which version of Bacula?

and the volume becomes Full (it's not really full, but Bacula closes it).

Could it be that the tape is simply full?

The only way to get past this problem is to delete the volume in the catalog and erase the tape completely (with mt); mt -f /dev/tape weof isn't sufficient.

Right, Bacula needs to know the tape is full. Furthermore, a single weof is not sufficient to erase a tape. Typically, you need two EOFs in sequence.

Why this behavior? I think the Maximum Volume Jobs information is also stored on the tape (in the volume itself), so the tape needs to be erased to work again.

No, this information is not stored on tape, but in the catalog. After you change the pool definition, you have to manually update the existing volumes if you want this change to affect them. The pool definition is merely a template for the creation of new volumes.

Has anyone else noticed this?

Not me...

Arno

Thanks, Noran

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
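Arno's point about needing two EOF marks can be sketched as a small script. The device path /dev/tape is taken from the original post; the existence check is just a guard so the sketch does nothing on a machine without a tape drive. Remember that the volume record also has to be deleted from (or updated in) the catalog, since that is where Bacula actually keeps the Maximum Volume Jobs value.

```shell
# Hedged sketch: Bacula only treats a tape as empty when it finds two EOF
# marks at the beginning, so a single "mt weof" is not enough.
mark_tape_empty() {
    dev=$1
    if [ ! -c "$dev" ]; then
        echo "no tape device at $dev, skipping"
        return 0
    fi
    mt -f "$dev" rewind
    mt -f "$dev" weof 2   # write two EOF marks in sequence
    mt -f "$dev" rewind
}

mark_tape_empty /dev/tape
```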
Re: [Bacula-users] restore: device is blocked waiting for media
Hi,

05.09.2007 18:02, Luca Ferrari wrote:

Hi, I need to restore a few files from an on-disk backup. I've run the restore procedure, marking the files, and then the system's status appears to be BLOCKED, waiting for media (?) as if it were unlabeled (?).

Running Jobs:
Reading: Full Restore job restore_mySelf JobId=1050 Volume=mammuth_uff_g_job-2007-08-30--23-47
    pool=Default device=mammuth_device (/backup/mammuth)
    Files=0 Bytes=0 Bytes/sec=0
    FDReadSeqNo=32 in_msg=31 out_msg=5 fd=6
Device mammuth_device (/backup/mammuth) is not open or does not exist.
    Device is BLOCKED waiting for media.

and the file from which to restore exists and looks right (its size matches). What am I doing wrong?

I think we'll need some more detailed information here... does Bacula send a notification to load the volume it wants, what are the Media Types, and so on... the relevant parts of your configuration might help here.

Arno

Thanks, Luca

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
Re: [Bacula-users] Bug with Maximum Volume Jobs on bacula 2.0.3?!
Bacula 2.0.3

2007/9/5, Arno Lehmann [EMAIL PROTECTED]:

Hi, no, this information is not stored on tape, but in the catalog. After you change the pool definition, you have to manually update the existing volumes if you want this change to affect them. The pool definition is merely a template for the creation of new volumes.

Hmm, ok, that may be the solution to my problem: when I want to change the Maximum Volume Jobs parameter, I also have to use the update command in bconsole so that Bacula really uses the new value on the volume, right? Until now, I only put the new value in the conf file and reloaded in bconsole. Apparently that's not sufficient. ^^ OK, it's my fault, it was in the doc :-\

Thanks for your response.

Franck
Re: [Bacula-users] missing part on a DVD... why? what to do?
Hi,

05.09.2007 18:43, Wes Hardaker wrote:

AL == Arno Lehmann [EMAIL PROTECTED] writes:

[Arno: sorry... you're going to get multiple copies of this, as I messed up the from address that the list expects me to use]

No problem...

[This message is actually about an old thread I started a while back regarding DVDs not being written properly, probably due to small writes; Arno Lehmann kindly responded with good advice]

WH I have it working with DVDs, and as far as I can tell the situation in question went like this:
WH - it was backing up the main system.
WH - I believe it tried to store the information in: BaculaDVD0009.8.
WH - However, the disc doesn't actually contain that part, but the log shows success:
WH
WH 12-Jun 23:05 machine-sd: Ready to append to end of Volume BaculaDVD0009 part=8 size=3040970041
WH 12-Jun 23:05 machine-sd: Job write elapsed time = 00:00:04, Transfer rate = 57.92 K bytes/second
WH 12-Jun 23:05 machine-sd: Part 8 (233000 bytes) written to DVD.

AL It might be that the part file in question is still in your temporary
AL storage directory. Do you have Write Part After Job set in the job
AL definition?

I have, in the end, implemented Arno's solution, which is to set Write Part After Job on only the catalog backup that gets executed after everything else. However, in all of this I had 2 disks (out of 11) that failed in the way indicated: bacula says to mount the disk, but it indeed was mounted; bacula just didn't know that because the last part was missing from the disk. (I fixed these by marking them as Full, and bacula then proceeded to ask for a new volume.)

So my remaining questions are ones I can't find documentation for:

I haven't used DVD writing for a long time, so I'm not sure about these questions, but anyway...

1) Is the disk with the missing part actually missing data (I assume so)?

It probably is, yes.
2) Was the part that was supposed to be written to an older volume (say #8 in a pool), but couldn't be, written to the new next volume (#9) instead, or is that part lost as well?

I think it will be lost, but you can try to check. As far as I know, Bacula does not rename the part files it writes to DVD, so if a part from volume 8 were written to disk 9, there should be a file with a name belonging to volume 8 on that disk.

3) How do I verify that the contents of a DVD match what bacula expects to be on the disk? I haven't seen an archive verification option anywhere.

Hmmm... a Volume to Catalog verify job could help, but there seem to be problems there. I'd try bls on the disks and see what that reports.

4) Assuming that some of the archives are bad, what's the best way to fix the issue? Can I mark the volume as Error, and will bacula automatically re-archive the files that were in that volume on the next run through the various backup schedules? (The brunt of the question is really: do files get re-backed-up automatically when a volume is marked in error, or does something else need to be done?)

That won't work. It will be best to check which jobs have their data on the disks in question (using the 'query' command) and manually re-run any jobs you need.

5) Should I give up and buy a tape drive (ha ha; sigh)?

Seriously, in my opinion that would be the best thing to do... even a used DLT or LTO drive will probably be more reliable than DVD backup, and tape backups, in my experience, require much less administration than DVD ones, so the extra money you spend will result in less time spent operating Bacula in the future.

Arno

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
Re: [Bacula-users] Bacula again running at 5% of full speed
Hi,

05.09.2007 20:06, Tod Hagan wrote:

All, the slowdown on RHEL has returned, even after upgrading from 1.38.8 to 2.0.3:

JobID  FileSet  St  T  L  EndTime       Bytes     Rate          Elapsed
2838   raid1    Cn  B  F  08-Aug 13:25  332.1 GB  2476.8 KB/s   1 day 13 hours 14 mins 59 secs
2843   system   OK  B  F  16-Aug 12:57  3.992 GB  11036.4 KB/s  6 mins 1 sec
2844   raid1    OK  B  F  16-Aug 21:27  1.495 TB  54848.7 KB/s  7 hours 34 mins 18 secs
2897   system   OK  B  F  02-Sep 23:41  3.583 GB  2274.0 KB/s   26 mins 12 secs
2900   raid1    Cn  B  F  05-Sep 13:34  341.7 GB  2477.1 KB/s   1 day 14 hours 19 mins 20 secs

I don't know what has changed since job 2844 ran normally, but this month's full backup (job 2900) is back to running very slowly (2477.1 KB/s is 4.5% of 54848.7 KB/s). This never happened with RHEL 3 and its version of Postgresql.

Current system information:
O/S: Red Hat Enterprise Linux Server release 5 (Tikanga)
Bacula: both 1.38.8 and 2.0.3
Postgresql: postgresql-server-8.1.9-1.el5

I would really appreciate suggestions for diagnosing this problem, particularly how to get additional information regarding what Bacula is doing when it's running slowly. While I suspect the slowdown is probably due to the Bacula/Postgresql interaction, I'm not sure how to pinpoint that as the cause of the problem. Can Bacula be compiled with debugging flags to produce additional logging information?

No need to compile, at least for now... use the 'setdebug' command, e.g. 'setdebug dir level=200 trace=1' and 'setdebug sd=your_SD level=200 trace=1', and read the resulting (large!) trace files in the working directories. Unfortunately, there are no time stamps in the log files, so it's hard to determine what actually needs so much time... Also, check what your systems are actually doing... using vmstat, top, and perhaps strace on the DIR machine might reveal where all that time goes; on the catalog database server, you should also observe PostgreSQL, but since I'm not a PostgreSQL guy, you'd better ask others for advice :-)

The Postgresql server is running on another computer, so using tcpdump on the network traffic is an option as well.

tcpdump could help, but I guess that would not help in actually finding out why the catalog is so slow (assuming the catalog _is_ the bottleneck here).

Arno

Thanks.

Tod

On Thu, 2007-08-16 at 13:34 -0400, Tod Hagan wrote:

All, rather than try to figure out why 1.38.8 was running slowly after upgrading RHEL 3 to RHEL 5 and its newer version of Postgresql, I upgraded Bacula to 2.0.3. Once I got grant_postgresql_privileges running with help from the list, this message was reported:

psql:stdin:62: NOTICE: number of page slots needed (29760) exceeds max_fsm_pages (2)
HINT: Consider increasing the configuration parameter max_fsm_pages to a value over 29760.

I edited /var/lib/pgsql/data/postgresql.conf to set max_fsm_pages = 4 and restarted Postgresql. A test backup shows that speeds are now comparable to the old configuration of 1.38.8 on RHEL 3. Even better, bconsole commands such as getting the director status or doing queries using sqlquery are now appreciably faster. Thanks all for your help.

Tod

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
Re: [Bacula-users] Bug with Maximum Volume Jobs on bacula 2.0.3?!
Hello,

05.09.2007 21:31, Xeos Laenor wrote:

Bacula 2.0.3

Damn, I should have read the subject line :-)

2007/9/5, Arno Lehmann [EMAIL PROTECTED]:

Hi, no, this information is not stored on tape, but in the catalog. After you change the pool definition, you have to manually update the existing volumes if you want this change to affect them. The pool definition is merely a template for the creation of new volumes.

Hmm, ok, that may be the solution to my problem: when I want to change the Maximum Volume Jobs parameter, I also have to use the update command in bconsole so that Bacula really uses the new value on the volume, right?

Exactly.

Until now, I only put the new value in the conf file and reloaded in bconsole. Apparently that's not sufficient. ^^ OK, it's my fault, it was in the doc :-\ Thanks for your response.

Always a pleasure :-) But I'm still curious whether that fixes the block numbers mismatch... I recall a bug report that might be what you're experiencing, and it could be that it was fixed in 2.2. You might want to check the bugs mentioned in the ReleaseNotes file for 2.2...

Arno

Franck

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
[Bacula-users] Bacula Windows client strange behaviour
Hi everyone,

I use bacula with success on a mixed network with MacOS X, Linux and Windows XP workstations. Everything goes very well, but on two of the Windows XP PCs the backup starts fine, then slows down and hangs. I mean the average speed slowly goes to 0 b/sec while the job runs. The FD is alive, and the job looks like it is running both on the Director/Storage (Linux) and on the FD client. Everyone uses the latest available Bacula version, 2.2.x, and the problem appears only on two Windows PCs, while the others work really fine. The PCs look identical in services and configuration (the Fileset is different, of course) to the ones that work. I've tried both VSS and non-VSS backup with the same result. I've looked into the docs for something useful with absolutely no success. Does anyone have any idea about where to start checking to get this thing debugged?

TIA
Alessandro
[Bacula-users] Schedules and start time delay
Is there any way to have the Max Start Time Delay overridable on a schedule, other than running multiple jobs at the moment? The reason I ask is that backups during the week need to start no later than a certain time, but weekends are not a problem.

Blake Dunlap
Network Operations
ISDN-Net, Inc.
Re: [Bacula-users] missing part on a DVD... why? what to do?
AL == Arno Lehmann [EMAIL PROTECTED] writes:

(first: thanks for the help!)

4) Assuming that some of the archives are bad, what's the best way to fix the issue? Can I mark the volume as Error, and will bacula automatically re-archive the files that were in that volume on the next run through the various backup schedules? (The brunt of the question is really: do files get re-backed-up automatically when a volume is marked in error, or does something else need to be done?)

AL That won't work. It will be best to check which jobs have their data
AL on the disks in question (using the 'query' command) and manually
AL re-run any jobs you need.

That won't work if it's an incremental backup, right? Because re-running it will take an incremental from the last one.

5) Should I give up and buy a tape drive (ha ha; sigh)?

AL Seriously, in my opinion that would be the best thing to do... even
AL a used DLT or LTO drive will probably be more reliable than DVD
AL backup, and tape backups, in my experience, require much less
AL administration than DVD ones, so the extra money you spend will result
AL in less time to operate Bacula in the future.

Sigh... You're right, of course, but I was trying to do this without a cost outlay, since I have a DVD writer already ;-) I.e., a cheap but functional at-home backup solution.

Hmm... I wonder if rewriting the DVD backend to write to an ISO mounted in loopback and then burn the ISO would be more reliable. It'd take more scratch disk space, but wouldn't suffer from problems like this. It'd also be a lot slower, since you'd have to reburn the whole disk when you added a part to the ISO.

--
In the bathtub of history the truth is harder to hold than the soap, and much more difficult to find. -- Terry Pratchett
[Bacula-users] autochanger mount/unmount problem
I've been having this trouble for a while. It was just posted on the list to add autochanger=yes in a spot in the bacula-dir.conf file, and that solved one problem I had unmounting/mounting... but I'm still left with this one... anybody have any ideas what's still wrong?

Bob

*unmount
Dell-PV136T
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3307 Issuing autochanger unload slot 18, drive 0 command.
3995 Bad autochanger unload slot 18, drive 0: ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 18...
mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 53
mtx: Request Sense: Additional Sense Qualifier = 01
mtx: Request Sense: BPV=no
mtx: R
3002 Device IBMLTO2-1 (/dev/nst0) unmounted.

*mount
Automatically selected Storage: Dell-PV136T
Enter autochanger slot: 18
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result is Slot 18.
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result is Slot 18.
3001 Mounted Volume: LTO219L2
3001 Device IBMLTO2-1 (/dev/nst0) is mounted with Volume LTO219L2
Re: [Bacula-users] missing part on a DVD... why? what to do?
Hi,

05.09.2007 23:00, Wes Hardaker wrote:

AL == Arno Lehmann [EMAIL PROTECTED] writes:

(first: thanks for the help!)

4) assuming that some of the archives are bad, what's the best way to fix the issue? Can I mark the volume as Error, and will Bacula then automatically re-archive the files that were on that volume on the next run through the various backup schedules? (The brunt of the question is really: do files get re-backed up automatically when a volume is marked in error, or does something else need to be done?)

AL That won't work. It will be best to check which jobs have their data
AL on the disks in question (using the 'query' command) and manually
AL re-run any jobs you need.

That won't work if it's an incremental backup, right? Because re-running it will take an incremental from the last one.

Yes. You need either a full backup or one level above what is no longer accessible.

5) Should I give up and buy a tape drive (ha ha; sigh)

AL Seriously, in my opinion that would be the best thing to do... even
AL using a used DLT or LTO drive will probably be more reliable than DVD
AL backup, and tape backups, in my experience, require much less
AL administration than DVD ones, so the extra money you spend will result
AL in less time spent operating Bacula in the future.

Sigh... You're right, of course, but I was trying to do this without a cost outlay since I already have a DVD writer ;-) I.e., a cheap but functional at-home backup solution. Hmm... I wonder if rewriting the DVD backend to write to an ISO mounted in loopback and then burning the ISO would be more reliable.

That might be one option, but I guess the main problem is that Bacula simply doesn't handle things very well when the writing-to-disk phase has problems. You'd need something more integrated into the SD, or implement some way to signal "re-try this part to the next disk" to the SD.

It'd take more scratch disk space, but wouldn't suffer from problems like this.

I don't think so... when the actual writing goes wrong, for whatever reason, you simply can't tell the SD to retry the parts still in spool space.

It'd also be a lot slower since you'd have to reburn the whole disk when you added a part to the ISO.

Quite a lot slower: first read the existing contents, integrate the new part into them (possibly requiring remastering of the whole file system), then write the whole image to disk... also an additional strain on the disks themselves. Not to forget the DVD writer - I know that some of them get funny when they get warm :-)

Arno

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
Re: [Bacula-users] Frequent Intervention emails
Dear Arno,

Yes, I do have the heartbeat interval set to 2 minutes. I will remove it. I had put it there to see if it would help with client disconnects; it did not. FYI: if a client disconnects, I found out the default TCP timeout of 2 hours is timed from the initial connection time.

Thanks
Stephen Carr

Arno Lehmann wrote:

Hello,

05.09.2007 07:00, Support wrote:

Dear All

Anyone have an idea as to where I can change a setting to reduce the frequency of the "Bacula: Intervention needed ..." mails that occur every 2 minutes?

Normally it sends mail immediately, after one hour, two hours, four hours, and so on...

This happened when I forgot to change tapes in an autoloader, and over 5 hours I got 150+ emails requesting I mount another volume. I would like an email every 15 or 20 minutes. I think this may be autoloader/autochanger specific, as some time back I had a similar problem with a DLT unit and it sent an email every 20 minutes. My suspicion is the fix or hack is in wait.c in /stored.

Is it possible you have a heartbeat interval set in the SD?

Arno

Thanks
Stephen Carr

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
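For reference, the directive in question looks like this. This is a hypothetical excerpt from the Storage resource in bacula-sd.conf (the Name and paths are placeholders; the same directive also exists for the FD and DIR); deleting the line, or setting it to 0, restores the default of no heartbeats:

```
# Hypothetical excerpt from bacula-sd.conf -- resource name and paths
# are made up for illustration.
Storage {
  Name = gyrus-sd
  SDPort = 9103
  Working Directory = "/var/bacula/working"
  Pid Directory = "/var/run"
  Heartbeat Interval = 2 minutes   # remove this line (or set 0) to disable heartbeats
}
```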
[Bacula-users] bscan and HFS+?
Hi All,

I'm trying to use bscan to recover files from a purged volume (the client e-mailed about deleted files the day after the volume was purged, of course...). The clients are OS X, and below is what I'm getting from a dry run that doesn't modify the database. I'm assuming type=14 is the resource fork and type=13 are chunks of the data fork. Has anybody done this successfully? I was running 1.38, but just upgraded to 2.2.0.

bscan: bscan.c:792 Unknown stream type!!! stream=14 len=32
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=31013
bscan: bscan.c:792 Unknown stream type!!! stream=14 len=32
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=32768
bscan: bscan.c:792 Unknown stream type!!! stream=13 len=8777
bscan: bscan.c:792 Unknown stream type!!! stream=14 len=32

Thanks!
--jim
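For anyone trying the same thing, the invocation pattern is roughly as follows (the device, config path and volume name here are placeholders, not taken from the post above):

```
# Scan a volume without touching the catalog: with no -s flag, bscan
# only reports what it finds. Add -s (and -m to update media info) to
# actually re-insert the records into the database.
bscan -v -c /etc/bacula/bacula-sd.conf -V Full-0042 /dev/nst0
```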
Re: [Bacula-users] autochanger mount/unmount problem
Hi,

05.09.2007 22:30, Bob Hetzel wrote:

I've been having this trouble for a while. It was just posted on the list to add autochanger=yes into a spot in the bacula-dir.conf file, and that solved one problem I had unmounting/mounting... but I'm still left with this one. Anybody have any ideas what's still wrong?

Bob

*unmount Dell-PV136T
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3307 Issuing autochanger unload slot 18, drive 0 command.
3995 Bad autochanger unload slot 18, drive 0: ERR=Child exited with code 1

This is an issue in the mtx-changer script. In this case, mtx itself produced an error. Most likely, you need to set the tape offline before it can be unloaded. Look into the mtx-changer script; there is some inline documentation available.

Arno

Results=Unloading drive 0 into Storage Element 18...mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 53
mtx: Request Sense: Additional Sense Qualifier = 01
mtx: Request Sense: BPV=no
mtx: R
3002 Device IBMLTO2-1 (/dev/nst0) unmounted.
*mount
Automatically selected Storage: Dell-PV136T
Enter autochanger slot: 18
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result is Slot 18.
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result is Slot 18.
3001 Mounted Volume: LTO219L2
3001 Device IBMLTO2-1 (/dev/nst0) is mounted with Volume LTO219L2

--
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
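For what it's worth, the fix usually amounts to ejecting the tape before the changer moves it. The sketch below is a simplified stand-in for mtx-changer's unload branch, not the real script (which takes ctl-device, command, slot, archive-device and drive as arguments); the MT and MTX variables are assumptions, made overridable so the logic can be exercised without hardware:

```shell
#!/bin/sh
# Simplified sketch of an mtx-changer-style "unload" step that sets the
# drive offline before asking the changer to move the tape. Drives that
# still have the tape loaded often answer "Illegal Request" otherwise.
MT=${MT:-mt}
MTX=${MTX:-mtx}

unload_tape() {
    ctl=$1 slot=$2 device=$3 drive=$4
    # Eject the tape from the drive first.
    "$MT" -f "$device" offline
    sleep 1   # give the drive a moment to finish ejecting
    # Now the changer can move the cartridge back to its slot.
    "$MTX" -f "$ctl" unload "$slot" "$drive"
}
```

Older mtx-changer versions expose this as a variable (something like `offline=`) near the top of the script; check the inline comments Arno mentions rather than relying on the names used here.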
Re: [Bacula-users] missing part on a DVD... why? what to do?
AL == Arno Lehmann [EMAIL PROTECTED] writes:

AL That might be one option, but I guess the main problem is that Bacula
AL simply doesn't handle things very well when the writing-to-disk phase
AL has problems. You'd need something more integrated into the SD, or
AL implement some way to signal "re-try this part to the next disk" to
AL the SD.

I have a hard time believing there is no way to say "whoops, I accidentally lost volume 5 in a fire. Please continue as if it doesn't exist, and all the files that were on it should no longer exist in the catalog." That's really what I need at the moment (I know which volumes are bad).

Trying to guess at how the schema works (danger!), it looks like you might be able to look at the JobMedia table and use it to remove the entries for a broken volume from the File table, so the files would get backed up next time and suddenly be "needed" again. But that's based on not reading any documentation on what the columns actually mean; if someone says I'm on the right track I might go down that road :-)

--
In the bathtub of history the truth is harder to hold than the soap, and much more difficult to find. -- Terry Pratchett
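The join being guessed at can be sketched as below. The table and column names (Media.VolumeName, JobMedia, Job) follow Bacula's catalog schema, but verify them against your own catalog before deleting anything; the toy rows and sqlite3 are only here so the query can be run standalone, without a live MySQL catalog:

```shell
#!/bin/sh
# Demonstrates, with throwaway sqlite3 data, the JobMedia join that
# lists every job with data on a given (broken) volume.
db=$(mktemp)
result=$(sqlite3 "$db" <<'EOF'
CREATE TABLE Media    (MediaId INTEGER, VolumeName TEXT);
CREATE TABLE JobMedia (JobId INTEGER, MediaId INTEGER);
CREATE TABLE Job      (JobId INTEGER, Name TEXT, Level TEXT);
INSERT INTO Media    VALUES (5, 'DVD-0005');      -- the lost volume
INSERT INTO JobMedia VALUES (42, 5);              -- job 42 wrote to it
INSERT INTO Job      VALUES (42, 'HomeBackup', 'F');
-- Which jobs have data on the broken volume?
SELECT DISTINCT Job.JobId, Job.Name, Job.Level
  FROM Job
  JOIN JobMedia ON JobMedia.JobId = Job.JobId
  JOIN Media    ON Media.MediaId  = JobMedia.MediaId
 WHERE Media.VolumeName = 'DVD-0005';
EOF
)
echo "$result"
rm -f "$db"
```

Against a real catalog the same SELECT would run through the mysql client; bconsole's canned 'query' command ships a similar "jobs on a volume" query, which is a safer starting point than hand-deleting File rows.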
Re: [Bacula-users] Verify reports always OK
* Thomas Glatthor wrote on 05.09.07 at 18:42:

is no one using verify jobs with bacula? can no one confirm that verify is working as expected in 2.2.0, and that it must be my problem instead of a serious bug?

You might open a bug report if no one can verify it here.

-Marc

--
* (morganj): 0 is false and 1 is true, correct?
* (alec_eso): 1, morganj
* (morganj): bastard.
Re: [Bacula-users] Bacula Windows client strange behaviour
I mean the average speed slowly goes to 0 b/sec while the job runs.

Definitely sounds like it is stopping cold.

The FD is alive, and the job looks running both on the Director/Storage (Linux) and on the FD client.

Does the FD respond to a cancel from the director?

Does anyone have any idea about what to start checking to have this thing debugged?

I find that Process Monitor from Sysinternals (www.sysinternals.com) is great for looking at what a process is doing. Maybe you can have a look at bacula-fd at or around the point where it stops?

James
Re: [Bacula-users] Bacula Windows client strange behaviour
Hi Alessandro,

On Wednesday 05 September 2007 04:31:22 pm Alessandro Bianchi wrote:

Hi everyone

I use bacula with success on a mixed network with Mac OS X, Linux and Windows XP workstations. Everything goes very well, but on two of the Windows XP PCs the backup starts fine, then slows down and hangs. I mean the average speed slowly goes to 0 b/sec while the job runs. The FD is alive, and the job looks running both on the Director/Storage (Linux) and on the FD client. Everyone uses the latest available Bacula version 2.2.x, and the problem appears only on two Windows PCs while the others work really fine. The PCs look identical in services and configuration (the fileset is different, of course) to the ones working. I've tried both VSS and non-VSS backup with the same result. I've looked into the docs for something useful with absolutely no success. Does anyone have any idea about what to start checking to have this thing debugged?

The file client may well just be busy scanning files. If some of the filesets happen to include folders with thousands of small (or otherwise) files, then scanning those folders and collecting attribute data for all of the files can be painfully slow. In my experience this often creates quite a bottleneck.

You can check this by requesting client status from the director and checking whether it keeps progressing through the list of files, i.e. whether the names of the files currently being processed by the client FD are changing at a steady rate. You can also do an estimate listing for that client and compare the estimate runtime to other clients which run their backups faster. That will tell you how much time the FD needs to scan all the files in a fileset.

--Ivan
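Concretely, the two checks described above look like this in bconsole (the client and job names are placeholders):

```
* status client=winpc1-fd
* estimate job=WinPC1-Backup level=Incremental listing
```

The client status output shows the file currently being examined; if it changes steadily, the FD is scanning rather than hung. The estimate walks the fileset without transferring any data, so its runtime approximates pure scan time.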