[Bacula-users] Re: Segmentation violation
I tried upgrading to MySQL 5 and recompiling Bacula (yes, I did a make distclean), but I got the same error. I have now installed MySQL 4.1.15 again, recompiled Bacula, deleted the database, and deleted the tapes so I could start fresh. When trying to restore from a large job, Bacula crashes; when trying to restore from a small job, everything works fine. Does Bacula have shortcomings when it comes to enterprise systems?

I have now run several backups of several servers and everything works fine, until I wanted to do a restore from the biggest job:

+-------+-------+-----------+-----------------+---------------------+------------+-----------+
| JobId | Level | JobFiles  | JobBytes        | StartTime           | VolumeName | StartFile |
+-------+-------+-----------+-----------------+---------------------+------------+-----------+
|     2 | F     | 8,759,302 | 821,505,476,167 | 2005-12-09 16:27:02 | 09L2       |         0 |
+-------+-------+-----------+-----------------+---------------------+------------+-----------+
You have selected the following JobId: 2
Building directory tree for JobId 2 ...
Query failed: SELECT MediaType FROM JobMedia,Media WHERE JobMedia.JobId=2 AND JobMedia.MediaId=Media.MediaId: ERR=Lost connection to MySQL server during query
There were no files inserted into the tree, so file selection is not possible. Most likely your retention policy pruned the files
Do you want to restore all the files? (yes|no):

When trying to restore from a smaller job, everything works fine:

+-------+-------+----------+----------------+---------------------+------------+-----------+
| JobId | Level | JobFiles | JobBytes       | StartTime           | VolumeName | StartFile |
+-------+-------+----------+----------------+---------------------+------------+-----------+
|     3 | F     | 833,832  | 52,358,007,603 | 2005-12-09 16:29:06 | 09L2       |        29 |
|     4 | I     | 1,434    | 322,179,540    | 2005-12-10 00:05:01 | 01L2       |         0 |
+-------+-------+----------+----------------+---------------------+------------+-----------+
You have selected the following JobIds: 3,4
Building directory tree for JobId 3 ... +
Building directory tree for JobId 4 ...
2 Jobs, 828,824 files inserted into the tree.

Have I hit any limitations in Bacula?
The system status was like this the moment before the crash:

last pid: 34607;  load averages: 1.24, 0.58, 0.25    up 3+19:19:56  08:57:34
59 processes:  6 running, 53 sleeping
Mem: 572M Active, 1156M Inact, 212M Wired, 64M Cache, 112M Buf, 3008K Free
Swap: 4096M Total, 500K Used, 4095M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
20584 mysql       9  20    0 57640K 33344K kserel 0 256:57 86.77% mysqld
31511 root        6  20    0  5608K  2304K kserel 0 144:10  0.00% bacula-sd
31532 root        6  20    0   518M   513M RUN    0  53:32  0.00% bacula-dir

Kern Sibbald wrote:

  This looks to me like a problem with MySQL. Did you by any chance load pre-built binaries rather than building on your system? My best guess at the moment is that Bacula is built with version X of MySQL and you are running with version Y on your machine. That means that some of the packet fields have moved or are aligned differently. Rebuilding Bacula from source directly on your machine should resolve this, if I am right ...

  To answer your question: yes, of course, you can do as many backups and restores at the same time as you want without Bacula crashing (provided it is working properly).

  On Wednesday 07 December 2005 22:57, rkvam wrote:

    Actually, even when there is no backup running, the restore job fails; anyone have a clue? Did I mess up the database by trying to restore while backing up? It is the same crash every time I try to restore the same job as the one I was running when it segfaulted.

    Automatically selected FileSet: Store

    +-------+-------+-----------+-----------------+---------------------+------------+-----------+
    | JobId | Level | JobFiles  | JobBytes        | StartTime           | VolumeName | StartFile |
    +-------+-------+-----------+-----------------+---------------------+------------+-----------+
    |    13 | F     | 8,743,518 | 819,499,095,510 | 2005-12-05 21:30:55 | 10L2       |       117 |
    |    18 | I     | 143,886   | 27,865,423,512  | 2005-12-07 00:15:07 | 01L2       |         9 |
    |    30 | I     | 36,820    | 21,111,393,490  | 2005-12-07 16:24:15 | 01L2       |        42 |
    +-------+-------+-----------+-----------------+---------------------+------------+-----------+
    You have selected the following JobIds: 13,18,30
    Building directory tree for JobId 13 ...
    Query failed: SELECT MediaType FROM JobMedia,Media WHERE JobMedia.JobId=13 AND JobMedia.MediaId=Media.MediaId: ERR=Lost connection to MySQL server during query
    Building directory tree for JobId 18 ...
    Building directory tree for JobId 30 ...
Re: [Bacula-users] Installation Problem
Are you running make or make install here (it looks like the latter)? You need to run make before make install. If that doesn't work, then to rule out any interaction with previous problems, I suggest you start afresh (from the source tar file into an empty directory), run your configure, and then invoke make without any options.

__Martin

---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files for problems? Stop! Download the new AJAX search engine that makes searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
[Bacula-users] Bacula client RPMs
Hi!

Is there a switch for rpmbuild that only builds the clients and only checks the dependencies for those? I didn't find anything checking the specfile, but maybe I missed it. It would be nice if I could just specify

  rpmbuild --rebuild --define "build_rh7 1" --define "client_only 1" bacula.src.rpm

because I only need to build one server but 4 different versions of the client. I just ran into the problem that RHAS2.1 (rh7-compatible) does not have most of the required packages ... they're simply not available for that OS release :(

Best Regards,
--
Daniel Holtkamp, Riege Software International GmbH
System Administration
Mollsfeld 10, 40670 Meerbusch, Germany
Phone: +49-2159-9148-41   Fax: +49-2159-9148-11
mail: holtkamp [at] riege.com
Re: [Bacula-users] Re: Segmentation violation
Hello,

I repeat what I said the last time: this looks like a MySQL problem. Given the new information you have presented, and the fact that you are having problems when working with almost 9 million files backed up (a lot), I would suspect that the problem is with your MySQL configuration. If you used the standard MySQL installation, it is probably not configured for such large databases. See below for more ...

On Monday 12 December 2005 10:52, Roger Kvam wrote:

  I tried upgrading to MySQL 5 and recompiling Bacula (yes, I did a make distclean), but I got the same error. I have now installed MySQL 4.1.15 again, recompiled Bacula, deleted the database, and deleted the tapes so I could start fresh. When trying to restore from a large job, Bacula crashes,

From the message you show below, it looks to me like MySQL crashed or at least disconnected.

  when trying to restore from a small job everything works fine. Does Bacula have shortcomings when it comes to enterprise systems?

Bacula is not what I would call an enterprise system, so one could say it has that kind of shortcoming. In your case, you are dealing with a quite large backup set, and it looks like you have not adapted MySQL to handle it appropriately. Perhaps some of the other users on the list can help you. If I am not mistaken, the Bacula documentation also mentions some steps you might take for large databases, but your best bet is to look at the MySQL documentation.
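[Editorial note] Kern's point about tuning MySQL for a large catalog can be made concrete. The "Lost connection to MySQL server during query" symptom on long-running queries is commonly addressed in my.cnf by raising packet-size and timeout limits; the values below are illustrative assumptions for a MySQL 4.1-era server, not tuned recommendations:

```
[mysqld]
max_allowed_packet = 32M     # 4.x default is 1M; large result rows need more
wait_timeout       = 28800   # seconds before an idle connection is dropped
net_read_timeout   = 600     # allow slow reads during huge tree builds
net_write_timeout  = 600
key_buffer_size    = 256M    # more index cache for a many-million-row File table
```

If mysqld itself is crashing rather than timing out, the MySQL error log (hostname.err in the data directory) is the place to confirm that.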
Re: [Bacula-users] Bacula BETA 1.38.3
Hello,

Volker Dierks wrote:

  Usually, I'd see if the problem can be reproduced with the existing system setup. If that's possible, I'd first check whether the actual cause might be purely SCSI-device related.

That's what I'm going to do first. I'll create the second pool again (with the same tapes) and put all nodes into that pool ...

I've done this tonight. In turn:

- The backup started as planned on drive two with the same tape as Thursday (the tape was already mounted, so no mtx operations took place).
- After some minutes (and 500 MB of data written to that tape) everything hung again, so I restarted everything and disabled that tape.
- I mounted the next tape and started the backup again. After 7 GB written to that tape (and 5 successfully backed-up nodes) I went to bed.

Up to here, it looked like the problems were truly caused by the tape. But this morning I got the following mail:

12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: block.c:538 Write error at 12:5438 on device Drive-2 (/dev/nst1). ERR=Input/output error.
12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1553 ioctl MTWEOF error on Drive-2 (/dev/nst1). ERR=No such device or address.
12-Dec 03:24 mw-mcs-sd: End of medium on Volume MW-MCS-1-12 Bytes=7,078,064,979 Blocks=109,722 at 12-Dec-2005 03:24.
12-Dec 03:24 mw-mcs-sd: 3301 Issuing autochanger loaded drive 1 command.
12-Dec 03:24 mw-mcs-sd: 3302 Autochanger loaded drive 1, result is Slot 12.
12-Dec 04:10 mw-mcs-sd: 3307 Issuing autochanger unload slot 12, drive 1 command.
12-Dec 04:14 mw-mcs-sd: 3995 Bad autochanger unload slot 13, drive 1: ERR=Child died from signal 15: Termination.
12-Dec 04:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 05:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 07:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 08:59 nfs-1-fd: nfs-1.2005-12-12_02.15.08 Fatal error: backup.c:498 Network send error to SD. ERR=Broken pipe
12-Dec 08:59 mw-mcs-dir: nfs-1.2005-12-12_02.15.08 Error: Bacula 1.38.2 (20Nov05): 12-Dec-2005 08:59:32

At 08:59 I stopped bacula-dir and -sd. The kernel log contains the same SCSI ABORT messages as posted before, starting at 02:54:

Dec 12 02:54:30 backup kernel: scsi1:0:5:0: Attempting to queue an ABORT message

The last thing I can imagine is this: all the tapes used in Drive-2 so far were previously used (by amanda). This is the way I recycled them:

  mt -f /dev/nst1 rewind
  mt -f /dev/nst1 setdensity 0x89
  mt -f /dev/nst1 rewind
  mt -f /dev/nst1 weof
  mt -f /dev/nst1 weof
  (then write the Bacula label)

Perhaps this is not the right way? I've attached our configuration and would be very thankful if someone can confirm that it's correct. It's the one-drive configuration pointing to Pool: DRIVE-2. When using this configuration against Pool: DRIVE-1 (all tapes in this pool are fresh new ones), everything works fine.

Volker

PS: I'm running mt -f /dev/nst1 erase on MW-MCS-1-12 at the moment. If this fails, I would say that drive two is faulty.
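[Editorial note] For comparison, an alternative recycling sequence is to write a single EOF at the beginning of the tape (which makes it look blank) and then let Bacula write its own label from bconsole. This is only a sketch; the device, volume, and slot names are taken from the thread, and the exact bconsole label syntax should be checked against your version:

```
mt -f /dev/nst1 rewind
mt -f /dev/nst1 weof      # one EOF at BOT; Bacula treats the tape as blank
mt -f /dev/nst1 rewind
# then, in bconsole, let Bacula write the volume label itself:
#   *label storage=Drive-2 volume=MW-MCS-1-12 slot=12
```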
Re: [Bacula-users] Bacula BETA 1.38.3
Sorry, I forgot to attach the file ... here it is.

Volker

[Attachment: conf.tgz -- GNU Unix tar archive]
Re: [Bacula-users] Fwd: Returned mail: see transcript for details
Hello,

Yes, several people including Volker pointed out that my reverse lookup (controlled by the ISP) pointed to a non-existent domain. I reported this to my ISP, Sunrise, and not only did they fix it within an hour, but they called me back to say that it was fixed!

On Monday 12 December 2005 12:37, Martin Simmons wrote:

  On Sun, 11 Dec 2005 11:14:30 +0100, Kern Sibbald [EMAIL PROTECTED] said:

  Kern For Volker Dierks,
  Kern Unless I am missing something, it appears that your mail server is incorrectly
  Kern configured. It is complaining about permanent DNS errors in my email
  Kern directed to you. I find it hard to believe that my site has no proper DNS
  Kern data. Below you will find your forwarded error message, and you will also
  Kern find a copy of my DNS data as obtained from another site:

  From here, 194.158.240.20 reverse resolves to matou.sibbald.ch (note: not .com), but matou.sibbald.ch. doesn't forward resolve to anything. I think that might be the problem.

  __Martin

  Kern -- Forwarded Message --
  Kern Subject: Returned mail: see transcript for details
  Kern Date: Sunday 11 December 2005 10:58
  Kern From: Mail Delivery Subsystem [EMAIL PROTECTED]
  Kern To: [EMAIL PROTECTED]
  Kern The original message was received at Sun, 11 Dec 2005 10:57:52 +0100
  Kern from rufus [192.168.68.112]
  Kern - The following addresses had permanent fatal errors -
  Kern [EMAIL PROTECTED]
  Kern (reason: 550 5.7.1 [EMAIL PROTECTED]... Use the mailserver of your
  Kern ISP. Your IP address [194.158.240.20] has no proper DNS data.)
  Kern - Transcript of session follows -
  Kern ... while talking to mail1.metaways.net.: DATA
  Kern 550 5.7.1 [EMAIL PROTECTED]... Use the mailserver of your ISP. Your
  Kern IP address [194.158.240.20] has no proper DNS data. 550 5.1.1
  Kern [EMAIL PROTECTED]...
  Kern User unknown
  Kern 503 5.0.0 Need RCPT (recipient)
  Kern ---
  Kern === DNS data for Kern's site from a Canadian machine ===
  Kern [EMAIL PROTECTED] kern]$ dig sibbald.com mx
  Kern ; DiG 9.3.1 sibbald.com mx
  Kern ;; global options: printcmd
  Kern ;; Got answer:
  Kern ;; HEADER opcode: QUERY, status: NOERROR, id: 2782
  Kern ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5
  Kern ;; QUESTION SECTION:
  Kern ;sibbald.com.                     IN  MX
  Kern ;; ANSWER SECTION:
  Kern sibbald.com.          86400  IN  MX  20 router.darlow.co.uk.
  Kern sibbald.com.          86400  IN  MX  10 matou.sibbald.com.
  Kern ;; AUTHORITY SECTION:
  Kern sibbald.com.          86400  IN  NS  dns.fourmilab.ch.
  Kern sibbald.com.          86400  IN  NS  ns5.dnsmadeeasy.com.
  Kern sibbald.com.          86400  IN  NS  ns6.dnsmadeeasy.com.
  Kern sibbald.com.          86400  IN  NS  ns7.dnsmadeeasy.com.
  Kern sibbald.com.          86400  IN  NS  matou.sibbald.com.
  Kern ;; ADDITIONAL SECTION:
  Kern matou.sibbald.com.    26266  IN  A   194.158.240.20
  Kern dns.fourmilab.ch.     26266  IN  A   193.8.230.138
  Kern ns5.dnsmadeeasy.com.  50860  IN  A   63.219.151.12
  Kern ns6.dnsmadeeasy.com.  62854  IN  A   64.246.42.203
  Kern ns7.dnsmadeeasy.com.  52221  IN  A   205.234.170.139
  Kern ;; Query time: 366 msec
  Kern ;; SERVER: 10.55.0.18#53(10.55.0.18)
  Kern ;; WHEN: Sun Dec 11 05:13:07 2005
  Kern ;; MSG SIZE rcvd: 287
  Kern --
  Kern Best regards,
  Kern Kern

--
Best regards,
Kern
( /\
  V_V
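[Editorial note] The reverse/forward mismatch Martin describes is easy to check from any shell with dig. Network-dependent, so shown only as a sketch; substitute your own address and hostname:

```
dig -x 194.158.240.20 +short   # reverse: IP -> name (here it returned matou.sibbald.ch.)
dig matou.sibbald.ch +short    # forward: the returned name should resolve back to the IP
```

Many mail servers reject senders whose reverse and forward records do not agree, which is exactly the 550 error quoted above.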
Re: [Bacula-users] A scheduling quirk
John Kodis wrote:

  I've been using the standard monthly-full, weekly-differential, and daily-incremental backup scheme that's provided by the bacula-supplied configuration files. This is all working fine, but I've now added enough clients that a full backup takes just over two days. After the full backup completes, it is immediately followed by the two daily incremental backups and the three catalog backups that queued up while the full backup was in progress. While it's only a minor waste of tape, it's still something that I'd like to avoid if I can do so easily. Is there a job directive or some other way to say "Don't schedule another instance of a job if the same job is already waiting to run"?

  -- John Kodis.

I'm not so sure this is a waste of resources. Only the files that were changed will be in those incrementals. The first one will catch all the files that changed while the full was running; the second one will be virtually empty. If you skip these backups and then a problem occurs the following day, you won't be able to restore to the most current situation, but only to the situation at the moment of the full backup.

Jo
Re: [Bacula-users] Problems with Bacula installation...
[EMAIL PROTECTED] wrote:

  The biggest problem I have is the following error message:

  Warning: Couldn't rewind device /dev/nst0 ERR=dev.c:406 Rewind error on /dev/nst0. ERR=Input/output error.

  I get this error message all the time, and of course my backups do not run due to this error.

I get this error periodically, using single drives (a VXA1 and an LTO1). Usually, it means that a tape has been changed but not unmounted first. If I execute an unmount and a mount on the drive in question, it invariably goes away. You might try this before you go relabelling tapes.

  The other problem I have is that bacula dies on a regular basis (usually when the full backups run), exactly when the first backup is scheduled to run. When I check the logs, I see:

  05-Dec 22:00 ganymede-dir: Start Backup JobId 1963, Job=Custweb2.2005-12-05_22.00.00

  and nothing else.

Does it dump core? Are you running with debugging enabled? These will probably be necessary to isolate this issue.

--
Phil Stracchino [EMAIL PROTECTED]
Renaissance Man, Unix generalist, Perl hacker
Mobile: 603-216-7037   Landline: 603-886-3518
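[Editorial note] In bconsole, Phil's workaround is just the following (the storage resource name is an assumption; use the one from your own configuration, visible via the status command):

```
*unmount storage=Tape
*mount storage=Tape
```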
Re: [Bacula-users] Problems with Bacula installation...
Hello,

[EMAIL PROTECTED] wrote:

  The biggest problem I have is the following error message:

  Warning: Couldn't rewind device /dev/nst0 ERR=dev.c:406 Rewind error on /dev/nst0. ERR=Input/output error.

As far as I have learned, ERR=Input/output error on the drive device means that the loaded tape has not been fully calibrated. I would try to increase the sleep time in mtx-changer (around line 125 - the load case).

  05-Dec 22:00 ganymede-dir: Start Backup JobId 1963, Job=Custweb2.2005-12-05_22.00.00

  and nothing else.

I have no idea on this one ...

Volker
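[Editorial note] The suggested change lands in mtx-changer's case statement. A sketch of what the edited load branch could look like; the variable names and the shipped sleep value are assumptions, so check them against the script actually installed on your system:

```
case $cmd in
   load)
      ${MTX} -f ${ctl} load ${slot} ${drive}
      sleep 60    # raised from the shipped value so a freshly loaded tape can finish calibrating
      ;;
esac
```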
[Bacula-users] MySQL backend database structure
Hello all,

I have a short question: is there any documentation (formal or otherwise) describing the SQL database structure (table structure, usage, etc.) used by the MySQL backend? The only help in that respect is the query.sql file, which gives some hints on how things are looked up in the bacula database. Not much else, though...

Also, a related question: how stable is that structure, i.e. are there any current/planned changes to that database?

TIA,
Florian

P.S. W.r.t. different SQL backends, I didn't check whether there are any differences between MySQL and e.g. PostgreSQL (I guess not). My question pertained to MySQL since that is what I am currently deploying.
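[Editorial note] In the absence of formal schema documentation, the schema itself is easy to inspect: the catalog is created by the make_mysql_tables script under src/cats in the Bacula source tree, and MySQL can describe the result directly. A sketch, assuming the default database name "bacula":

```sql
-- connect with: mysql bacula
SHOW TABLES;
DESCRIBE File;       -- one row per backed-up file, referencing Job, Path, Filename
DESCRIBE JobMedia;   -- maps Jobs to the Volumes (Media rows) they were written to
```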
RE: [Bacula-users] Problems with Bacula installation...
Ok, I have increased the sleep time in the mtx-changer file, so we will see what that does. However, I am not very hopeful, as most of the time when I get this error message the drive has been sitting for hours; I do a mount and it still happens...

-
Thank you,
Grant Della Vecchia
System Administrator
AIS Media, Inc.
7000 Central Parkway, Suite 1700
Atlanta, GA 30328
Tel: 770.350.7998 | Fax: 770.350.9409
URL: www.aismedia.com | Email: [EMAIL PROTECTED]
Re: [Bacula-users] A scheduling quirk
On Mon, Dec 12, 2005 at 02:05:16PM +0100, Jo wrote:

  I'm not so sure this is a waste of resources. Only the files that were changed will be in those incrementals. The first one will catch all the files that changed while the full was running. The second one will be virtually empty.

In my case, these would get run Sunday night, and so both would contain little beyond the log files that have changed. I'd rather not bother saving these twice, and could certainly do without the three near-identical copies of the catalog backup. In addition to the waste of tape, this creates a bunch of job entries that I don't care about.

  If you skip these backups and then a problem occurs the following day, you won't be able to restore to the most current situation, but only to the situation at the moment of the full backup.

That's a good reason to leave the default action as is, although in my situation I'd still like to be able to override it.

-- John Kodis.
Re: [Bacula-users] Bacula BETA 1.38.3
On Monday 12 December 2005 12:54, Volker Dierks wrote:

  Sorry, I forgot to attach the file ... here it is.

Well, you fell into a documentation error. Please remove the line from your Device resources that says:

  Maximum Changer Wait = 10 minutes

It is incorrect. Despite what the *old* documentation said, the time *must* be specified in seconds. If you want to set it to 10 minutes, you need to set it to 600. You probably also want to increase the sleep in mtx-changer ...

--
Best regards,
Kern
( /\
  V_V
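[Editorial note] Spelled out, the corrected Device entry would look something like this sketch (other directives omitted; names taken from the log excerpts earlier in the thread):

```
Device {
  Name = Drive-2
  Archive Device = /dev/nst1
  ...
  Maximum Changer Wait = 600   # in seconds, i.e. 10 minutes
}
```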
Re: [Bacula-users] A scheduling quirk
John Kodis wrote:

  In my case, these would get run Sunday night, and so both would contain little beyond the log files that have changed. I'd rather not bother saving these twice, and could certainly do without the three near-identical copies of the catalog backup. In addition to the waste of tape, this creates a bunch of job entries that I don't care about.

Why don't you change the schedule so no backups are made on Saturday and Sunday? Sorry for not being able to actually answer the question.

Jo
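[Editorial note] A Schedule along the lines Jo suggests could look like the following sketch (names and times are placeholders to adapt; it skips incrementals only on the first weekend, when the multi-day full is still running):

```
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full 1st sat at 20:05
  Run = Level=Incremental mon-fri at 23:05
  # later weekends only, so nothing queues behind the two-day full:
  Run = Level=Incremental 2nd-5th sat at 23:05
  Run = Level=Incremental 2nd-5th sun at 23:05
}
```

The "2nd-5th" week-of-month range syntax is the same style used in the sample bacula-dir.conf shipped with Bacula.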
Re: [Bacula-users] A scheduling quirk
On Mon, Dec 12, 2005 at 03:23:39PM +0100, Jo wrote:

  Why don't you change the schedule so no backups are made on Saturday and Sunday?

I may end up doing something like that (I'd only skip incrementals on the first Saturday and Sunday of the month), although I was hoping for a more general solution.

  Sorry for not being able to actually answer the question.

That's quite alright. It at least tells me that there's not an obvious solution that I've just overlooked, and I'm grateful for that alone. Thanks!

-- John Kodis.
[Bacula-users] Re: Segmentation violation
Well, the strange thing is that the crash happens every time bacula-dir reaches 516 MB of memory usage, while mysqld is steady at the same consumption as when it started. And we have used Bacula before (1.6) without these problems. Has anyone else tried Bacula with this number of files?

Kern Sibbald wrote:

  Hello, I repeat what I said the last time: this looks like a MySQL problem. Given the new information you have presented, and the fact that you are having problems when working with almost 9 million files backed up (a lot), I would suspect that the problem is with your MySQL configuration. If you used the standard MySQL installation, it is probably not configured for such large databases.
Now I have run several backups of several servers and everything works fine, until I wanted to do a restore from the biggest job, +---+---+---+-+-+-- --+---+ | JobId | Level | JobFiles | JobBytes| StartTime | VolumeName | StartFile | +---+---+---+-+-+-- --+---+ | 2 | F | 8,759,302 | 821,505,476,167 | 2005-12-09 16:27:02 | 09L2 | 0 | +---+---+---+-+-+-- --+---+ You have selected the following JobId: 2 Building directory tree for JobId 2 ... Query failed: SELECT MediaType FROM JobMedia,Media WHERE JobMedia.JobId=2 AND JobMedia.MediaId=Media.MediaId: ERR=Lost connection to MySQL server during query There were no files inserted into the tree, so file selection is not possible.Most likely your retention policy pruned the files Do you want to restore all the files? (yes|no): ¤¤ When trying to restore from a smaller job, everything works fine: +---+---+--++-+ +---+ | JobId | Level | JobFiles | JobBytes | StartTime | VolumeName | StartFile | +---+---+--++-+ +---+ | 3 | F | 833,832 | 52,358,007,603 | 2005-12-09 16:29:06 | 09L2 |29 | | 4 | I |1,434 |322,179,540 | 2005-12-10 00:05:01 | 01L2 | 0 | +---+---+--++-+ +---+ You have selected the following JobIds: 3,4 Building directory tree for JobId 3 ... + Building directory tree for JobId 4 ... 2 Jobs, 828,824 files inserted into the tree. Have I met any limitations in bacula? The system status was like this the moment before the crash: last pid: 34607; load averages: 1.24, 0.58, 0.25 up 3+19:19:56 08:57:34 59 processes: 6 running, 53 sleeping Mem: 572M Active, 1156M Inact, 212M Wired, 64M Cache, 112M Buf, 3008K Free Swap: 4096M Total, 500K Used, 4095M Free PID USERNAMETHR PRI NICE SIZERES STATE C TIME WCPU COMMAND 20584 mysql 9 200 57640K 33344K kserel 0 256:57 86.77% mysqld 31511 root 6 200 5608K 2304K kserel 0 144:10 0.00% bacula-sd 31532 root 6 200 518M 513M RUN0 53:32 0.00% bacula-dir Kern Sibbald wrote: This looks to me like a problem with MySQL. 
Did you by any chance load pre-built binaries rather than building on your system? My best guess at the moment is that Bacula was built with version X of MySQL and you are running with version Y on your machine. That means that some of the packet fields have moved or are aligned differently. Rebuilding Bacula from source directly on your machine should resolve this, if I am right ... To answer your question: yes, of course, you can do as many backups and restores at the same time as you want without Bacula crashing (provided it is working properly). On Wednesday 07 December
Re: [Bacula-users] FileDaemon vs manual transfer
20 GB is a small amount of data. I would suggest using rsync to get a local copy of the data (using compression and other features), and then copying the data to tape, hopefully with Bacula. -- Ferdinando Pasqualetti, G.T.Dati srl, Tel. 0557310862 - 3356172731 - Fax 055720143

[EMAIL PROTECTED] wrote on 12/12/2005 17.01.15: Hello all, I have the following robustness-related question: I am deploying Bacula as a multi-site backup solution, where a central tape storage gets relatively large quantities of data (approx. 20 GB per day) from multiple sites via net links of dubious reliability. I am considering two possible options for getting the data to the storage server: 1) Install a File Daemon on each server to be backed up and let it pump the data over the net to the storage server. 2) Have the director (running on the same machine as the storage server) run a script that tars the data over the network into a local temp directory and -- if the connection didn't break in the meantime -- dump the result to tape. My question is: While in alternative 2) I can check for connection losses in the middle of the transfer and restart the data transfer, I have no idea how the File Daemon behaves in the same situation: Does it save any state locally that makes it robust to connectivity losses? Can it resume an interrupted job? How resilient is the storage server to such an interruption in the data stream from the client? Can I compress the data as it is sent from the File Daemon to the Storage daemon? TIA for any hints and pointers, Florian

--- This SF.net email is sponsored by: Splunk Inc. Do you grep through log files for problems? Stop! Download the new AJAX search engine that makes searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click ___ Bacula-users mailing list Bacula-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bacula-users
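Ferdinando's stage-then-backup suggestion could be sketched as a small shell function; the remote host, login, and paths are hypothetical. The rsync flags do the heavy lifting over a flaky link: -z compresses over the wire, and --partial keeps interrupted transfers so a retry resumes instead of starting over.

```shell
#!/bin/sh
# Stage a remote site's data into a local directory with rsync; Bacula
# then backs up the local copy on the storage server's own schedule.
stage_site() {
  src=$1   # e.g. backupuser@remote-site:/srv/data/  (hypothetical)
  dst=$2   # e.g. /var/spool/staging/remote-site/    (hypothetical)
  mkdir -p "$dst"
  # Retry a few times to ride out short link outages before giving up.
  for attempt in 1 2 3; do
    rsync -az --partial --delete "$src" "$dst" && return 0
    sleep 5
  done
  return 1
}
```

The Bacula FileSet would then simply include the staging directory, so a broken net link never interrupts a running tape job.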
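To Florian's compression question: Bacula's software compression is configured per FileSet and is performed by the File Daemon before the data crosses the wire. A minimal sketch of the relevant resource; the FileSet name and path are hypothetical:

```
FileSet {
  Name = "RemoteSite"            # hypothetical
  Include {
    Options {
      signature = MD5
      compression = GZIP         # FD compresses before sending to the SD
    }
    File = /srv/data             # hypothetical path
  }
}
```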
Re: [Bacula-users] Bacula BETA 1.38.3
On Sun, 11 Dec 2005, Kern Sibbald wrote: The bscan problem that I found caused it to generate a JobMedia record in the database that had an end FileIndex one less than it should have been. This was the last record on a Volume, and the record was continued on the next Volume. When Bacula constructed a bsr, the optimization code had this off-by-one problem, so when the restore job ran, the last (partial) record on the first tape was ignored. When the restore job got the second tape up, after reading the first (partial) record, it realized that the first part of the record from the first Volume was not there, so my insanity check code aborted. That sounds about right. Perhaps it needed larger tape spools to trigger? AB
[Bacula-users] Error waiting to reserve a device after upgrade to 1.38.3
Hi all, A couple of weeks ago I upgraded my Bacula installation from 1.34 to 1.38.1. I made a few other changes at the same time and everything has been working well since. Yesterday I decided to upgrade to 1.38.3. I built from source using the same configuration options that I used to build 1.38.1, with the addition of --with-python. The build went OK, no errors. I stopped 1.38.1 and started 1.38.3. Everything seemed to be OK. I ran a couple of small test backups and there were no errors, so I assumed that the upgrade went fine. Last night the scheduled backup ran, and after the first job, instead of continuing on to the next job, I got this message:

12-Dec 07:29 MyJob-SMB-sd: Job MyJob.2005-12-12_01.05.01 waiting to reserve a device.

This morning, when I first saw this message, I just did a mount from bconsole and the job continued. I have Bacula configured for 6 jobs per media and the tape wasn't full, so the job should have just started as it always has. Now it's time to back up the catalog, to file, and I'm getting the same message. I have an HP DDS2 drive, no changer, running on Slackware 10. I'm using the same conf files that worked fine on 1.38.1. Can anyone tell me why I'm now getting this error message? Thanks, Rick Knight
Re: [Bacula-users] Bacula BETA 1.38.3
Hello, Kern Sibbald schrieb: On Monday 12 December 2005 12:54, Volker Dierks wrote: Sorry, I forgot to attach the file ... here it is. Well, you fell into a documentation error. Please remove the line from your Device resources that says: Maximum Changer Wait = 10 minutes -- it is incorrect. Despite what the *old* documentation said, the time *must* be specified in seconds. Yes. ;-) Even better: also edit the script to use the wait_for_tape function (or whatever it's called). Assuming you can reliably get the tape status, this is a much cleaner solution - I've seen tape load times from a few seconds to some minutes on one and the same drive (DLT, admittedly :-), but I prefer waiting until a drive is ready over a fixed timeout. Also, in case the safety timeout in the function waiting for the tape to be loaded ever triggers, it is useful to write some log data, especially containing TapeAlert data. I think that, given a sufficiently long emergency timeout, almost all cases of a tape not recognized by the tape drive indicate a serious failure. For example, my configuration: - in mtx-changer's wait_for_drive function, I poll every three seconds for some hundred times. - In bacula-sd.conf, I set a timeout of 20 minutes or so. This should never be reached, of course. If you want to set it to 10 minutes you need to set it to 600. You probably also want to increase the sleep in mtx-changer ... Definitely... Arno -- IT-Service Lehmann [EMAIL PROTECTED] Arno Lehmann http://www.its-lehmann.de
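Arno's poll-instead-of-sleep approach can be sketched as a small shell function. The device path, the retry count, and the exact string that `mt status` prints when a tape is ready are all assumptions that vary by OS and drive; adjust to what your mt implementation reports:

```shell
#!/bin/sh
# Poll the tape drive until it reports ready, instead of a fixed sleep.
# Assumes `mt -f <device> status` prints ONLINE once the tape is loaded
# (the exact string differs between mt implementations).
wait_for_drive() {
  device=$1
  tries=0
  while [ "$tries" -lt 300 ]; do          # ~15 minutes at 3 s per poll
    if mt -f "$device" status 2>/dev/null | grep -q ONLINE; then
      return 0                            # drive is ready
    fi
    sleep 3
    tries=$((tries + 1))
  done
  echo "wait_for_drive: timeout waiting for $device" >&2   # log before giving up
  return 1
}
```

The matching safety net in bacula-sd.conf would then be something like `Maximum Changer Wait = 600` (seconds, not minutes, as Kern notes above).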
RE: [Bacula-users] Bacula BETA 1.38.3
FYI, I haven't had time to look into it much, but I have been seeing errors with my autochanger since 1.38.1 that I had never seen with 1.36.*, and they look a lot like these. As Kern said, it is as if something is missing from the log; see:

04-Dec 03:34 bug-sd: End of Volume NJO008D at 80:11492 on device Drive-1 (/dev/nst0). Write of 64512 bytes got -1.
04-Dec 03:35 bug-sd: spider.2005-12-04_03.05.04 Error: Re-read of last block failed. Last block=80530 Current block=14717.
04-Dec 03:35 bug-sd: End of medium on Volume NJO008D Bytes=45,428,287,520 Blocks=704,222 at 04-Dec-2005 03:35.
04-Dec 03:35 bug-sd: 3301 Issuing autochanger loaded drive 0 command.
04-Dec 03:35 bug-sd: 3302 Autochanger loaded drive 0, result is Slot 8.
04-Dec 03:35 bug-sd: 3307 Issuing autochanger unload slot 8, drive 0 command.
04-Dec 03:35 bug-sd: 3995 Bad autochanger unload slot 9, drive 0: ERR=Child exited with code 1.
04-Dec 03:35 bug-sd: Please mount Volume NJO009D on Storage Device Drive-1 (/dev/nst0) for Job spider.2005-12-04_03.05.04

Rob

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald Sent: Monday, December 12, 2005 9:20 AM To: bacula-users@lists.sourceforge.net Cc: Volker Dierks Subject: Re: [Bacula-users] Bacula BETA 1.38.3

On Monday 12 December 2005 12:52, Volker Dierks wrote: Hello, Volker Dierks wrote: Usually, I'd see if the problem can be reproduced with the existing system setup. If that's possible, I'd first check if the actual cause might be purely SCSI device related. That's what I'm going to do first. I'll create the second pool again (with the same tapes) and put all nodes into that pool ... I've done this tonight .. in turn: - the backup started as planned on drive two with the same tape as Thursday (the tape was already mounted, so no mtx stuff took place) - after some minutes (and 500 MB of data written to that tape) everything hangs again ..
so I restarted everything and disabled that tape - I mounted the next tape and started the backup again. After 7 GB of data written to that tape (and 5 successfully backed-up nodes) I went to bed. Until here, it looked like the problems were truly caused by the tape. But this morning I got the following mail:

12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: block.c:538 Write error at 12:5438 on device Drive-2 (/dev/nst1). ERR=Input/output error.
12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1553 ioctl MTWEOF error on Drive-2 (/dev/nst1). ERR=No such device or address.

Unless you have 7 GB tapes, this looks like a hardware problem: bad media, dirty tape drive, bad drive, bad SCSI cables (or improperly installed), bad SCSI card, ... These kinds of problems typically generate a number of kernel (SCSI) messages in the log.

12-Dec 03:24 mw-mcs-sd: End of medium on Volume MW-MCS-1-12 Bytes=7,078,064,979 Blocks=109,722 at 12-Dec-2005 03:24.
12-Dec 03:24 mw-mcs-sd: 3301 Issuing autochanger loaded drive 1 command.
12-Dec 03:24 mw-mcs-sd: 3302 Autochanger loaded drive 1, result is Slot 12.
12-Dec 04:10 mw-mcs-sd: 3307 Issuing autochanger unload slot 12, drive 1 command.
12-Dec 04:14 mw-mcs-sd: 3995 Bad autochanger unload slot 13, drive 1: ERR=Child died from signal 15: Termination.

This looks like you don't have your autochanger script properly configured, as one user pointed out -- setting the sleep longer may help. However, I do not understand why in one message it says unload slot 12, then on the next line it says unload slot 13 ... ERR. There seems to be something missing, as Bacula will normally issue a loaded drive or load a drive before unloading it a second time.
12-Dec 04:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 05:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 07:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 08:59 nfs-1-fd: nfs-1.2005-12-12_02.15.08 Fatal error: backup.c:498 Network send error to SD. ERR=Broken pipe
12-Dec 08:59 mw-mcs-dir: nfs-1.2005-12-12_02.15.08 Error: Bacula 1.38.2 (20Nov05): 12-Dec-2005 08:59:32

At 08:59 I stopped bacula-dir and -sd. The kernel log contains the same SCSI ABORT messages as posted before, starting at 02:54:

Dec 12 02:54:30 backup kernel: scsi1:0:5:0: Attempting to queue an ABORT message

If you are getting SCSI ABORT messages, then either there is some hardware problem or the Bacula Device resource is not set up right for that drive. Did you pass *all* the tests in the Tape Testing chapter? The last thing I can imagine is: all tapes which were used in Drive-2 up to now were previously used (by amanda). This is the way I recycled them: mt -f /dev/nst1 rewind mt -f
Re: [Bacula-users] A scheduling quirk
Hello, John Kodis schrieb: On Mon, Dec 12, 2005 at 03:23:39PM +0100, Jo wrote: Why don't you change the schedule so no backups are made on Saturday and Sunday? I may end up doing something like that (I'd only skip incrementals on the first Saturday and Sunday of the month), although I was hoping for a more general solution. No problem... Sorry for not being able to actually answer the question. That's quite alright. It at least tells me that there's not an obvious solution that I've just overlooked, and I'm grateful for that alone. Thanks! Unfortunately, you did miss something :-P From the manual, section Configuring the Director, subsection Jobs:

Max Start Delay = time -- The time specifies the maximum delay between the scheduled time and the actual start time for the Job. For example, a job can be scheduled to run at 1:00am, but because other jobs are running, it may wait to run. If the delay is set to 3600 (one hour) and the job has not begun to run by 2:00am, the job will be canceled. This can be useful, for example, to prevent jobs from running during daytime hours. The default is 0, which indicates no limit.

You've only got to think the other way: let the job start, but when it waits too long, have it canceled automatically. You can also achieve the desired result with the correct setup of the Maximum Concurrent Jobs setting in different places, by the way. Hope this helps, Arno -- John Kodis. -- IT-Service Lehmann [EMAIL PROTECTED] Arno Lehmann http://www.its-lehmann.de
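Applied to the scheduling question above, the directive Arno quotes goes in the Job resource of bacula-dir.conf; the job and schedule names here are hypothetical:

```
Job {
  Name = "WeekendIncremental"     # hypothetical
  Schedule = "WeeklyCycle"        # hypothetical
  # Cancel the job if it has not actually started within one hour
  # of its scheduled time (value is in seconds).
  Max Start Delay = 3600
}
```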
Re: [Bacula-users] Problems with Bacula installation...
One idea: [EMAIL PROTECTED] schrieb: Ok, I have increased the sleep time in the mtx-changer file, so we will see what that does. However, I am not very hopeful, as most of the time when I get this error message it has been sitting for hours; I do a mount and it still happens... Use the function wait_for_tape, or whatever it's called, in the mtx-changer script. And, in case you still have problems, implement some logging in that script. Worked wonders for me :-) Arno - Thank you, Grant Della Vecchia System Administrator AIS Media, Inc. 7000 Central Parkway, Suite 1700 Atlanta, GA 30328 Tel: 770.350.7998 | Fax: 770.350.9409 URL: www.aismedia.com | Email: [EMAIL PROTECTED] -- -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Volker Dierks Sent: Monday, December 12, 2005 8:12 AM To: bacula-users@lists.sourceforge.net Subject: Re: [Bacula-users] Problems with Bacula installation... Hello, [EMAIL PROTECTED] wrote: The biggest problem I have is the following error message: Warning: Couldn't rewind device /dev/nst0 ERR=dev.c:406 Rewind error on /dev/nst0. ERR=Input/output error. As far as I have learned, ERR=Input/output error on the drive device means that the loaded tape has not been fully calibrated. I would try to increase the sleep time in mtx-changer (around line 125 - the load case). 05-Dec 22:00 ganymede-dir: Start Backup JobId 1963, Job=Custweb2.2005-12-05_22.00.00 and nothing else… I have no idea on this ... Volker
-- IT-Service Lehmann [EMAIL PROTECTED] Arno Lehmann http://www.its-lehmann.de
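Arno's "implement some logging" suggestion can be as little as a helper near the top of mtx-changer that records every invocation with its arguments; the log path is hypothetical, and the argument order follows the mtx-changer convention of changer device, command, slot, archive device:

```shell
#!/bin/sh
# Minimal logging helper for an mtx-changer-style script: record every
# invocation so changer failures can be reconstructed afterwards.
LOG=/var/log/mtx-changer.log          # hypothetical location
log_call() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') ctl=$1 cmd=$2 slot=$3 dev=$4" >> "$LOG"
}
# Near the top of mtx-changer you would then add:
#   log_call "$@"
```

With that in place, a "Bad autochanger unload" error leaves behind the exact command and slot the SD asked for, which is usually the fastest way to tell a script problem from a hardware one.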
Re: [Bacula-users] MySQL backend database structure
Hello, Florian Daniel Otel schrieb: Hello all, I have a short question: Is there any documentation (formal or otherwise) describing the SQL database structure (table structure, usage, etc.) used by the MySQL backend? The only help in that respect is the query.sql file, which gives some hints on how things are looked up in the Bacula database. Not much else, though... The developers' manual might have more information. Also, a related question: How stable is that structure, i.e. are there any current/planned changes to that database? Usually, that structure is rather stable. As far as I know, Kern only intends to change it when the major version of Bacula increases, e.g. 1.36-1.37/1.38. (Note the use of 'intends' and not 'allows'...) He usually provides the necessary upgrade scripts. There are no plans for modifying the catalog schema, as far as I know. You will notice that there are many fields (columns, or whatever you call them) in there that are not yet used but rather are meant for future implementation. When reading the Projects file, you will also find that there are some planned enhancements which will require further catalog changes. Or, in short: if you want to implement some functionality which needs the catalog in a certain form, you should discuss this on the developers list. On the other hand, a report script I wrote works, as far as I know, correctly with the current and the last catalog version, and it works with PostgreSQL and MySQL. This indicates that the 'vital' part of the catalog is quite stable, I think. Arno TIA, Florian P.S. Wrt different SQL backends, I didn't check if there are any differences between MySQL and e.g. PostgreSQL (I guess not). My question pertained to MySQL since that is what I am currently deploying.
-- IT-Service Lehmann [EMAIL PROTECTED] Arno Lehmann http://www.its-lehmann.de
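As an illustration of the kind of lookup query.sql encodes: the JobMedia/Media join below is the same one bconsole runs when building a restore tree (it appears verbatim in the error message earlier in this digest), extended here to also list the volumes a job spans. Table and column names are from the standard Bacula catalog; the JobId is of course just an example:

```sql
-- Which volumes does JobId 2 span, and on what media type?
SELECT Media.VolumeName, Media.MediaType, JobMedia.StartFile
FROM JobMedia, Media
WHERE JobMedia.JobId = 2
  AND JobMedia.MediaId = Media.MediaId;
```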
Re: [Bacula-users] Re: Bacula BETA 1.38.3
Hmmm. I think I am going to pack it in and take a year's vacation ... On Monday 12 December 2005 20:21, Arno Lehmann wrote: Hi, Arno Lehmann schrieb: Hi, Kern Sibbald schrieb: ... Anyway, I would really appreciate it if you could try the 1.38.3 beta. Please use the released tar file rather than the CVS -- the HEAD CVS is now slightly behind the 1.38 branch in terms of fixes ... I'll download it in a few minutes... Although I haven't been able to reproduce the pool problems you are having, I have been working on it nevertheless. When I was able to duplicate the two-drive problem of jobs not running simultaneously on both drives, I took the time to review the reservation system in detail, and I found a few places where there *may* have been problems. I should see if version 1.38.3beta fixes my problem on Tuesday - that's still my jobs-to-different-pools day. I'll report what happens. And, of course, I've got one run of straight and usual backups before that, although I don't think there will be any problems tomorrow. Unfortunately, I have to report something rather unexpected. The device reservation system works a little too well ;-) I started my backup server this morning (or rather, it started itself). All tape drives Bacula itself loads were empty, and the magazines were filled with the correct volumes. Basically, the same situation I have had five out of seven days for about a year now, with many versions of Bacula. Jobs were scheduled as usual and got triggered as usual, too. After that, not much progress since this morning, 8:20 (almost 12 hours ago):

12-Dec 11:21 goblin-sd: Job DracheStd.2005-12-12_08.20.00 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job Goblin.2005-12-12_08.25.00 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job Ork.2005-12-12_08.25.03 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job ElfSys.2005-12-12_08.25.01 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job ElfHome.2005-12-12_08.25.02 waiting to reserve a device.
12-Dec 15:21 goblin-sd: Job DracheStd.2005-12-12_08.20.00 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job Goblin.2005-12-12_08.25.00 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job Ork.2005-12-12_08.25.03 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job ElfSys.2005-12-12_08.25.01 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job ElfHome.2005-12-12_08.25.02 waiting to reserve a device.

These are the latest messages. Or, in other words, since I upgraded to 1.38.3 beta, nothing has been backed up at all. The current director status is:

#sta dir
goblin-dir Version: 1.38.3 (09 December 2005) i586-pc-linux-gnu suse 8.1
Daemon started 12-Dec-05 07:32, 1 Job run since started.

Scheduled Jobs:
Level         Type    Pri  Scheduled        Name           Volume
===================================================================
Incremental   Backup   10  13-Dec-05 08:20  DracheStd      DLT-IV-0063
Incremental   Backup   10  13-Dec-05 08:20  BeowulfStd     DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  Goblin         DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  ElfSys         DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  ElfHome        DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  Ork            *unknown*
Differential  Backup   30  13-Dec-05 08:25  GoblinDB       DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:30  BackupMail     QIC-525-0046
Full          Backup  100  13-Dec-05 08:33  BackupCatalog  DAT-120-0006
              Admin   999  13-Dec-05 09:00  Shutdown

Running Jobs:
 JobId Level    Name                              Status
======================================================================
  4443 Increme  DracheStd.2005-12-12_08.20.00     is waiting on Storage HPDAT
  4445 Increme  Goblin.2005-12-12_08.25.00        is waiting on Storage HPDAT
  4446 Increme  ElfSys.2005-12-12_08.25.01        is waiting on Storage HPDAT
  4447 Increme  ElfHome.2005-12-12_08.25.02       is waiting on Storage HPDAT
  4448 Increme  Ork.2005-12-12_08.25.03           is waiting on Storage HPDAT
  4449 Increme  GoblinDB.2005-12-12_08.25.04      is waiting for higher priority jobs to finish
  4450 Differe  BackupMail.2005-12-12_08.30.00    is waiting on max Client jobs
  4451 Full     BackupCatalog.2005-12-12_08.34.00 is waiting execution
  4452          Shutdown.2005-12-12_09.00.00      is waiting execution

Terminated Jobs:
 JobId Level    Files        Bytes  Status  Finished         Name
======================================================================
  4434 Incr       152    5,453,504  OK      11-Dec-05 08:54  Goblin
  4435 Incr       277  323,115,310  OK      11-Dec-05 08:59  ElfSys
  4437 Incr       880  302,413,714  OK      11-Dec-05 08:59  Ork
  4439 Diff       872
Re: [Bacula-users] Bacula BETA 1.38.3
On Monday 12 December 2005 20:10, Rob wrote: FYI, I haven't had time to look into it much, but I have been seeing errors with my auto changer since 1.38.1 that I had never seen with 1.36.* before that look a lot like these. As Kern said, as if something seems to be missing from the log, see: 04-Dec 03:34 bug-sd: End of Volume NJO008D at 80:11492 on device Drive-1 (/dev/nst0). Write of 64512 bytes got -1. 04-Dec 03:35 bug-sd: spider.2005-12-04_03.05.04 Error: Re-read of last block failed. Last block=80530 Current block=14717. 04-Dec 03:35 bug-sd: End of medium on Volume NJO008D Bytes=45,428,287,520 Blocks=704,222 at 04-Dec-2005 03:35. 04-Dec 03:35 bug-sd: 3301 Issuing autochanger loaded drive 0 command. 04-Dec 03:35 bug-sd: 3302 Autochanger loaded drive 0, result is Slot 8. 04-Dec 03:35 bug-sd: 3307 Issuing autochanger unload slot 8, drive 0 command. 04-Dec 03:35 bug-sd: 3995 Bad autochanger unload slot 9, drive 0: ERR=Child exited with code 1. 04-Dec 03:35 bug-sd: Please mount Volume NJO009D on Storage Device Drive-1 (/dev/nst0) for Job spider.2005-12-04_03.05.04 I'm beginning to think that the error message that edits the slot number is just broken. The error you are seeing is because there is a problem with your mtx-changer script. The error the previous person was seeing was because of a misconfiguration (due to incorrect documentation). Rob -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald Sent: Monday, December 12, 2005 9:20 AM To: bacula-users@lists.sourceforge.net Cc: Volker Dierks Subject: Re: [Bacula-users] Bacula BETA 1.38.3 On Monday 12 December 2005 12:52, Volker Dierks wrote: Hello, Volker Dierks wrote: Usually, I'd see if the problem can be reproduced with the existing system setup. If that's possible, I'd first check if the actual cause might be purely SCSI device related. That's what I'm going to do first. I'll create the second pool again (with the same tapes) and put all nodes into that pool ... 
I've done this tonight .. in turn: - the backup up started as planned on drive two with the same tape as Thursday (the tape was already mounted so no mtx stuff take place) - after some minutes (and 500 MB written data on that tape) everything hangs again .. so I restarted everything and disabled that tape - I mounted the next tape and started the backup again. After 7 GB of written data to that tape (and 5 successful backuped nodes) I got to bed. Until here, it lookes like the problems were truly caused by the tape. But this morning I got the following mail: 12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: block.c:538 Write error at 12:5438 on device Drive-2 (/dev/nst1). ERR=Input/output error. 12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1553 ioctl MTWEOF error on Drive-2 (/dev/nst1). ERR=No such device or address. 12-Dec 03:24 Unless you have 7GB tapes, this looks like a hardware problem: bad media, dirty tape drive, bad drive, bad SCSI cables (or improperly installed), bad SCSI card, ... These kinds of problems typically generate a number of kernel (SCSI) messages in the log. mw-mcs-sd: End of medium on Volume MW-MCS-1-12 Bytes=7,078,064,979 Blocks=109,722 at 12-Dec-2005 03:24. 12-Dec 03:24 mw-mcs-sd: 3301 Issuing autochanger loaded drive 1 command. 12-Dec 03:24 mw-mcs-sd: 3302 Autochanger loaded drive 1, result is Slot 12. 12-Dec 04:10 mw-mcs-sd: 3307 Issuing autochanger unload slot 12, drive 1 command. 12-Dec 04:14 mw-mcs-sd: 3995 Bad autochanger unload slot 13, drive 1: ERR=Child died from signal 15: Termination. This looks like you don't have your autochanger script properly configured as one user pointed out -- setting the sleep longer may help. However, I do not understand why in one message it says unload slot 12, then on the next line it says unload slot 13 ... ERR. 
There seems to be something missing as Bacula will normally issue a loaded drive or load a drive before unloading it for a second time. 12-Dec 04:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08 12-Dec 05:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08 12-Dec 07:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08 12-Dec 08:59 nfs-1-fd: nfs-1.2005-12-12_02.15.08 Fatal error: backup.c:498 Network send error to SD. ERR=Broken pipe 12-Dec 08:59 mw-mcs-dir: nfs-1.2005-12-12_02.15.08 Error: Bacula 1.38.2 (20Nov05): 12-Dec-2005 08:59:32 At 08:59 I stopped bacula-dir and -sd. The kernel-Log contains the same SCSI ABORT messages as posted before starting at 02:54: Dec 12 02:54:30 backup kernel: scsi1:0:5:0:
Re: [Bacula-users] Bacula BETA 1.38.3
Kern Sibbald schrieb: On Monday 12 December 2005 20:10, Rob wrote: FYI, I haven't had time to look into it much, but I have been seeing errors with my auto changer since 1.38.1 that I had never seen with 1.36.* before that look a lot like these. As Kern said, as if something seems to be missing from the log, see: 04-Dec 03:34 bug-sd: End of Volume NJO008D at 80:11492 on device Drive-1 (/dev/nst0). Write of 64512 bytes got -1. 04-Dec 03:35 bug-sd: spider.2005-12-04_03.05.04 Error: Re-read of last block failed. Last block=80530 Current block=14717. 04-Dec 03:35 bug-sd: End of medium on Volume NJO008D Bytes=45,428,287,520 Blocks=704,222 at 04-Dec-2005 03:35. 04-Dec 03:35 bug-sd: 3301 Issuing autochanger loaded drive 0 command. 04-Dec 03:35 bug-sd: 3302 Autochanger loaded drive 0, result is Slot 8. 04-Dec 03:35 bug-sd: 3307 Issuing autochanger unload slot 8, drive 0 command. 04-Dec 03:35 bug-sd: 3995 Bad autochanger unload slot 9, drive 0: ERR=Child exited with code 1. 04-Dec 03:35 bug-sd: Please mount Volume NJO009D on Storage Device Drive-1 (/dev/nst0) for Job spider.2005-12-04_03.05.04 I'm beginning to think that the error message that edits the slot number is just broken. The error you are seeing is because there is a problem with your mtx-changer script. The error the previous person was seeing was because of a misconfiguration (due to incorrect documentation). Sorry, but I've fooled you. The Maximum Changer Wait = ... option has been added to the attached configuration this morning. Everything posted down there, was without this configuration directive. Sorry ... 
Volker Rob -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald Sent: Monday, December 12, 2005 9:20 AM To: bacula-users@lists.sourceforge.net Cc: Volker Dierks Subject: Re: [Bacula-users] Bacula BETA 1.38.3 On Monday 12 December 2005 12:52, Volker Dierks wrote: Hello, Volker Dierks wrote: Usually, I'd see if the problem can be reproduced with the existing system setup. If that's possible, I'd first check if the actual cause might be purely SCSI device related. That's what I'm going to do first. I'll create the second pool again (with the same tapes) and put all nodes into that pool ... I've done this tonight .. in turn: - the backup up started as planned on drive two with the same tape as Thursday (the tape was already mounted so no mtx stuff take place) - after some minutes (and 500 MB written data on that tape) everything hangs again .. so I restarted everything and disabled that tape - I mounted the next tape and started the backup again. After 7 GB of written data to that tape (and 5 successful backuped nodes) I got to bed. Until here, it lookes like the problems were truly caused by the tape. But this morning I got the following mail: 12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: block.c:538 Write error at 12:5438 on device Drive-2 (/dev/nst1). ERR=Input/output error. 12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1553 ioctl MTWEOF error on Drive-2 (/dev/nst1). ERR=No such device or address. 12-Dec 03:24 Unless you have 7GB tapes, this looks like a hardware problem: bad media, dirty tape drive, bad drive, bad SCSI cables (or improperly installed), bad SCSI card, ... These kinds of problems typically generate a number of kernel (SCSI) messages in the log. mw-mcs-sd: End of medium on Volume MW-MCS-1-12 Bytes=7,078,064,979 Blocks=109,722 at 12-Dec-2005 03:24. 
12-Dec 03:24 mw-mcs-sd: 3301 Issuing autochanger loaded drive 1 command.
12-Dec 03:24 mw-mcs-sd: 3302 Autochanger loaded drive 1, result is Slot 12.
12-Dec 04:10 mw-mcs-sd: 3307 Issuing autochanger unload slot 12, drive 1 command.
12-Dec 04:14 mw-mcs-sd: 3995 Bad autochanger unload slot 13, drive 1: ERR=Child died from signal 15: Termination.

This looks like you don't have your autochanger script properly configured, as one user pointed out; setting the sleep longer may help. However, I do not understand why in one message it says unload slot 12, then on the next line it says unload slot 13 ... ERR. There seems to be something missing, as Bacula will normally issue a "loaded drive" or load a drive before unloading it a second time.

12-Dec 04:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 05:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 07:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job nfs-1.2005-12-12_02.15.08
12-Dec 08:59 nfs-1-fd: nfs-1.2005-12-12_02.15.08 Fatal error: backup.c:498 Network send error to SD. ERR=Broken pipe
12-Dec 08:59 mw-mcs-dir: nfs-1.2005-12-12_02.15.08 Error: Bacula 1.38.2 (20Nov05):
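[Editorial note: the "setting the sleep longer" advice above refers to the fixed delay a site-modified mtx-changer script waits after loading a tape. A minimal sketch of a readiness poll that could replace such a fixed sleep; the function name, the 300-second cap, and the "ONLINE" text that `mt status` prints are assumptions for a typical Linux st driver, not taken from the stock script:]

```shell
# Poll the tape drive after a changer load instead of sleeping a fixed
# number of seconds. Returns 0 once "mt status" reports ONLINE, or 1
# after roughly 300 seconds without the drive becoming ready.
wait_for_drive() {
  device=$1
  tries=0
  while [ "$tries" -lt 300 ]; do
    if mt -f "$device" status 2>/dev/null | grep -q ONLINE; then
      return 0    # drive reports ready
    fi
    sleep 1
    tries=`expr $tries + 1`
  done
  return 1        # timed out waiting for the drive
}
```

Called as `wait_for_drive /dev/nst0` right after the `mtx ... load` step, this avoids both the too-short sleep (Bacula reads an unready drive) and a needlessly long one.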
Re: [Bacula-users] Re: Bacula BETA 1.38.3
Kern Sibbald wrote: Hmmm. I think I am going to pack it in and take a year's vacation ...

Damn :-) You _did_ notice Richard Knight's mail, right? By the way, I have left the backup server in the state I described. Logging is turned off because my limited disk space would soon cause more trouble, but I *can* still connect to the DIR and issue commands... by temporarily turning on debug logging, this might give some clues. Apart from that, I'm seriously considering installing gdb on that machine... Arno

On Monday 12 December 2005 20:21, Arno Lehmann wrote: Hi, Arno Lehmann wrote: Hi, Kern Sibbald wrote: ... Anyway, I would really appreciate it if you could try the 1.38.3 beta. Please use the released tar file rather than the CVS; the HEAD CVS is now slightly behind the 1.38 branch in terms of fixes ... I'll download it in a few minutes... Although I haven't been able to reproduce the pool problems you are having, I have been working on it nevertheless. When I was able to duplicate the two-drive problem of not running jobs simultaneously on both drives, I took the time to review the reservation system in detail, and I found a few places where there *may* have been problems. I should see whether version 1.38.3beta fixes my problem on Tuesday; that's still my jobs-to-different-pools day. I'll report what happens. And, of course, I've got one run of straight and usual backups before that, although I don't think there will be any problems tomorrow.

Unfortunately, I have to report something rather unexpected. The device reservation system works a little too well ;-) I started my backup server this morning (or rather, it started itself). All tape drives Bacula itself loads were empty, and the magazines were filled with the correct volumes. Basically, the same situation I have had five out of seven days for about a year now, with many versions of Bacula. Jobs were scheduled as usual and got triggered as usual, too.
After that, not much progress since this morning, 8:20 (almost 12 hours ago):

12-Dec 11:21 goblin-sd: Job DracheStd.2005-12-12_08.20.00 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job Goblin.2005-12-12_08.25.00 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job Ork.2005-12-12_08.25.03 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job ElfSys.2005-12-12_08.25.01 waiting to reserve a device.
12-Dec 11:26 goblin-sd: Job ElfHome.2005-12-12_08.25.02 waiting to reserve a device.
12-Dec 15:21 goblin-sd: Job DracheStd.2005-12-12_08.20.00 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job Goblin.2005-12-12_08.25.00 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job Ork.2005-12-12_08.25.03 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job ElfSys.2005-12-12_08.25.01 waiting to reserve a device.
12-Dec 15:26 goblin-sd: Job ElfHome.2005-12-12_08.25.02 waiting to reserve a device.

These are the latest messages. Or, in other words, since I upgraded to the 1.38.3 beta, nothing has been backed up at all. The current director status is:

#sta dir
goblin-dir Version: 1.38.3 (09 December 2005) i586-pc-linux-gnu suse 8.1
Daemon started 12-Dec-05 07:32, 1 Job run since started.
Scheduled Jobs:
Level         Type    Pri  Scheduled        Name           Volume
===================================================================
Incremental   Backup   10  13-Dec-05 08:20  DracheStd      DLT-IV-0063
Incremental   Backup   10  13-Dec-05 08:20  BeowulfStd     DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  Goblin         DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  ElfSys         DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  ElfHome        DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:25  Ork            *unknown*
Differential  Backup   30  13-Dec-05 08:25  GoblinDB       DLT-IV-0063
Differential  Backup   10  13-Dec-05 08:30  BackupMail     QIC-525-0046
Full          Backup  100  13-Dec-05 08:33  BackupCatalog  DAT-120-0006
              Admin   999  13-Dec-05 09:00  Shutdown

Running Jobs:
JobId  Level    Name                               Status
======================================================================
 4443  Increme  DracheStd.2005-12-12_08.20.00      is waiting on Storage HPDAT
 4445  Increme  Goblin.2005-12-12_08.25.00         is waiting on Storage HPDAT
 4446  Increme  ElfSys.2005-12-12_08.25.01         is waiting on Storage HPDAT
 4447  Increme  ElfHome.2005-12-12_08.25.02        is waiting on Storage HPDAT
 4448  Increme  Ork.2005-12-12_08.25.03            is waiting on Storage HPDAT
 4449  Increme  GoblinDB.2005-12-12_08.25.04       is waiting for higher priority jobs to finish
 4450  Differe  BackupMail.2005-12-12_08.30.00     is waiting on max Client jobs
 4451  Full     BackupCatalog.2005-12-12_08.34.00  is waiting execution
 4452           Shutdown.2005-12-12_09.00.00       is waiting execution

Terminated Jobs:
JobId  Level  Files  Bytes  Status  Finished
[Bacula-users] Re: error with mysql and suse 9.3
hi,

[moderator, please forgive my 2 former emails, wrong sender address, it should be corrected now]

While trying to compile the latest bacula I get the following errors in make for dird, stored and tools (the last 3 steps):

/usr/bin/g++ -O -L../lib -L../cats -L../findlib -o bscan bscan.o block.o device.o dev.o label.o ansi_label.o dvd.o ebcdic.o autochanger.o acquire.o mount.o record.o match_bsr.o parse_bsr.o butil.o read_record.o stored_conf.o spool.o wait.o \
  -lsql -L/usr/lib/mysql -lmysqlclient_r -lz -lfind -lbac -lm -lpthread
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: cannot find -lz
collect2: ld returned 1 exit status
make[1]: *** [bscan] Error 1
make[1]: Leaving directory `/usr/src/bacula-1.38.2/src/stored'

(Well, this one refers to stored, but dird and tools are similar.) I have mysql, mysql-client, mysql-devel and mysql-shared from SuSE. Unfortunately there is no bacula package for this version of SuSE (9.3) (if any of you know of a repository, please tell me ;-) ). I also have ncurses-devel (I had another problem compiling without it, but that is now solved) and readline. So I guess I have all I need, but apparently not. The funny thing is that if I compile with sqlite, then it builds. The problem is: this is going to be a production server, and according to the docs, "we recommend that you install either MySQL or PostgreSQL for production work." Any hints?

--
Groeten,
J.I.Asenjo

--- This SF.net email is sponsored by: Splunk Inc. Do you grep through log files for problems? Stop! Download the new AJAX search engine that makes searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! http://ads.osdn.com/?ad_id=7637alloc_id=16865op=click
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
[Bacula-users] Bacula Autochanger Problems
Hello everybody,

I use Bacula 1.38.2 with an HP SSL 1016 Autoloader.

SD Config:

# This is the definition for a
# HP SSL1016 Ultrium 460 Autoloader
#
Autochanger {
  Name = HP-Library
  Device = HP-SSL1016
  Changer Device = /dev/sg2
  Changer Command = /usr/local/bacula-1.38.3/etc/mtx-changer %c %o %S %a %d
}
Device {
  Name = HP-SSL1016
  Media Type = LTO2
  Archive Device = /dev/nst0
# Changer Command = /usr/local/bacula-1.38.3/etc/mtx-changer %c %o %S %a %d
  AutoChanger = yes
# Autoselect = yes;
  AutomaticMount = yes;   # when device opened, read it
  LabelMedia = yes;
  AlwaysOpen = yes;
}

DIR Config:

JobDefs {
  Name = Week-backup
  Type = Backup
  Level = Full
  Client = nas01-fd
  FileSet = Weekly Full Set
  Schedule = WeeklyCycle
  RunBeforeJob = /usr/local/sbin/tape clean
  RunAfter Job = /usr/local/sbin/tape unload
  Storage = HP-Library
  Messages = Standard
  Pool = WocheWochensicherung
  Priority = 10
}

Then I tried to run a backup. The mtx-changer script works fine, but Bacula doesn't load the tapes. The command "update slots" ends up in an error: No slots in changer to scan. Any hints what went wrong? Many thanks in advance.
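[Editorial note: when "update slots" fails even though mtx-changer seems fine, the changer can be exercised directly with the same arguments Bacula passes. A hedged sketch, reusing the /dev/sg2 and /dev/nst0 paths from the config above; the exact mtx-changer argument order shown (%c %o %S %a %d) follows the posted Changer Command and must print bare slot numbers for Bacula to parse:]

```
# Raw slot inventory straight from mtx, bypassing Bacula entirely:
mtx -f /dev/sg2 status

# The calls Bacula itself makes; "slots" should print just a number,
# "loaded" the slot currently in the drive (0 if empty):
/usr/local/bacula-1.38.3/etc/mtx-changer /dev/sg2 slots 0 /dev/nst0 0
/usr/local/bacula-1.38.3/etc/mtx-changer /dev/sg2 loaded 0 /dev/nst0 0
```

If these commands work from the shell but "update slots" still reports no slots, the problem is usually in the Device resource (as the resolution later in this thread shows) rather than in the script.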
Re: [Bacula-users] Re: error with mysql and suse 9.3
Natxo Asenjo wrote: hi, [moderator, please forgive my 2 former emails, wrong sender address, it should be corrected now] While trying to compile the latest bacula I get the following errors in make for dird, stored and tools (the last 3 steps):

/usr/bin/g++ -O -L../lib -L../cats -L../findlib -o bscan bscan.o block.o device.o dev.o label.o ansi_label.o dvd.o ebcdic.o autochanger.o acquire.o mount.o record.o match_bsr.o parse_bsr.o butil.o read_record.o stored_conf.o spool.o wait.o \
  -lsql -L/usr/lib/mysql -lmysqlclient_r -lz -lfind -lbac -lm -lpthread
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: cannot find -lz
collect2: ld returned 1 exit status
make[1]: *** [bscan] Error 1
make[1]: Leaving directory `/usr/src/bacula-1.38.2/src/stored'

(Well, this one refers to stored, but dird and tools are similar.) I have mysql, mysql-client, mysql-devel and mysql-shared from SuSE. Unfortunately there is no bacula package for this version of SuSE (9.3) (if any of you know of a repository, please tell me ;-) ). I also have ncurses-devel (I had another problem compiling without it, but that is now solved) and readline. So I guess I have all I need. The funny thing is that if I compile with sqlite, then it builds. The problem is: this is going to be a production server, and according to the docs, "we recommend that you install either MySQL or PostgreSQL for production work." Any hints?

It seems to have found the MySQL libraries; it can't find the zlib library. See the error message "cannot find -lz". This is the compression library and should be on your system somewhere. Have you got your ld.so.conf set? You can find out how to set this with "man ldconfig".

Regards
--
David Logan
South Australia
when in trouble, or in doubt run in circles, scream and shout
Re: [Bacula-users] Re: error with mysql and suse 9.3
On Mon, Dec 12, 2005 at 09:48:30PM +0100, Natxo Asenjo wrote: hi, [moderator, please forgive my 2 former emails, wrong sender address, it should be corrected now] While trying to compile the latest bacula I get the following errors in make for dird, stored and tools (the last 3 steps):

/usr/bin/g++ -O -L../lib -L../cats -L../findlib -o bscan bscan.o block.o device.o dev.o label.o ansi_label.o dvd.o ebcdic.o autochanger.o acquire.o mount.o record.o match_bsr.o parse_bsr.o butil.o read_record.o stored_conf.o spool.o wait.o \
  -lsql -L/usr/lib/mysql -lmysqlclient_r -lz -lfind -lbac -lm -lpthread
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: cannot find -lz

Looks like it's asking for the devel version of the zlib package. Install that and try again.

--
Frank Sweetser fs at wpi.edu | For every problem, there is a solution that
WPI Network Engineer         | is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4 E8A3 8E39 29E3 E2E8 8CEC
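[Editorial note: a quick way to confirm this diagnosis before rebuilding is to check for the two pieces that linking "-lz" needs: the zlib header (from the -devel package) and the shared library. A minimal sketch; the header path and ldconfig usage are common Linux defaults, not taken from the original post:]

```shell
# check_zlib: report whether the zlib header and shared library needed
# to link "-lz" are present. On SuSE the missing piece is typically the
# zlib-devel package, which provides /usr/include/zlib.h.
check_zlib() {
  hdr=missing
  lib=missing
  if [ -f /usr/include/zlib.h ]; then hdr=present; fi
  if ldconfig -p 2>/dev/null | grep -q 'libz\.so'; then lib=present; fi
  echo "zlib header: $hdr, shared library: $lib"
}

check_zlib
```

If the header is missing but the library is present, installing zlib-devel (rather than touching ld.so.conf) is the fix the thread converges on.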
Re: [Bacula-users] Re: error with mysql and suse 9.3
Hello,

Natxo Asenjo wrote: hi, [moderator, please forgive my 2 former emails, wrong sender address, it should be corrected now] While trying to compile the latest bacula I get the following errors in make for dird, stored and tools (the last 3 steps):

/usr/bin/g++ -O -L../lib -L../cats -L../findlib -o bscan bscan.o block.o device.o dev.o label.o ansi_label.o dvd.o ebcdic.o autochanger.o acquire.o mount.o record.o match_bsr.o parse_bsr.o butil.o read_record.o stored_conf.o spool.o wait.o \
  -lsql -L/usr/lib/mysql -lmysqlclient_r -lz -lfind -lbac -lm -lpthread
/usr/lib/gcc-lib/i586-suse-linux/3.3.5/../../../../i586-suse-linux/bin/ld: cannot find -lz
collect2: ld returned 1 exit status
make[1]: *** [bscan] Error 1
make[1]: Leaving directory `/usr/src/bacula-1.38.2/src/stored'

(Well, this one refers to stored, but dird and tools are similar.)

Ok, it's missing the library libz.

I have mysql, mysql-client, mysql-devel and mysql-shared from SuSE. Unfortunately there is no bacula package for this version of SuSE (9.3) (if any of you know of a repository, please tell me ;-) ). I also have ncurses-devel (I had another problem compiling without it, but that is now solved) and readline. So I guess I have all I need.

Only libz is missing. From my SuSE 9.2 system:

$ rpm -qa | grep zlib
php4-zlib-4.3.8-8.2
zlib-devel-1.2.1-74.4
zlib-1.2.1-74.4

Try that command and, if it is not installed, use YaST to install zlib-devel. As far as I recall, there was a security fix after the release of 9.3, so after installing, you should use YOU (YaST Online Update) to bring your system to the current version.

The funny thing is that if I compile with sqlite, then it builds. The problem is: this is going to be a production server, and according to the docs, "we recommend that you install either MySQL or PostgreSQL for production work." Any hints?

See above. Not having -devel packages installed is a rather common error...
Arno
--
IT-Service Lehmann    [EMAIL PROTECTED]
Arno Lehmann          http://www.its-lehmann.de
Re: [Bacula-users] A scheduling quirk
On Mon, Dec 12, 2005 at 08:33:17PM +0100, Arno Lehmann wrote: Unfortunately, you did miss something :-P From the manual, section Configuring the Director, subsection Jobs: Max Start Delay = time

Yes, that looks like it should do what I need. I'm pretty sure that I read through that section of the documentation when I began setting up Bacula, but it didn't occur to me that this could be used to eliminate the back-to-back incremental and catalog backups that I've noticed. Thanks once more for the excellent and insightful advice.

--
John Kodis.
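[Editorial note: for readers landing here via search, a minimal sketch of how the directive might appear in a Job resource. The job name, JobDefs reference, and the 4-hour value are illustrative, not taken from the thread:]

```
# Illustrative Job resource: if this job cannot start within 4 hours of
# its scheduled time, cancel it, so a missed window is skipped rather
# than run back to back with the next scheduled cycle.
Job {
  Name = "NightlyIncremental"    # hypothetical job name
  JobDefs = "DefaultJob"         # hypothetical JobDefs
  Max Start Delay = 4h
}
```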
[Bacula-users] Bacula Autochanger Problems
Best regards,
Viktor Dihor

Fernwärme Ulm GmbH
Tel. +49-731-3992-267  Fax +49-731-3992-5-267
E-Mail: [EMAIL PROTECTED]
Magirusstrasse 21, 89077 Ulm

This email is confidential. If you are not the intended recipient, you must not copy, disclose or use its contents. If you have received it in error, please inform us immediately by return email and delete the document.

---BeginMessage---
(identical to the "Bacula Autochanger Problems" message above)
---End Message---
[Bacula-users] Solaris 10 client change root password
Hello all,

I've been trying to install a Bacula client on a Sun Ultra 10 running Solaris 2.8. I use the following to configure:

CFLAGS=-g ./configure \
  --sbindir=/bacula/bin \
  --sysconfdir=/bacula/bin \
  --enable-client-only \
  --enable-smartalloc \
  --with-pid-dir=/bacula/bin/working \
  --with-subsys-dir=/bacula/bin/working \
  --with-working-dir=/bacula/working

And I get the message:

...
checking for cdrecord... cdrecord
checking for pidof... pidof
checking for gawk... no
checking for mawk... no
checking for nawk... nawk
checking for nawk... /usr/bin/nawk
checking build system type... sparc-sun-solaris2.8
checking host system type... sparc-sun-solaris2.8
passwd: Changing password for root
New Password:

It keeps asking me to change the password indefinitely. What's wrong? I'll appreciate any suggestions.

--
Didier Herrera Polo
[Bacula-users] Trouble using SSH tunnel with Bacula
All -

After using Bacula for almost a year, I'm having to make some configuration changes due to network topology and firewalls. The situation is this: the backup server is behind a firewall. The clients I'm trying to back up are on the other side of the firewall (a DMZ of sorts). The backup server can connect to any of the clients it wants to, but the clients cannot connect back behind the firewall, and thus cannot make the connection back to the backup server. So, I thought I'd try SSH tunneling. I modified the example ssh-tunnel script a bit, and here is my command line:

/usr/bin/ssh -fnCN2 -o PreferredAuthentications=publickey \
  -i /usr/local/bacula/ssh/id_dsa -l $USER -R 9101:$LOCAL:9101 \
  -R 9103:$LOCAL:9103 $CLIENT

$USER is replaced with bacula (a valid account), $CLIENT with the client's FQDN, and $LOCAL with the FQDN of the backup server, and all this is run from a Run Before Job directive for the client. Here is the storage resource I use for clients on the other side of the firewall. It is identical to my regular storage resource, but uses localhost as the address so the clients will connect to localhost:9103.

Storage {
  Name = herodotus-sd-ops
  Address = localhost
  SDPort = 9103
  Password = apasswordgoeshere
  Device = AdicFastStor22
  Media Type = DLT8000
  Autochanger = yes
  Maximum Concurrent Jobs = 30
}

When I fire off the job, messages reports that the ssh-tunnel script completed successfully, and I can see the ports listening on the clients. However, things just hang from there. Director status shows the job running; client status shows no job running; storage status shows no job running. Netstat shows no new connections on either side. Messages never gives the "12-Dec 16:42 herodotus-dir: Start Backup JobId , Job=" line. I can even connect to localhost:9103 (on the client), type a few characters, and it will disconnect me, just as it would if I connected directly to the server and did that. BUT!!
When I kill the tunnel, then, and only then, does the Start Backup message appear, but of course it just hangs because the client can't contact the server now that the tunnel is down. I then have to cancel the backup job. When I do, the SD termination status says "Waiting on FD". I'm sure it's something sadly simple, but I have been messing with this for the better part of 4 hours, and I still can't figure out what I am doing wrong. Can anyone offer any tips? When I figure this out, I'll write up a section for the manual so we'll have something more for ssh tunneling than just "Please see the script..." which isn't very helpful to someone unacquainted with the intricacies of ssh tunneling, key generation and usage, etc. Any help would be great! Thanks!

j- k-
--
Joshua Kugler
CDE System Administrator
http://distance.uaf.edu/
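[Editorial note: for readers following along, the moving parts of the setup above can be sketched as a small helper that only assembles the ssh command from the post; the hostnames are placeholders, and this builds the command string rather than opening a real tunnel. The two -R forwards publish the Director port (9101) and Storage daemon port (9103) on the client's loopback, which is what lets the Storage resource say Address = localhost:]

```shell
# Assemble the reverse-tunnel command from the post. user/client/local
# are caller-supplied placeholders (account, client FQDN, backup-server
# FQDN). Each -R makes the named port on the backup server reachable
# via the client's own localhost.
build_tunnel_cmd() {
  user=$1
  client=$2
  local_fqdn=$3
  echo "/usr/bin/ssh -fnCN2 -o PreferredAuthentications=publickey" \
       "-i /usr/local/bacula/ssh/id_dsa -l $user" \
       "-R 9101:$local_fqdn:9101 -R 9103:$local_fqdn:9103 $client"
}

# Example (hypothetical hosts):
build_tunnel_cmd bacula client.example.com backup.example.com
```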
RE: [Bacula-users] Bacula BETA 1.38.3
What happened is that the upgrade to 1.38 overwrote my modified mtx-changer script with the default, so you are correct, that was the problem. Still leaves the strange error message, though.

Thanks, Rob

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald
Sent: Monday, December 12, 2005 2:46 PM
To: bacula-users@lists.sourceforge.net
Cc: Rob
Subject: Re: [Bacula-users] Bacula BETA 1.38.3

On Monday 12 December 2005 20:10, Rob wrote: FYI, I haven't had time to look into it much, but I have been seeing errors with my autochanger since 1.38.1 that I had never seen with 1.36.* before, and they look a lot like these. As Kern said, it's as if something is missing from the log; see:

04-Dec 03:34 bug-sd: End of Volume NJO008D at 80:11492 on device Drive-1 (/dev/nst0). Write of 64512 bytes got -1.
04-Dec 03:35 bug-sd: spider.2005-12-04_03.05.04 Error: Re-read of last block failed. Last block=80530 Current block=14717.
04-Dec 03:35 bug-sd: End of medium on Volume NJO008D Bytes=45,428,287,520 Blocks=704,222 at 04-Dec-2005 03:35.
04-Dec 03:35 bug-sd: 3301 Issuing autochanger loaded drive 0 command.
04-Dec 03:35 bug-sd: 3302 Autochanger loaded drive 0, result is Slot 8.
04-Dec 03:35 bug-sd: 3307 Issuing autochanger unload slot 8, drive 0 command.
04-Dec 03:35 bug-sd: 3995 Bad autochanger unload slot 9, drive 0: ERR=Child exited with code 1.
04-Dec 03:35 bug-sd: Please mount Volume NJO009D on Storage Device Drive-1 (/dev/nst0) for Job spider.2005-12-04_03.05.04

I'm beginning to think that the error message that edits the slot number is just broken.

The error you are seeing is because there is a problem with your mtx-changer script. The error the previous person was seeing was because of a misconfiguration (due to incorrect documentation).
Rob

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald
Sent: Monday, December 12, 2005 9:20 AM
To: bacula-users@lists.sourceforge.net
Cc: Volker Dierks
Subject: Re: [Bacula-users] Bacula BETA 1.38.3

On Monday 12 December 2005 12:52, Volker Dierks wrote: Hello, Volker Dierks wrote: Usually, I'd see if the problem can be reproduced with the existing system setup. If that's possible, I'd first check whether the actual cause might be purely SCSI device related. That's what I'm going to do first. I'll create the second pool again (with the same tapes) and put all nodes into that pool ...

I've done this tonight ... in turn:
- the backup started as planned on drive two with the same tape as Thursday (the tape was already mounted, so no mtx activity took place)
- after some minutes (and 500 MB of data written to that tape) everything hung again, so I restarted everything and disabled that tape
- I mounted the next tape and started the backup again. After 7 GB written to that tape (and 5 successfully backed-up nodes) I went to bed.

Up to here, it looked like the problems were truly caused by the tape. But this morning I got the following mail:

12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: block.c:538 Write error at 12:5438 on device Drive-2 (/dev/nst1). ERR=Input/output error.
12-Dec 03:24 mw-mcs-sd: nfs-1.2005-12-12_02.15.08 Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1553 ioctl MTWEOF error on Drive-2 (/dev/nst1). ERR=No such device or address.
12-Dec 03:24 mw-mcs-sd: End of medium on Volume MW-MCS-1-12 Bytes=7,078,064,979 Blocks=109,722 at 12-Dec-2005 03:24.

Unless you have 7GB tapes, this looks like a hardware problem: bad media, dirty tape drive, bad drive, bad SCSI cables (or improperly installed), bad SCSI card, ... These kinds of problems typically generate a number of kernel (SCSI) messages in the log.

12-Dec 03:24 mw-mcs-sd: 3301 Issuing autochanger loaded drive 1 command.
12-Dec 03:24 mw-mcs-sd: 3302 Autochanger loaded drive 1, result is Slot 12.
12-Dec 04:10 mw-mcs-sd: 3307 Issuing autochanger unload slot 12, drive 1 command.
12-Dec 04:14 mw-mcs-sd: 3995 Bad autochanger unload slot 13, drive 1: ERR=Child died from signal 15: Termination.

This looks like you don't have your autochanger script properly configured, as one user pointed out; setting the sleep longer may help. However, I do not understand why in one message it says unload slot 12, then on the next line it says unload slot 13 ... ERR. There seems to be something missing, as Bacula will normally issue a "loaded drive" or load a drive before unloading it a second time.

12-Dec 04:14 mw-mcs-sd: Please mount Volume MW-MCS-1-13 on Storage Device Drive-2 (/dev/nst1) for Job
Re: [Bacula-users] problems with autopruning and volume reuse
On 2 Dec 2005 at 20:09, Martin Simmons wrote: I suggest you post the output of "llist volumes".

Did all that, no response :-( Anyhow, I still have to manually purge each tape; how do I get around this?

--
Harondel J. Sibble
Sibble Computer Consulting
Creating solutions for the small business and home computer user.
[EMAIL PROTECTED] (use pgp keyid 0x3AD5C11D) http://www.pdscc.com
(604) 739-3709 (voice/fax) (604) 686-2253 (pager)
Re: [Bacula-users] problems with autopruning and volume reuse
On 1 Dec 2005 at 10:37, Kern Sibbald wrote: Yes, a shorter Volume retention period will force pruning of job and file data, but as the manual and a recent email point out, after changing the .conf volume retention period, you must update the catalog data for volumes that already exist. A key point mentioned above, which can be a source of frustration, is that Bacula will only recycle purged Volumes if there is no other appendable Volume available; otherwise, it will always write to an appendable Volume before recycling, even if there are Volumes marked as Purged. This preserves your data as long as possible. So, if you wish to force Bacula to use a purged Volume, you must first ensure that no other Volume in the Pool is marked Append. If necessary, you can manually set a volume to Full. The reason for this is that Bacula wants to preserve the data on your old tapes (even though purged from the catalog) as long as absolutely possible before overwriting it.

How do I work around this, other than manually marking all the other tapes as Full? Since we have 9 or 10 tapes in rotation and a regular backup does not fill a tape completely, I'd have to manually mark the tapes each week; this does not lead to an automated backup system... I want something hands-off, which does not require administrator attention unless something is broken (other than changing the tapes, that is).

--
Harondel J. Sibble
Sibble Computer Consulting
Creating solutions for the small business and home computer user.
[EMAIL PROTECTED] (use pgp keyid 0x3AD5C11D) http://www.pdscc.com
(604) 739-3709 (voice/fax) (604) 686-2253 (pager)
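[Editorial note: one way to script the "mark the other appendable Volumes" step Kern describes, so it runs unattended before the weekly cycle, is to feed the console an update command. A hedged sketch only: the volume and pool names are placeholders, and the exact keyword syntax of the 1.38-era console may differ from what is shown:]

```
# Hypothetical bconsole input, e.g. piped in from cron before the run:
# take the still-appendable volume out of contention so Bacula falls
# back to recycling a purged one. Names are placeholders.
update volume=Weekly-03 volstatus=Used pool=Default
quit
```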
Re: [Bacula-users] Bacula Autochanger Problems
Hi, again.

Hello everybody, I use Bacula 1.38.2 with an HP SSL 1016 Autoloader. SD Config:

# This is the definition for a
# HP SSL1016 Ultrium 460 Autoloader
#
Autochanger {
  Name = HP-Library
  Device = HP-SSL1016
  Changer Device = /dev/sg2
  Changer Command = /usr/local/bacula-1.38.3/etc/mtx-changer %c %o %S %a %d
}
Device {
  Name = HP-SSL1016
  Drive Index = 0     <-- this is the missing line; with Drive Index = 0 it now works
  Media Type = LTO2
  Archive Device = /dev/nst0
# Changer Command = /usr/local/bacula-1.38.3/etc/mtx-changer %c %o %S %a %d
  AutoChanger = yes
# Autoselect = yes;
  AutomaticMount = yes;   # when device opened, read it
  LabelMedia = yes;
  AlwaysOpen = yes;
}

The only thing actually missing was "Drive Index = 0" in the Device section.

DIR Config:

JobDefs {
  Name = Week-backup
  Type = Backup
  Level = Full
  Client = nas01-fd
  FileSet = Weekly Full Set
  Schedule = WeeklyCycle
  RunBeforeJob = /usr/local/sbin/tape clean
  RunAfter Job = /usr/local/sbin/tape unload
  Storage = HP-Library
  Messages = Standard
  Pool = WocheWochensicherung
  Priority = 10
}

Then I tried to run a backup. The mtx-changer script works fine, but Bacula doesn't load the tapes. The command "update slots" ends up in an error: No slots in changer to scan.

Might be solved by Drive Index... If not, what does "status sd" on the console tell you? You might post the whole command output.

Arno

Arno, that was the solution. Many thanks for your great work.

--
IT-Service Lehmann    al at its-lehmann.de
Arno Lehmann          http://www.its-lehmann.de

Best regards,
Viktor Dihor