[Bacula-users] Got EOF on tape after reboot
Hello,

I have installed Bacula 2.4.3 from source. Now I am testing the tape unit with mt and btape. The first time I run "btape -c /usr/local/bacula/bin/bacula-sd.conf /dev/nst0", the test completes properly. Then I unload the tape with "mt -f /dev/nst0 rewoff", reboot the server, and load the same tape again. At that point I get the following error when I run the btape test command:

# btape -c /usr/local/bacula/bin/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: "/dev/nst0" for writing.
btape: btape.c:372 open device "lto4drive" (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:831 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to "lto4drive" (/dev/nst0)
btape: btape.c:847 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to "lto4drive" (/dev/nst0)
btape: btape.c:856 Rewind OK.
1000 blocks re-read correctly.
Got EOF on tape.
06-Nov 15:30 btape JobId 0: Error: block.c:999 Read error on fd=3 at file:blk 1:0 on device "lto4drive" (/dev/nst0). ERR=Input/output error.
Got EOF on tape.
Got EOF on tape.

When I load a new tape, the test runs properly, until I reboot the server. I tried the following without success:

mt -f /dev/nst0 defblksize 0

I am running Bacula 2.4.3 on CentOS 5.2.
Hardware: SunFire X2100 server, HP StorageWorks 1760 tape unit.

This is my bacula-sd.conf:

#
# Default Bacula Storage Daemon Configuration file
#
# For Bacula release 2.4.3 (10 October 2008) -- redhat
#
# You may need to change the name of your tape drive
# on the "Archive Device" directive in the Device
# resource. If you change the Name and/or the
# "Media Type" in the Device resource, please ensure
# that dird.conf has corresponding changes.
#
Storage {                             # definition of myself
  Name = itshas-sv11-sd
  SDPort = 9103                       # Director's port
  WorkingDirectory = "/usr/local/bacula/"
  Pid Directory = "/usr/local/bacula/bin/working"
  Maximum Concurrent Jobs = 20
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = itshas-sv11-dir
  Password = "xxx"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the storage daemon
#
Director {
  Name = itshas-sv11-mon
  Password = "xxx"
  Monitor = yes
}

#
# Devices supported by this Storage daemon
# To connect, the Director's bacula-dir.conf must have the
# same Name and MediaType.
#

#
# A Linux or Solaris tape drive
#
Device {
  Name = lto4drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = no;
  RemovableMedia = yes;
  RandomAccess = no;
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = itshas-sv11-dir = all
}

There is no logging in the messages file.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
[Bacula-users] Re: Got EOF on tape after reboot
Hi Bruno,

Thanks for the reply. I use new tapes. It is a very strange problem: only the first test fails. If I run the test a second time it completes successfully, but after a reboot I have the same problem again. At the moment I have Bacula running and I can do a backup and a restore, so I am ignoring the strange btape behaviour.

From: "Bruno Friedmann" <[EMAIL PROTECTED]>
Cc: bacula-users@lists.sourceforge.net
Sent: Sunday 9 November 2008 13:47:31 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Bacula-users] Got EOF on tape after reboot

Hi Carlo

I don't remember if it is necessary, but I have always assumed that btape needs newly blank media. So before a run I always issue:

mt -f /dev/nst0 weof

Carlo Maesen wrote:
> Hello,
>
> I have installed bacula 2.4.3 from source.
> Now I am testing the tape-unit with mt and btape.
> The first time I do "btape -c /usr/local/bacula/bin/bacula-sd.conf
> /dev/nst0", the test runs properly.
> Then I unload the tape "mt -f /dev/nst0 rewoff". I reboot the server. I load
> the same tape. At that moment I have the following error when I run the btape
> test command:
> # btape -c /usr/local/bacula/bin/bacula-sd.conf /dev/nst0
> Tape block granularity is 1024 bytes.
> btape: butil.c:285 Using device: "/dev/nst0" for writing.
> btape: btape.c:372 open device "lto4drive" (/dev/nst0): OK
> *test
>
> === Write, rewind, and re-read test ===
>
> I'm going to write 1000 records and an EOF
> then write 1000 records and an EOF, then rewind,
> and re-read the data to verify that it is correct.
>
> This is an *essential* feature ...
>
> btape: btape.c:831 Wrote 1000 blocks of 64412 bytes.
> btape: btape.c:505 Wrote 1 EOF to "lto4drive" (/dev/nst0)
> btape: btape.c:847 Wrote 1000 blocks of 64412 bytes.
> btape: btape.c:505 Wrote 1 EOF to "lto4drive" (/dev/nst0)
> btape: btape.c:856 Rewind OK.
> 1000 blocks re-read correctly.
> Got EOF on tape.
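Bruno's advice above boils down to handing btape freshly blanked media. A sketch of that preparation step (my own wording of his one-liner, assuming the drive is /dev/nst0 as in the original post; writing a filemark at the beginning of tape logically erases it):

```sh
# Rewind to the beginning of the tape.
mt -f /dev/nst0 rewind
# Write a filemark at BOT; this logically blanks the media for btape.
mt -f /dev/nst0 weof
# Rewind again so btape starts at a known position.
mt -f /dev/nst0 rewind
```

After this, re-running the btape "test" command should start from a clean tape state.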
[Bacula-users] question about schedules and retentions
I did read the Bacula manual, but I still have some questions about schedules. I created the following schedule:

Schedule {
  Name = aca-cycle
  Run = Level=Incremental Pool=aca mon-thu at 22:00
  Run = Level=Full Pool=aca 1st-4th sat at 22:00
}

I back up one client according to this schedule, but each run should also have a different file and job retention (incremental = 4 weeks, full = 1 year). Do I have to create two different clients and jobs, one for the incremental backup and one for the full? I ask because the file and job retention are defined in the Client resource.
[Bacula-users] Re: question about schedules and retentions
- Original message -
From: "Arno Lehmann" <[EMAIL PROTECTED]>
To: "bacula-users"
Sent: Tuesday 11 November 2008 20:45:45 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Bacula-users] question about schedules and retentions

Hi,

11.11.2008 18:18, Carlo Maesen wrote:
> I did read the bacula manual but, I have some questions about schedules.
> I creat the following schedule:
> Schedule {
>   Name = aca-cycle
>   Run = Level=Incremental Pool=aca mon-thu at 22:00
>   Run = Level=Full Pool=aca 1st-4th sat at 22:00
> }
>
> I backup one client according this schedule, but each different run has also
> a different file and job retention. (Incr = 4 weeks, Full = 1 year)
> Do I have to create 2 different clients and jobs, one for the incemental
> backup and one for the full ?
> Because the file and job retenion is defined in the client-directive.

If you actually need the job-specific retention times you are in trouble... An incremental can only be based on the latest full backup for the same job, and a job is defined by the unique combination of client and fileset.

The better approach is to use distinct pools for full, differential, and incremental backups, where each pool has its own retention settings. When a job is purged from a pool volume, the accompanying file and job data is also removed. Typically you will keep the full backups longest, so in essence the client's job and file retentions apply to the full backups only, provided they are longer than the volume retention times of the pools holding the partial backups. This, typically, is exactly what is needed: complete control when restoring from recent backups, and less control but also less database use for the long-term storage.

Arno

Hi Arno,

After I create the 3 pools (with different retentions), I only have to create 1 client with a file/job retention of 1 year. When the volume retention of the incremental pool expires (4 weeks), the corresponding files/jobs will be pruned from the catalog.
Because the shortest retention takes precedence. Is this correct?
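Arno's pool-per-level approach could look roughly like this (an illustrative fragment only; the pool names and values are hypothetical, chosen to match the 4-week/1-year example in the thread):

```conf
Pool {
  Name = aca-incr            # hypothetical: holds the incrementals
  Pool Type = Backup
  Volume Retention = 4 weeks
  AutoPrune = yes
  Recycle = yes
}

Pool {
  Name = aca-full            # hypothetical: holds the full backups
  Pool Type = Backup
  Volume Retention = 1 year
  AutoPrune = yes
  Recycle = yes
}

# A single Client whose file/job retention covers the longest pool:
Client {
  Name = example-fd          # hypothetical
  Address = example-host
  Catalog = MyCatalog
  Password = "xxx"
  File Retention = 1 year
  Job Retention = 1 year
  AutoPrune = yes
}
```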
[Bacula-users] retention time confusing
I am still confused about retention times. I have defined a pool (aca) with a volume retention of 6 days. A client has a retention (job and file) of 365 days. I have the following schedule:

# The ACA-SOLARIS schedule
Schedule {
  Name = aca-solaris-cycle
  Run = Level=Differential Pool=aca mon-thu at 22:30
  Run = Level=Full Pool=aca-week 1st-4th fri at 22:30
  Run = Level=Full Pool=aca-month 5th fri at 22:30
}

Will the files/jobs of the differential backup be deleted from the database after 6 days?

Here are the pools:

Pool {
  Name = aca
  Pool Type = Backup
  Volume Retention = 6 days
  Recycle = yes
  AutoPrune = yes
}

# Weekly ACA Pool
Pool {
  Name = aca-week
  Pool Type = Backup
  Volume Retention = 4 weeks
  Recycle = yes
  AutoPrune = yes
}

# Monthly ACA Pool
Pool {
  Name = aca-month
  Pool Type = Backup
  Volume Retention = 1 years
  Recycle = yes
  AutoPrune = yes
}

And here is a client sample:

# ITSHAS-SV04
Client {
  Name = itshas-sv04-fd
  Address = itshas-sv04
  FDPort = 9102
  Catalog = MyCatalog
  Password = xxx
  File Retention = 365 days
  Job Retention = 365 days
  AutoPrune = yes
}
[Bacula-users] Re: retention time confusing
- Original message -
From: "Dan Langille" <[EMAIL PROTECTED]>
To: "Carlo Maesen" <[EMAIL PROTECTED]>
Cc: "bacula-users"
Sent: Sunday 23 November 2008 22:14:30 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Bacula-users] retention time confusing

On Nov 23, 2008, at 3:55 PM, Carlo Maesen wrote:
> I am still confused about retentiontimes.
> I have defined a pool (aca) with a volume-retention of 6days. A
> client has a retention (job and file) of 365 days. I have the
> following schedule:
>
> # The ACA-SOLARIS-schedule
> Schedule {
>   Name = aca-solaris-cycle
>   Run = Level=Differential Pool=aca mon-thu at 22:30
>   Run = Level=Full Pool=aca-week 1st-4th fri at 22:30
>   Run = Level=Full Pool=aca-month 5th fri at 22:30
> }
>
> Will the files/jobs of the differential backup been deleted from the
> database after 6 days ?

Usually when someone asks a question like that, they have seen conflicting information, or have tried it and failed to get what they expect. Can you elaborate on that now? I think if you read the documentation you may see that one setting overrides another. Sorry, I don't have time to check just now.

Sorry that I did not mention the reason for posting this issue. When I do "list jobs" I still see older jobs. I have also looked at the "Automatic Volume Recycling" chapter and the "Automatic Pruning and Recycling Example", but the example doesn't show the client config (file/job retention).

I do not understand the concept of automatic pruning. As I understand it, files and jobs are deleted only when a volume is recycled, and a volume will be recycled only when there is no appendable volume and the volume is expired. And that's the reason I see older jobs: the corresponding volume is still appendable. Is this correct?
By the way, I am using Bacula 2.4.3 on CentOS 5.2.

--
Dan Langille
http://langille.org/
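For completeness: pruning can also be triggered by hand from bconsole instead of waiting for AutoPrune at the end of a job, which makes it easier to watch the effect on "list jobs". A sketch (the client name is taken from the example above; the volume name is hypothetical):

```
*prune files client=itshas-sv04-fd
*prune jobs client=itshas-sv04-fd
*prune volume=aca-0001
```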
[Bacula-users] problem with webacula
I have installed Bacula 2.4.3 on CentOS 5.2 and it's working fine. Now I am trying to configure webacula. I have read the INSTALL file, but I can't open some of the pages, for example the director status page. I get the following error in my browser:

ERROR: There was a problem executing bconsole.

When I open the desktop page I get the following error in my browser:

ERROR Command: no output. Check access to /usr/bin/sudo /usr/local/bacula/bin/bconsole -n -c /usr/local/bacula/bin/bconsole.conf

As the apache user I can run:

/usr/bin/sudo /usr/local/bacula/bin/bconsole -n -c /usr/local/bacula/bin/bconsole.conf

I also see an error in Apache's error_log:

Cannot open audit interface - aborting.

Other pages are OK (last jobs, Pool/Volume, ...).
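In case it helps others: webacula calls bconsole through sudo, so the web server user normally needs a passwordless, TTY-less sudo rule. A sketch (assuming the web server runs as user "apache"; edit with visudo). The "Cannot open audit interface" line in Apache's error_log often points at sudo being blocked in the web server's context (for example by SELinux) rather than at bconsole itself, though that is an assumption worth verifying against the audit logs.

```conf
# /etc/sudoers fragment (edit with visudo); names are illustrative.
Defaults:apache !requiretty
apache ALL = NOPASSWD: /usr/local/bacula/bin/bconsole
```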
[Bacula-users] listing open files
I would like to have a listing of all open files in my backup report or in the log file, so I can put these in an exclude list. Is this possible?

I use for each job: Messages = Standard

bacula-dir.conf:

Messages {
  Name = Standard
  mailcommand = "/usr/local/bacula/bin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/local/bacula/bin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = [EMAIL PROTECTED] = all, !skipped
  operator = [EMAIL PROTECTED] = mount
  console = all, !skipped, !saved
  append = "/usr/local/bacula/log" = all, !skipped
}

Additional info: Bacula 2.4.3 on CentOS 5.2
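Bacula has no built-in directive for this, but one workaround is to capture the list on the client just before the job starts, using a ClientRunBeforeJob script. A sketch (the script path is hypothetical, and it assumes lsof is installed on the client):

```conf
# In the Job resource (bacula-dir.conf):
Job {
  Name = "BCK-example"       # hypothetical
  # ... the usual Job directives ...
  ClientRunBeforeJob = "/usr/local/bacula/scripts/log-open-files.sh"
}
```

where log-open-files.sh might run something like `lsof +D /home > /var/log/open-files-at-backup.log 2>/dev/null` (lsof's +D walks a directory tree and can be slow on large filesystems); the resulting log is then reviewed when building the exclude list.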
[Bacula-users] status director and scheduled jobs
I have several tapes in several pools. I use "Volume Use Duration = 23 hours" for each pool. Sometimes when I run the "stat dir" command within bconsole, I see *unknown* for a scheduled job's volume. That's because all volumes in the pool have status "used". The moment the backup starts, Bacula changes the status of the expired volumes to "purged". Is there a way I can display the correct volume for scheduled jobs without manually pruning the expired volumes? It would be handy because I have no library.

Bacula 2.4.3, CentOS 5.2. Thanks in advance.
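One workaround (my own suggestion, not a built-in feature) is to let cron prune the volumes shortly before the schedule fires, so "stat dir" already sees them as purged. A sketch with hypothetical volume names; the exact confirmation prompts of the prune command can differ between versions:

```sh
#!/bin/sh
# Hypothetical cron script: ask the Director to prune each volume so
# expired ones show as Purged before the nightly jobs are scheduled.
for vol in Week1 Week2 Week3 Week4; do
  printf 'prune volume=%s\nyes\n' "$vol" |
    /usr/local/bacula/bin/bconsole -c /usr/local/bacula/bin/bconsole.conf
done
```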
Re: [Bacula-users] Compiling on Solaris 10 - storage daemon build dying
I also had issues compiling Bacula on Solaris 10. Check if you have the following packages:

* SUNWgcmn
* SUNWbinutils
* SUNWbinutilsS
* SUNWarc
* SUNWhea
* SUNWGnutls
* SUNWGnutls-devel
* SUNWlibgcrypt
* SUNWzlib
* SUNWzlibS
* SUNWGmakeS
* SUNWlibm
* SMCliconv
* SMCgcc
* SMClintl
* SMCpcre
* SMCgrep

You can download the SMC packages from www.sunfreeware.com. You need GNU grep!

Use the following settings before compiling:

LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin:/usr/local/bin:/usr/local/sbin

I hope this will help.

- Original message -
From: "Jeff MacDonald"
To: bacula-users@lists.sourceforge.net
Sent: Friday 2 January 2009 19:24:36 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: [Bacula-users] Compiling on Solaris 10 - storage daemon build dying

Hi,

I'm compiling on Solaris 10; consistently the build fails on the storage daemon. Here is my configure line:

#!/bin/sh
./configure \
  --prefix=/opt/bacula \
  --sbindir=/opt/bacula/bin \
  --sysconfdir=/opt/bacula/etc \
  --with-pid-dir=/opt/bacula/bin/working \
  --with-subsys-dir=/opt/bacula/bin/working \
  --enable-smartalloc \
  --enable-batch-insert \
  --enable-static-tools \
  --without-mysql \
  --without-sqlite \
  --with-openssl \
  --with-readline \
  --with-postgresql=/usr/postgres/8.3 \
  --with-working-dir=/opt/bacula/bin/working \
  --with-dump-email=bign...@gmail.com \
  --with-job-email=bign...@gmail.com \
  --with-smtp-host=localhost
exit 0

And here are the errors I get from my "make":

= snip ===
/usr/sfw/bin/g++ -L../lib -o bacula-sd stored.o ansi_label.o autochanger.o acquire.o append.o askdir.o authenticate.o block.o butil.o dev.o device.o dircmd.o dvd.o ebcdic.o fd_cmds.o job.o label.o lock.o mac.o match_bsr.o mount.o parse_bsr.o pythonsd.o read.o read_record.o record.o reserve.o scan.o spool.o status.o stored_conf.o wait.o -lsec -lz \
   -lbac -lm -lpthread -lresolv -lnsl -lsocket -lxnet -lintl -lresolv \
   -lssl -lcrypto
Compiling bls.c
Compiling ld:
fatal: library -lpthread: not found
ld: fatal: library -lresolv: not found
ld: fatal: library -lnsl: not found
ld: fatal: library -lsocket: not found
ld: fatal: library -lxnet: not found
ld: fatal: library -lintl: not found
ld: fatal: library -lresolv: not found
ld: fatal: library -lssl: not found
ld: fatal: library -lcrypto: not found
ld: fatal: library -lm: not found
ld: fatal: library -lc: not found
ld: fatal: File processing errors. No output written to bls
collect2: ld returned 1 exit status
*** Error code 1

The following command caused the error:
/usr/sfw/bin/g++ -static -L../lib -L../findlib -o bls bls.o block.o butil.o device.o dev.o label.o match_bsr.o ansi_label.o dvd.o ebcdic.o lock.o autochanger.o acquire.o mount.o parse_bsr.o record.o read_record.o reserve.o scan.o stored_conf.o spool.o wait.o -lfind \
   -lbac -lm -lpthread -lresolv -lnsl -lsocket -lxnet -lintl -lresolv -lssl -lcrypto
make: Fatal error: Command failed for target `bls'
Current working directory /home/bacula/bacula-2.4.3/src/stored
=== snip

The thing is, all of these libraries exist in /lib and /usr/lib, and the headers are in /usr/include. crle says /lib and /usr/lib are valid directories.

Can someone give me some direction please? I'm kinda lost with this stuff.

Thanks,
Jeff
[Bacula-users] Max Wait Time
Can someone explain why the job is canceled? I use Max Wait Time = 10 minutes.

Output of the log file:

03-Jan 23:27 itshas-sv11-dir JobId 1317: Start Backup JobId 1317, Job=BCK-itshas-sv041.2009-01-03_23.23.09
03-Jan 23:27 itshas-sv11-dir JobId 1317: Using Device "lto4drive"
03-Jan 23:27 itshas-sv11-sd JobId 1317: Volume "Week4" previously written, moving to end of data.
03-Jan 23:27 itshas-sv11-sd JobId 1317: Ready to append to end of Volume "Week4" at file=6.
itshas-sv041-fd JobId 1317: /dev is a different filesystem. Will not descend from / into /dev
itshas-sv041-fd JobId 1317: /home is a different filesystem. Will not descend from / into /home
03-Jan 23:37 itshas-sv11-dir JobId 1317: Fatal error: Max wait time exceeded. Job canceled.
...
Scheduled time: 03-Jan-2009 23:23:00
Start time:     03-Jan-2009 23:27:01
End time:       03-Jan-2009 23:38:10
...

"Ready to append to end of Volume Week4 ..." means the write action is ready to start. At that moment no resource is blocked, so why is the job canceled?

Additional info: I also use Maximum Concurrent Jobs = 1. I have scheduled 4 jobs at 23:23. The first one is OK; all the others are canceled.

Running Bacula 2.4.3 on CentOS 5.2.
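For context, the Job resource has several related time limits, and their exact semantics were a recurring source of confusion in the 2.4 era (as here, where "Max wait time exceeded" fires on a job that is already writing). A hedged sketch of the alternatives (the job name and values are illustrative, not a fix confirmed by this thread):

```conf
Job {
  Name = "BCK-example"            # hypothetical
  # ... the usual Job directives ...
  Max Start Delay = 10 minutes    # cancel if not started 10 min after being scheduled
  Max Run Time = 4 hours          # cap on total running time
  # Max Wait Time = 10 minutes    # time blocked waiting on resources; semantics vary by version
}
```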
Re: [Bacula-users] Compiling on Solaris 10 - storage daemon build dying
export LIBRARY_PATH=/usr/postgres/8.3/lib:$LD_LIBRARY_PATH

will do the trick. You can always check with ldd whether the necessary libs are found:

ldd /opt/bacula/bin/bacula-dir
[Bacula-users] schedule issue
Can someone explain the following? (Bacula 2.4.3 on CentOS 5.2)

Schedule {
  Name = aca-windows-cycle
  Run = Level=Differential Pool=aca tue-fri at 00:01
  Run = Level=Full Pool=aca-week 1st-4th sat at 00:01
  Run = Level=Full Pool=aca-month 5th sat at 00:01
}

+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      14 | Week1      | Used      |       1 | 418,084,273,152 |      439 |    2,419,200 |       1 |    0 |         0 | LTO-4     | 2008-12-20 09:37:28 |
|      15 | Week2      | Recycle   |       1 |               1 |        0 |    2,419,200 |       1 |    0 |         0 | LTO-4     | 2008-12-13 08:18:41 |
|      16 | Week3      | Used      |       1 | 432,468,513,792 |      453 |    2,419,200 |       1 |    0 |         0 | LTO-4     | 2008-12-27 10:12:46 |
|      17 | Week4      | Used      |       1 |  34,931,764,224 |       39 |    2,419,200 |       1 |    0 |         0 | LTO-4     | 2009-01-04 07:04:32 |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

So you can see I have run the full backup four times (pool=aca-week). I expected the next backup to need a volume from aca-month (5th sat), but that's not what happened. I had the following message:

10-Jan 00:01 itshas-sv11-sd JobId 1503: Job BCK-itshas-sv01.2009-01-10_00.01.03 waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
    Storage:    "lto4drive" (/dev/st0)
    Pool:       aca-week
    Media type: LTO-4

I read the manual about schedules, but I don't get it.
The output of "show schedules":

Schedule: name=aca-windows-cycle
--> Run Level=Full hour=0 mday=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 month=0 1 2 3 4 5 6 7 8 9 10 11 wday=6 wom=4 woy=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 mins=1
--> Pool: name=aca-month PoolType=Backup use_cat=1 use_once=0 cat_files=1 max_vols=0 auto_prune=1 VolRetention=1 year VolUse=13 hours recycle=1 LabelFormat=*None* CleaningPrefix=*None* LabelType=0 RecyleOldest=0 PurgeOldest=0 MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0 MigTime=0 secs MigHiBytes=0 MigLoBytes=0
--> Run Level=Full hour=23 mday=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 month=0 1 2 3 4 5 6 7 8 9 10 11 wday=0 wom=0 1 2 3 woy=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 mins=30
--> Pool: name=aca-week PoolType=Backup use_cat=1 use_once=0 cat_files=1 max_vols=0 auto_prune=1 VolRetention=28 days VolUse=13 hours recycle=1 LabelFormat=*None* CleaningPrefix=*None* LabelType=0 RecyleOldest=0 PurgeOldest=0 MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0 MigTime=0 secs MigHiBytes=0 MigLoBytes=0
--> Run Level=Differential hour=0 mday=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 month=0 1 2 3 4 5 6 7 8 9 10 11 wday=2 3 4 5 wom=0 1 2 3 4 woy=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 mins=1
--> Pool: name=aca PoolType=Backup use_cat=1 use_once=0 cat_files=1 max_vols=0 auto_prune=1 VolRetention=5 days VolUse=13 hours recycle=1 LabelFormat=*None* CleaningPrefix=*None* LabelType=0 RecyleOldest=0 PurgeOldest=0 MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0 MigTime=0 secs MigHiBytes=0 MigLoBytes=0
Re: [Bacula-users] schedule issue
OK, now I understand. I thought the schedule was running 1 2 3 4 5 1 2 3 4 5 ... Thanks for the response.

- Original message -
From: "Graham Keeling"
To: bacula-users@lists.sourceforge.net
Sent: Monday 12 January 2009 11:19:18 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Bacula-users] schedule issue

On Mon, Jan 12, 2009 at 11:04:49AM +0100, Carlo Maesen wrote:
> Can someone explain the following:
> (bacula 2.4.3 on Centos 5.2)
>
> Schedule {
>   Name = aca-windows-cycle
>   Run = Level=Differential Pool=aca tue-fri at 00:01
>   Run = Level=Full Pool=aca-week 1st-4th sat at 00:01
>   Run = Level=Full Pool=aca-month 5th sat at 00:01
> }

I don't really understand the problem, but one thing I noticed is that there are not always five Saturdays in a month. For example, the end of 2008 looked like this:

      October               November              December
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
          1  2  3  4                     1      1  2  3  4  5  6
 5  6  7  8  9 10 11   2  3  4  5  6  7  8   7  8  9 10 11 12 13
12 13 14 15 16 17 18   9 10 11 12 13 14 15  14 15 16 17 18 19 20
19 20 21 22 23 24 25  16 17 18 19 20 21 22  21 22 23 24 25 26 27
26 27 28 29 30 31     23 24 25 26 27 28 29  28 29 30 31

So, October and December had four Saturdays each, and November had five. Perhaps your Schedule should be something like this?

Schedule {
  Name = aca-windows-cycle
  Run = Level=Differential Pool=aca tue-fri at 00:01
  Run = Level=Full Pool=aca-week 2nd-5th sat at 00:01
  Run = Level=Full Pool=aca-month 1st sat at 00:01
}
[Bacula-users] Network error with FD during Backup
I have a problem with one client, a Solaris zone. I don't have issues with the other zones on the same server, so I can't believe it's a hardware error or a firewall problem. (The client is running in a DMZ.) I have also set "Heartbeat Interval = 60 seconds" for the dir, sd and fd, but the problem remains. I am using Bacula 2.4.3 on CentOS 5.2.

07-May 17:09 itshas-sv11-dir JobId 4846: Fatal error: Network error with FD during Backup: ERR=Connection reset by peer
07-May 17:09 itshas-sv11-dir JobId 4846: Fatal error: No Job status returned from FD.
07-May 17:09 itshas-sv11-dir JobId 4846: Error: Bacula itshas-sv11-dir 2.4.3 (10Oct08): 07-May-2009 17:09:45
  Build OS:               x86_64-unknown-linux-gnu redhat
  JobId:                  4846
  Job:                    BCK-smtpserver.2009-05-07_16.51.03
  Backup Level:           Full
  Client:                 "smtpserver-fd" 2.4.3 (10Oct08) i386-pc-solaris2.10,solaris,5.10
  FileSet:                "smtpserver-fs" 2008-11-20 22:30:00
  Pool:                   "aca-manual" (From Job resource)
  Storage:                "lto4drive" (From Job resource)
  Scheduled time:         07-May-2009 16:51:26
  Start time:             07-May-2009 16:51:31
  End time:               07-May-2009 17:09:45
  Elapsed time:           18 mins 14 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  VSS:                    no
  Storage Encryption:     no
  Volume name(s):         Test
  Volume Session Id:      1
  Volume Session Time:    1241707867
  Last Volume Bytes:      15,121,354,752 (15.12 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Error
  Termination:            *** Backup Error ***
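For reference, the Heartbeat Interval mentioned above has to be set in each of the three daemons; a minimal sketch (resource names taken from earlier posts in this archive, values illustrative). If a DMZ firewall is dropping idle connections, the heartbeat needs to be shorter than the firewall's idle timeout, and kernel-level TCP keepalive settings on both ends are another knob worth checking.

```conf
# bacula-dir.conf (Director resource)
Director {
  Name = itshas-sv11-dir
  # ...
  Heartbeat Interval = 60 seconds
}

# bacula-sd.conf (Storage resource)
Storage {
  Name = itshas-sv11-sd
  # ...
  Heartbeat Interval = 60 seconds
}

# bacula-fd.conf on the client (FileDaemon resource)
FileDaemon {
  Name = smtpserver-fd
  # ...
  Heartbeat Interval = 60 seconds
}
```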