Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
I have received a response back from Adaptec about block size and tape drives. All of their controllers have a hard limit of 256KB. It appears this limit is undocumented...

On 12/23/2016 05:05 PM, Alan Brown wrote:
> On 22/12/16 17:49, Drew Von Spreecken wrote:
>> Thank you Alan for the detailed response.
>>
>> I was unaware of the GPL violations; this is an issue for me and it will be removed promptly.
>>
>> The block size I am aiming for was 512KB-1MB, and I have done extensive research on tuning. This should of course also be within the specification of bacula-sd.
>
> 500KB to 1MB seems to be about the point where the gain ends.
>
>> Because I am currently writing at a 64KB block size, I get around 70MB/s of throughput on a mix of compressible and non-compressible data. Even changing it to a working 256KB improves my write rate by about 15MB/s. Changing to any other value causes an immediate "Device or resource busy" error.
>
> What parameter are you using, and where in the config file are you using it?
>
>> The setup has worked well for years, until my storage array doubled in size. It is now taking over a week to do a backup.
>>
>> This is direct-attached storage, so I don't believe the network buffer should have an effect, right?
>
> Direct attached to what? Do you mean your disks, tape drives and bacula are all running on the same system? If so, then you're probably right.
>
>> I'm not sure I completely understand this statement:
>>
>> "You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok)"
>>
>> My plan was to overwrite all tapes that may have a smaller block size. It doesn't matter at this time if I temporarily don't have a tape backup, and I have no old archives I am concerned with.
>
> You don't need to do that.
> An older tape with a smaller block size will read just fine, but don't mix block sizes on any one tape.
>
>> Rewinding the tape, writing an EOF marker, and rewinding again should solve this, correct?
>
> Probably.
>
>> To me it seems like the SAS controller I am connected to, but it just seems so unlikely that it isn't able to pass blocks greater than 256KB, and I have no way of verifying this.
>
> What's your SAS controller chipset? Some of the older ones are downright awful.
>
>> The more I write this, the more I am convincing myself it probably has nothing to do with Bacula (rip-off version) or the configuration. I was just hoping someone has run into something similar before.
>>
>> Regards,
>>
>> drewv
>>
>> On 12/22/2016 11:15 AM, Alan Brown wrote:
>>> On 22/12/16 14:31, Drew Von Spreecken wrote:
>>> Greetings, I have run into an issue and am looking for input. I have a SAS tape autoloader with an IBM HH-LTO6 drive running the newest firmware. It is currently connected to an Adaptec 78165 HBA/RAID controller via SAS.
>>>
>>> The maximum block size for IBM LTO6 drives (HH or full height) is 8MB (it's usually 16MB for HP drives).
>>>
>>> The issue I am having is that when I attempt to modify the block size to anything over 256KB that BareOS writes to tape, it fails. To simplify troubleshooting I have opted to use a combination of btape and dd to test block-size adjustments.
>>>
>>> Bareos is not Bacula.
>>>
>>> (Well, it _IS_ Bacula, which someone has tried to repackage and claim credit for. It is not supported by the Bacula community - and given the legal antics of the Bareos maintainer (including egregious GPL breaches) I would advise you don't bring Bareos problems to this forum - and further suggest that you should consider uninstalling Bareos.)
>>>
>>> Back on the block size topic:
>>>
>>> The maximum block size supported by bacula-sd is 2MB.
>>> That is sufficient to get at least 140MB/s write speed on an LTO6 drive for non-compressible data, and upwards of 300MB/s for highly compressible data.
>>>
>>> Device {
>>>   ...
>>>   Maximum Block Size = 2M
>>>   ...
>>> }
>>>
>>> Attempting to increase max block size past 2MB will result in errors. There is no general benefit in doing so in any case. (Tested and verified. I feel that setting larger block sizes should be allowable even if there's no benefit, as it causes apparent errors when the max block size settable in Bacula is smaller than the max allowable block size reported by the tape drives.)
>>>
>>> Make sure you understand the difference between the block size (writing to the tape device) and the network buffer size (communications between -fd, -dir and -sd - and it needs to be set in all three config files if you change anything).
>>>
>>> Device {
>>>   ...
>>>   Maximum Network Buffer Size = 262144
>>>   ...
>>> }
>>>
>>> Increasing the network buffer size from the default 64KB is advisable to achieve higher transfer rates on Gb/s or faster networks, but there is no discernible benefit going past 256KB - even on 10Gb/s networks.
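Collecting the two directives discussed above into one place, a bacula-sd.conf Device resource might look like the following sketch. The device name, archive device and media type are invented placeholders; only the two size directives come from Alan's advice:

```
# Hypothetical bacula-sd.conf fragment -- names are illustrative only.
Device {
  Name = "LTO6-Drive-0"                  # invented name
  Archive Device = /dev/nst0
  Media Type = LTO-6
  Maximum Block Size = 2M                # bacula-sd's upper limit per the thread
  Maximum Network Buffer Size = 262144   # 256KB; must match -fd and -dir configs
}
```

Remember that the network buffer size, unlike the block size, has to be set consistently in all three daemons' configuration files.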
Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
On 22/12/16 17:49, Drew Von Spreecken wrote:
> Thank you Alan for the detailed response.
>
> I was unaware of the GPL violations; this is an issue for me and it will be removed promptly.
>
> The block size I am aiming for was 512KB-1MB, and I have done extensive research on tuning. This should of course also be within the specification of bacula-sd.

500KB to 1MB seems to be about the point where the gain ends.

> Because I am currently writing at a 64KB block size, I get around 70MB/s of throughput on a mix of compressible and non-compressible data. Even changing it to a working 256KB improves my write rate by about 15MB/s. Changing to any other value causes an immediate "Device or resource busy" error.

What parameter are you using, and where in the config file are you using it?

> The setup has worked well for years, until my storage array doubled in size. It is now taking over a week to do a backup.
>
> This is direct-attached storage, so I don't believe the network buffer should have an effect, right?

Direct attached to what? Do you mean your disks, tape drives and bacula are all running on the same system? If so, then you're probably right.

> I'm not sure I completely understand this statement:
>
> "You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok)"
>
> My plan was to overwrite all tapes that may have a smaller block size. It doesn't matter at this time if I temporarily don't have a tape backup, and I have no old archives I am concerned with.

You don't need to do that. An older tape with a smaller block size will read just fine, but don't mix block sizes on any one tape.

> Rewinding the tape, writing an EOF marker, and rewinding again should solve this, correct?
Probably.

> To me it seems like the SAS controller I am connected to, but it just seems so unlikely that it isn't able to pass blocks greater than 256KB, and I have no way of verifying this.

What's your SAS controller chipset? Some of the older ones are downright awful.

> The more I write this, the more I am convincing myself it probably has nothing to do with Bacula (rip-off version) or the configuration. I was just hoping someone has run into something similar before.
>
> Regards,
>
> drewv
>
> On 12/22/2016 11:15 AM, Alan Brown wrote:
>> On 22/12/16 14:31, Drew Von Spreecken wrote:
>>> Greetings,
>>>
>>> I have run into an issue and am looking for input. I have a SAS tape autoloader with an IBM HH-LTO6 drive running the newest firmware. It is currently connected to an Adaptec 78165 HBA/RAID controller via SAS.
>>
>> The maximum block size for IBM LTO6 drives (HH or full height) is 8MB (it's usually 16MB for HP drives).
>>
>>> The issue I am having is that when I attempt to modify the block size to anything over 256KB that BareOS writes to tape, it fails. To simplify troubleshooting I have opted to use a combination of btape and dd to test block-size adjustments.
>>
>> Bareos is not Bacula.
>>
>> (Well, it _IS_ Bacula, which someone has tried to repackage and claim credit for. It is not supported by the Bacula community - and given the legal antics of the Bareos maintainer (including egregious GPL breaches) I would advise you don't bring Bareos problems to this forum - and further suggest that you should consider uninstalling Bareos.)
>>
>> Back on the block size topic:
>>
>> The maximum block size supported by bacula-sd is 2MB. That is sufficient to get at least 140MB/s write speed on an LTO6 drive for non-compressible data, and upwards of 300MB/s for highly compressible data.
>>
>> Device {
>>   ...
>>   Maximum Block Size = 2M
>>   ...
>> }
>>
>> Attempting to increase max block size past 2MB will result in errors.
>> There is no general benefit in doing so in any case. (Tested and verified. I feel that setting larger block sizes should be allowable even if there's no benefit, as it causes apparent errors when the max block size settable in Bacula is smaller than the max allowable block size reported by the tape drives.)
>>
>> Make sure you understand the difference between the block size (writing to the tape device) and the network buffer size (communications between -fd, -dir and -sd - and it needs to be set in all three config files if you change anything).
>>
>> Device {
>>   ...
>>   Maximum Network Buffer Size = 262144
>>   ...
>> }
>>
>> Increasing the network buffer size from the default 64KB is advisable to achieve higher transfer rates on Gb/s or faster networks, but there is no discernible benefit going past 256KB - even on 10Gb/s networks.
>>
>> You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok).
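As a sanity check on the "over a week" figure quoted above: at the rates discussed, wall-clock backup time scales directly with array size. A quick sketch - the 40TB array size is an assumed example, only the 70MB/s rate comes from the thread:

```shell
# Rough backup-window arithmetic (integer shell math).
# ARRAY_TB=40 is a made-up example size; RATE_MBS=70 is the rate Drew
# reports when writing 64KB blocks.
ARRAY_TB=40
RATE_MBS=70
secs=$(( ARRAY_TB * 1024 * 1024 / RATE_MBS ))
echo "$(( secs / 86400 )) days $(( secs % 86400 / 3600 )) hours"
# -> 6 days 22 hours
```

At the 140MB/s Alan cites for 2MB blocks on non-compressible data, the same hypothetical array would take roughly half that, which is why the block-size question matters so much here.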
Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
Andrew,

I will try this tomorrow.

Thank you,

drewv

--
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support from Colfax.
Order your platform today: http://sdm.link/intel
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
Hi Drew,

Try installing IBM's tape driver rather than the default one in the Linux kernel. I have an LTO4 half-height SAS drive and it made a difference in transfer speeds for me.

Andrew

On 12/22/2016 12:49 PM, Drew Von Spreecken wrote:
> Thank you Alan for the detailed response.
>
> I was unaware of the GPL violations; this is an issue for me and it will be removed promptly.
>
> The block size I am aiming for was 512KB-1MB, and I have done extensive research on tuning. This should of course also be within the specification of bacula-sd.
>
> Because I am currently writing at a 64KB block size, I get around 70MB/s of throughput on a mix of compressible and non-compressible data. Even changing it to a working 256KB improves my write rate by about 15MB/s. Changing to any other value causes an immediate "Device or resource busy" error.
>
> The setup has worked well for years, until my storage array doubled in size. It is now taking over a week to do a backup.
>
> This is direct-attached storage, so I don't believe the network buffer should have an effect, right?
>
> I'm not sure I completely understand this statement:
>
> "You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok)"
>
> My plan was to overwrite all tapes that may have a smaller block size. It doesn't matter at this time if I temporarily don't have a tape backup, and I have no old archives I am concerned with.
>
> Rewinding the tape, writing an EOF marker, and rewinding again should solve this, correct?
>
> To me it seems like the SAS controller I am connected to, but it just seems so unlikely that it isn't able to pass blocks greater than 256KB, and I have no way of verifying this.
> The more I write this, the more I am convincing myself it probably has nothing to do with Bacula (rip-off version) or the configuration. I was just hoping someone has run into something similar before.
>
> Regards,
>
> drewv
>
> On 12/22/2016 11:15 AM, Alan Brown wrote:
>> On 22/12/16 14:31, Drew Von Spreecken wrote:
>>> Greetings,
>>>
>>> I have run into an issue and am looking for input. I have a SAS tape autoloader with an IBM HH-LTO6 drive running the newest firmware. It is currently connected to an Adaptec 78165 HBA/RAID controller via SAS.
>>
>> The maximum block size for IBM LTO6 drives (HH or full height) is 8MB (it's usually 16MB for HP drives).
>>
>>> The issue I am having is that when I attempt to modify the block size to anything over 256KB that BareOS writes to tape, it fails. To simplify troubleshooting I have opted to use a combination of btape and dd to test block-size adjustments.
>>
>> Bareos is not Bacula.
>>
>> (Well, it _IS_ Bacula, which someone has tried to repackage and claim credit for. It is not supported by the Bacula community - and given the legal antics of the Bareos maintainer (including egregious GPL breaches) I would advise you don't bring Bareos problems to this forum - and further suggest that you should consider uninstalling Bareos.)
>>
>> Back on the block size topic:
>>
>> The maximum block size supported by bacula-sd is 2MB. That is sufficient to get at least 140MB/s write speed on an LTO6 drive for non-compressible data, and upwards of 300MB/s for highly compressible data.
>>
>> Device {
>>   ...
>>   Maximum Block Size = 2M
>>   ...
>> }
>>
>> Attempting to increase max block size past 2MB will result in errors. There is no general benefit in doing so in any case. (Tested and verified.
>> I feel that setting larger block sizes should be allowable even if there's no benefit, as it causes apparent errors when the max block size settable in Bacula is smaller than the max allowable block size reported by the tape drives.)
>>
>> Make sure you understand the difference between the block size (writing to the tape device) and the network buffer size (communications between -fd, -dir and -sd - and it needs to be set in all three config files if you change anything).
>>
>> Device {
>>   ...
>>   Maximum Network Buffer Size = 262144
>>   ...
>> }
>>
>> Increasing the network buffer size from the default 64KB is advisable to achieve higher transfer rates on Gb/s or faster networks, but there is no discernible benefit going past 256KB - even on 10Gb/s networks.
>>
>> You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok).
>>
>> If changing block size in a working system, mark ALL open volumes as used _before_ attempting any more writes.
>>
>> Alan
>>
>>> Each test I perform, I rewind the tape, write EOF and rewind again. I'm not missing a step here, right?
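The "mark ALL open volumes as used" step quoted above can be done from bconsole. A sketch - the volume name is a made-up placeholder, and the exact prompt flow can vary between Bacula versions:

```
# In bconsole, repeat for each open volume ("Vol-0042" is invented):
*update volume=Vol-0042 volstatus=Used
```

Once a volume's status is Used, Bacula will not append further jobs to it, so no tape ends up with a mix of block sizes.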
Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
Thank you Alan for the detailed response.

I was unaware of the GPL violations; this is an issue for me and it will be removed promptly.

The block size I am aiming for was 512KB-1MB, and I have done extensive research on tuning. This should of course also be within the specification of bacula-sd.

Because I am currently writing at a 64KB block size, I get around 70MB/s of throughput on a mix of compressible and non-compressible data. Even changing it to a working 256KB improves my write rate by about 15MB/s. Changing to any other value causes an immediate "Device or resource busy" error.

The setup has worked well for years, until my storage array doubled in size. It is now taking over a week to do a backup.

This is direct-attached storage, so I don't believe the network buffer should have an effect, right?

I'm not sure I completely understand this statement:

"You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok)"

My plan was to overwrite all tapes that may have a smaller block size. It doesn't matter at this time if I temporarily don't have a tape backup, and I have no old archives I am concerned with.

Rewinding the tape, writing an EOF marker, and rewinding again should solve this, correct?

To me it seems like the SAS controller I am connected to, but it just seems so unlikely that it isn't able to pass blocks greater than 256KB, and I have no way of verifying this.

The more I write this, the more I am convincing myself it probably has nothing to do with Bacula (rip-off version) or the configuration. I was just hoping someone has run into something similar before.

Regards,

drewv

On 12/22/2016 11:15 AM, Alan Brown wrote:
> On 22/12/16 14:31, Drew Von Spreecken wrote:
>> Greetings,
>>
>> I have run into an issue and am looking for input.
>> I have a SAS tape autoloader with an IBM HH-LTO6 drive running the newest firmware. It is currently connected to an Adaptec 78165 HBA/RAID controller via SAS.
>
> The maximum block size for IBM LTO6 drives (HH or full height) is 8MB (it's usually 16MB for HP drives).
>
>> The issue I am having is that when I attempt to modify the block size to anything over 256KB that BareOS writes to tape, it fails. To simplify troubleshooting I have opted to use a combination of btape and dd to test block-size adjustments.
>
> Bareos is not Bacula.
>
> (Well, it _IS_ Bacula, which someone has tried to repackage and claim credit for. It is not supported by the Bacula community - and given the legal antics of the Bareos maintainer (including egregious GPL breaches) I would advise you don't bring Bareos problems to this forum - and further suggest that you should consider uninstalling Bareos.)
>
> Back on the block size topic:
>
> The maximum block size supported by bacula-sd is 2MB. That is sufficient to get at least 140MB/s write speed on an LTO6 drive for non-compressible data, and upwards of 300MB/s for highly compressible data.
>
> Device {
>   ...
>   Maximum Block Size = 2M
>   ...
> }
>
> Attempting to increase max block size past 2MB will result in errors. There is no general benefit in doing so in any case. (Tested and verified. I feel that setting larger block sizes should be allowable even if there's no benefit, as it causes apparent errors when the max block size settable in Bacula is smaller than the max allowable block size reported by the tape drives.)
>
> Make sure you understand the difference between the block size (writing to the tape device) and the network buffer size (communications between -fd, -dir and -sd - and it needs to be set in all three config files if you change anything).
>
> Device {
>   ...
>   Maximum Network Buffer Size = 262144
>   ...
> }
>
> Increasing the network buffer size from the default 64KB is advisable to achieve higher transfer rates on Gb/s or faster networks, but there is no discernible benefit going past 256KB - even on 10Gb/s networks.
>
> You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok).
>
> If changing block size in a working system, mark ALL open volumes as used _before_ attempting any more writes.
>
> Alan
>
>> Each test I perform, I rewind the tape, write EOF and rewind again. I'm not missing a step here, right? Should I be able to write to a tape in this way with a different block size if I've used it at a different (smaller) size before?
>>
>> I am querying the tape drive in the autoloader directly; the autoloader should not be part of the problem.
>>
>> Writing at anything under and at 256KB works fine, but is slow.
>>
>> The output from tapeinfo is:
>>
>> tapeinfo -f /dev/nst0
Re: [Bacula-users] ULTRIUM-HH6 Block Size Limits with Adaptec 78165 or Other Issue?
On 22/12/16 14:31, Drew Von Spreecken wrote:
> Greetings,
>
> I have run into an issue and am looking for input. I have a SAS tape autoloader with an IBM HH-LTO6 drive running the newest firmware. It is currently connected to an Adaptec 78165 HBA/RAID controller via SAS.

The maximum block size for IBM LTO6 drives (HH or full height) is 8MB (it's usually 16MB for HP drives).

> The issue I am having is that when I attempt to modify the block size to anything over 256KB that BareOS writes to tape, it fails. To simplify troubleshooting I have opted to use a combination of btape and dd to test block-size adjustments.

Bareos is not Bacula.

(Well, it _IS_ Bacula, which someone has tried to repackage and claim credit for. It is not supported by the Bacula community - and given the legal antics of the Bareos maintainer (including egregious GPL breaches) I would advise you don't bring Bareos problems to this forum - and further suggest that you should consider uninstalling Bareos.)

Back on the block size topic:

The maximum block size supported by bacula-sd is 2MB. That is sufficient to get at least 140MB/s write speed on an LTO6 drive for non-compressible data, and upwards of 300MB/s for highly compressible data.

Device {
  ...
  Maximum Block Size = 2M
  ...
}

Attempting to increase max block size past 2MB will result in errors. There is no general benefit in doing so in any case. (Tested and verified. I feel that setting larger block sizes should be allowable even if there's no benefit, as it causes apparent errors when the max block size settable in Bacula is smaller than the max allowable block size reported by the tape drives.)

Make sure you understand the difference between the block size (writing to the tape device) and the network buffer size (communications between -fd, -dir and -sd - and it needs to be set in all three config files if you change anything).

Device {
  ...
  Maximum Network Buffer Size = 262144
  ...
}

Increasing the network buffer size from the default 64KB is advisable to achieve higher transfer rates on Gb/s or faster networks, but there is no discernible benefit going past 256KB - even on 10Gb/s networks.

You cannot change maximum block sizes on a single volume (Tape). Doing so is extremely likely to result in an unreadable volume past the point where the block size has changed (reading older tapes with smaller maximum block sizes is still ok).

If changing block size in a working system, mark ALL open volumes as used _before_ attempting any more writes.

Alan

> Each test I perform, I rewind the tape, write EOF and rewind again. I'm not missing a step here, right? Should I be able to write to a tape in this way with a different block size if I've used it at a different (smaller) size before?
>
> I am querying the tape drive in the autoloader directly; the autoloader should not be part of the problem.
>
> Writing at anything under and at 256KB works fine, but is slow.
>
> The output from tapeinfo is:
>
> tapeinfo -f /dev/nst0
> Product Type: Tape Drive
> Vendor ID: 'IBM     '
> Product ID: 'ULTRIUM-HH6     '
> Revision: 'G9P1'
> Attached Changer API: No
> SerialNumber: '10WT077984'
> MinBlock: 1
> MaxBlock: 8388608
> SCSI ID: 0
> SCSI LUN: 0
> Ready: yes
> BufferedMode: yes
> Medium Type: 0x68
> Density Code: 0x5a
> BlockSize: 0
> DataCompEnabled: no
> DataCompCapable: yes
> DataDeCompEnabled: yes
> CompType:
> DeCompType: 0xff
> Block Position: 5
> Partition 0 Remaining Kbytes: -1
> Partition 0 Size in Kbytes: -1
> ActivePartition: 0
> EarlyWarningSize: 0
> NumPartitions: 0
> MaxPartitions: 3
>
> As you can see, the block size limit for the drive itself is ~8MB.
>
> Here is an output from mt:
>
> mt -f /dev/nst0 status
> SCSI 2 tape drive:
> File number=0, block number=0, partition=0.
> Tape block size 0 bytes. Density code 0x5a (no translation).
> Soft error count since last status=0
> General status bits on (4101):
> BOT ONLINE IM_REP_EN
>
> Here is an attempt to write at a 256K block:
>
> dd if=/dev/zero of=/dev/nst0 bs=256k count=1
> 1+0 records in
> 1+0 records out
> 262144 bytes (262 kB) copied, 1.9402 s, 135 kB/s
>
> Here is the failure at 512K:
>
> dd if=/dev/zero of=/dev/nst0 bs=512k count=1
> dd: error writing ‘/dev/nst0’: Device or resource busy
> 1+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 1.56954 s, 0.0 kB/s
>
> It will fail with anything over 256K, even 257K.
>
> There are no errors in my system logs.
>
> I suspect either I have a configuration error here or am missing something simple, OR the Adaptec 78165 RAID controller is limiting the block size before I write to tape. Adaptec support is unable to confirm this. Is there a way I can prove this, or does anyone have any guidance on how to continue troubleshooting this issue?
>
> Thanks!
>
> --drewv
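The dd tests above can be wrapped in a small loop so the accepted/rejected boundary shows up in one pass. A sketch - /dev/nst0 and the rewind step are taken from the thread, and note that writing to a tape device destroys data, so use a scratch tape:

```shell
# Probe which single-block write sizes a device path accepts.
# WARNING: writing to a tape device overwrites data past the current position.
probe_blocks() {
  dev="$1"
  for kb in 64 128 256 257 512 1024 2048; do
    if dd if=/dev/zero of="$dev" bs="${kb}k" count=1 2>/dev/null; then
      echo "${kb}K: ok"
    else
      echo "${kb}K: FAILED"
    fi
  done
}
# probe_blocks /dev/nst0    # rewind between runs: mt -f /dev/nst0 rewind
```

Against a plain file every size succeeds, which makes a useful control; in the setup described above, the expectation is that 257K and up fail, pointing at the HBA path rather than the drive itself (whose tapeinfo MaxBlock is 8388608).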
Re: [Bacula-users] Deleting DIFFERENTIAL Jobs from Volumes
Larrybwoy> Thanks for the replies and good advice. The reason I thought of this backup plan is that what I need to back up are multiple dynamic file systems from about 20 servers. These file systems contain data that is always changing, since they contain various dynamic application files. I do not need to have backups that are old; in case of a disaster, I need to be able to bring back the data that was lost during the past hour at most, so that the people working with the applications only lose 1 hour of work in the worst-case scenario. I have this set up and running pretty well on 2 hosts so far.

Larry,

You would be better off running a filesystem with snapshots and mirrored local disks for that type of data recovery. I would still do Bacula backups, but only daily, so that you have something to send offsite.

I mean, how do you expect to restore a system if it dies completely? Do you need to have them working again right away, with no real downtime except an hour's loss of data? Does the end user just move to a new system?
Re: [Bacula-users] Restoring Files backed up on Windows Client to FreeBSD Client, files all wrong size.
On 2015-12-02 7:25 am, Uwe Schuerkamp wrote:
> On Tue, Dec 01, 2015 at 12:29:53PM -0600, dweimer wrote:
>> I will do some more test restores to the client now that it's up, to see if it's only when restoring to the FreeBSD client. I have verified that restores of the FreeBSD client to itself restore correctly. I'm just curious if someone else has seen this?
>
> Yep, we've seen the exact same issue on a few Windows servers on one bacula instance. At first I thought the tape was corrupt, but restoring a Linux client from the same volume worked fine.
>
> We discovered the issue by pure chance; the client used on the Windows box is 5.2.13, but we've also seen the issue on a client running a licensed Enterprise 7 version.
>
> Cheers, Uwe

The issue doesn't seem to appear on older versions of the Bacula server; I verified that I can restore to one we have running at work on a remote site.

I was reviewing the documentation. In my case I am not worried about permissions (single user on the Windows machine), but it seems to me that turning the portable=yes option on will disable the use of VSS, if I am reading that correctly? I don't care about permissions, but I do want to make sure I use VSS. If that is the case, I will stick with it the way it is.

I only have a single Windows system and would prefer to be able to restore files if needed in the event that it goes down, without waiting to get it back online. Not that I have had much downtime on it; recent versions of Windows have definitely been more stable. The last rebuild was simply done to upgrade from a 120GB to a 480GB SSD for the system disk, and I put it off until the 1511 update came out so I could reinstall straight to Windows 10 instead of installing 8 and upgrading to Windows 10.

--
Thanks,
   Dean E. Weimer
   http://www.dweimer.net/
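For reference, the portable option discussed above lives in the FileSet's Options block on the Director side. A hedged sketch - the FileSet name and path are invented, and only the portable directive itself comes from the discussion:

```
# bacula-dir.conf fragment -- illustrative names only.
FileSet {
  Name = "WindowsData"          # invented name
  Include {
    Options {
      signature = MD5
      portable = no             # default; keeps the native Win32 backup format
    }
    File = "C:/Users"           # invented path
  }
}
```

With portable=yes the data is written in a portable (non-Win32) format that other platforms can restore, which is the trade-off Dean is weighing against keeping the Windows-native stream handling.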
Re: [Bacula-users] maximum client file size
On 05/21/2015 10:23 am, Devin Reade wrote:
> --On Thursday, May 21, 2015 09:06:41 AM +0200 Kern Sibbald k...@sibbald.com wrote:
>> Bacula does keep 64 bit addresses.
>
> Excellent. Not surprisingly, I'm not dealing with file sizes near 2^63, but I *do* need to back up files that are in the 2^39 range (from filesystems that are in the 2^46 range onto virtual cartridges no larger than 2^43). No, these aren't database files; they're huge chunks of write-once data for which we need archival copies. I'm still debating whether Bacula is the right tool for the job in this case. Network-based copies to geographically different locations are a non-starter, so it's got to be a variant of sneaker-net.
>
>> On the SD output end, if you do not limit your Volume size, there will surely be some problems at 2^63. Of course, who would ever want to write such a large volume?
>
> On that note, I've traditionally gone with volume sizes in the ~500MB (2^29) range (for disk stores), but in this case that can push the volume count in the catalog to more than 512k entries once a minimum number of offsite copies have been made. Have you seen installations with that many volumes? If so, are there any known issues other than catalog tuning?
>
> I'm thinking that a larger volume size (and consequently a smaller volume count) could be warranted (at least for the full pool), but I'm wondering whether many installations have pushed volume sizes past 2GB or 4GB, and if there have been any issues in doing so. My gut is saying to go with 2GB volume sizes, but I'm curious.
>
> (Considering that my first hard drive cost me $4000 and was 40MB, all the above just sounds crazy.)
>
> Devin

I have three systems, two of which are using disk backup and then copy to tape. Both of those are running on CentOS with 25G volume sizes and 100 volumes in the disk pool. Going on two years of service for both machines, and I haven't had an issue yet. The third system is using 46G file volumes and has been running for 4 years without a problem.
This one just does disk only. I wouldn't worry about Bacula's ability, but more about the capabilities of the file system and operating system it's running on.

My volume sizes were chosen to make sure I could handle my desired retention time: with too large a volume, the last job may take too long to expire, preventing the volume from being recycled before I run out of other volumes.

--
Thanks,
   Dean E. Weimer
   http://www.dweimer.net/
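Devin's "more than 512k entries" figure above is easy to reproduce with integer arithmetic. A sketch with assumed numbers (a 2^46-byte, roughly 64TB, store in ~500MB volumes; the 4 copies are an assumed onsite-plus-offsite count extrapolated from his message):

```shell
# Catalog volume-count arithmetic: pool size / volume size * copies.
pool_tb=64      # ~2^46 bytes, per Devin's filesystem range
vol_mb=500      # ~2^29-byte volumes
copies=4        # assumed number of onsite + offsite copies
echo "$(( pool_tb * 1024 * 1024 * copies / vol_mb )) volumes"
# -> 536870 volumes
```

Rerunning with vol_mb=2048 (the 2GB size Devin's gut suggests) drops the same pool to about 131k catalog entries, which illustrates why the larger volume size eases catalog pressure.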
Re: [Bacula-users] ****SPAM(7.5)**** Re: Why bacula does not mark job as failed after Write error on tape drive?
On Fri, 06 Dec 2013 11:14:25 +0100 Kern Sibbald k...@sibbald.com wrote:

Hello, Kern! Thank you for the explanation. And - yes, I do know the difference between "may not" and "not" =) It is a bit strange - at the moment of the error Bacula knows which job is running, but the only warning is in the log file. Of course the log file can be parsed periodically, but why not mark the job with a warning flag, since at this point we cannot be sure whether the job is valid or not.

Hello,

The Bacula behavior is indeed correct (as programmed). This is an error, but it is not a fatal error. Note that the error message is in the conditional tense (maybe not obvious for non-native speakers), which means that the tape may not be readable, and that the only way to know for sure is to read the tape. This can happen if the end of the tape has been reached and the drive does not permit writing the final EOF. As mentioned above, the only way to know for sure is to try to read the last files written to the tape.

Given the uncertainty, it is not really justified to mark everything as failed, because it is extremely unlikely that the whole tape is bad or unreadable. You may correctly have a different opinion, but in that case you will either need to change the source code or, after the fact, mark the tape as being unusable using bconsole.

Best regards, Kern

On 12/05/2013 10:34 AM, Andrey Lyarskiy wrote:

Greetings! Recently I have noticed that all job statuses are fine, independently of write errors on the tape drive.

bacula.log:
05-Dec 07:27 backup-sd-office JobId 66089: Error: block.c:577 Write error at 280:554 on device Drive-0 (/dev/nst0). ERR=Input/output error.
05-Dec 07:27 backup-sd-office JobId 66089: Error: Error writing final EOF to tape. This Volume may not be readable. dev.c:1557 ioctl MTWEOF error on Drive-0 (/dev/nst0). ERR=Input/output error.
05-Dec 07:27 backup-sd-office JobId 66089: End of medium on Volume 000842L5 Bytes=1,296,312,966,144 Blocks=20,094,136 at 05-Dec-2013 07:27.
TapeLibrary - IBM 3573-TL with LTO5 tapes. Why did Bacula not mark job #66089 as failed? IMHO after such an error all the jobs on this tape should be marked as possibly-unreadable or something like that.
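Following Kern's suggestion to mark the tape as unusable after the fact, this can be done from bconsole with the `update volume` command. A sketch, using the volume name from the log excerpt above; the exact prompt flow and accepted keywords can vary between Bacula versions, so treat this as illustrative:

```
# In bconsole: take the suspect volume out of circulation.
* update volume=000842L5 volstatus=Error
# "Used" (or "Full" with Recycle=no) would also prevent further writes;
# "Error" makes the operator intervention visible in "list volumes".
```

After that, restoring and verifying the last jobs written to the volume is the only way to confirm whether the data before the failed EOF is readable, as Kern notes.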
Re: [Bacula-users] [SPAM] Re: Using HP Storageworks 1/8 G2 with Bacula
On Wednesday, 16 June 2010 07:48:22 +1200, richard wrote:

Alan Brown wrote: A standalone single-port SAS card isn't very expensive. It's worth trying one to see if this is a driver issue. I can vouch for this controller, running an LTO4 drive: http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=ansubtype=caappname=redbookshtmlfid=897/ENUS106-646

Could the P212 controller not work if I use it in a PC instead of a Proliant server? I ask because this controller is PCI Express, and I suppose that this type of interface should be standard. Also, this card came with a large adapter (to be fitted in a PC) as well as a short adapter (to be fitted in a server). Thanks for your reply.

Regards, Daniel

--
Fingerprint: BFB3 08D6 B4D1 31B2 72B9 29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Lenny - Linux user #188.598

signature.asc Description: Digital signature
Re: [Bacula-users] [SPAM] Problem installing bacula 5.0.0
Simplify the configure script. You have everything and the kitchen sink (passwords) specified.

Mehma

===

2010/2/7 List Man list@bluejeantime.com

I have tried several times to install bacula 5.0.0 on a brand new server and I keep getting this linking error for static-bacula-sd. I don't know what else to do. I have searched and read the release notes with no luck. See below:

Linking bacula-sd ...
/usr/bin/g++ -L../lib -o bacula-sd stored.o ansi_label.o vtape.o autochanger.o acquire.o append.o askdir.o authenticate.o block.o butil.o dev.o device.o dircmd.o dvd.o ebcdic.o fd_cmds.o job.o label.o lock.o mac.o match_bsr.o mount.o parse_bsr.o pythonsd.o read.o read_record.o record.o reserve.o scan.o sd_plugins.o spool.o status.o stored_conf.o vol_mgr.o wait.o -lacl -lz \
   -lbacpy -lbaccfg -lbac -lm -lpthread -ldl \
   -lssl -lcrypto
/usr/bin/g++ -static -L../lib -o static-bacula-sd stored.o ansi_label.o vtape.o autochanger.o acquire.o append.o askdir.o authenticate.o block.o butil.o dev.o device.o dircmd.o dvd.o ebcdic.o fd_cmds.o job.o label.o lock.o mac.o match_bsr.o mount.o parse_bsr.o pythonsd.o read.o read_record.o record.o reserve.o scan.o sd_plugins.o spool.o status.o stored_conf.o vol_mgr.o wait.o -lacl -lz \
   -lbacpy -lbaccfg -lbac -lm -lpthread -ldl \
   -lssl -lcrypto
../lib/libbac.a(plugins.o): In function `load_plugins(void*, void*, char const*, char const*, bool (*)(Plugin*))':
/usr/src/bacula-5.0.0/src/lib/plugins.c:140: warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(priv.o): In function `drop(char*, char*, bool)':
/usr/src/bacula-5.0.0/src/lib/priv.c:92: warning: Using 'initgroups' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(guid_to_name.o): In function `get_gidname':
/usr/src/bacula-5.0.0/src/lib/guid_to_name.c:122: warning: Using 'getgrgid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(priv.o): In function `drop(char*, char*, bool)':
/usr/src/bacula-5.0.0/src/lib/priv.c:85: warning: Using 'getgrnam' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/src/bacula-5.0.0/src/lib/priv.c:66: warning: Using 'getpwnam' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(guid_to_name.o): In function `get_uidname':
/usr/src/bacula-5.0.0/src/lib/guid_to_name.c:109: warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(bnet.o): In function `resolv_host':
/usr/src/bacula-5.0.0/src/lib/bnet.c:424: warning: Using 'gethostbyname2' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
../lib/libbac.a(address_conf.o): In function `add_address':
/usr/src/bacula-5.0.0/src/lib/address_conf.c:310: warning: Using 'getservbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_expand_block':
(.text+0x12c): undefined reference to `inflate'
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_compress_block':
(.text+0x1af): undefined reference to `deflate'
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_finish':
(.text+0x1fc): undefined reference to `inflateEnd'
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_finish':
(.text+0x207): undefined reference to `deflateEnd'
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_init':
(.text+0x2b0): undefined reference to `inflateInit_'
/usr/lib/gcc/i486-linux-gnu/4.3.2/../../../../lib/libcrypto.a(c_zlib.o): In function `zlib_stateful_init':
(.text+0x320): undefined reference to `deflateInit_'
collect2: ld returned 1 exit status
make[1]: *** [static-bacula-sd] Error 1
make[1]: Leaving directory `/usr/src/bacula-5.0.0/src/stored'

The dir and fd linked just fine. I used the following script to install it. I am not a new user and this server is being used as an upgrade to an existing server.

./configure --with-working-dir=/var/bacula --sbindir=/usr/sbin --sysconfdir=/etc/bacula --with-mysql=/usr/local/mysql --with-dump-email=X --with-job-email=XXX --with-smtp-host=localhost --with-baseport=9101 --with-dir-password=XXX --with-fd-password=XX --with-dir-user=XXX --with-dir-group=XXX --with-sd-password=
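A hedged diagnosis, not stated in the thread: the `undefined reference to 'inflate'` errors come from the zlib-compression support compiled into the static `libcrypto.a`, and a traditional static linker resolves archive members left to right, so `-lz` appearing *before* `-lcrypto` on the link line leaves those symbols unresolved. An illustrative fix (the object file list is elided here for brevity) is to repeat `-lz` after `-lcrypto`:

```
# Illustrative only -- repeat -lz after -lcrypto so the one-pass static
# linker can resolve libcrypto.a's zlib references:
/usr/bin/g++ -static -L../lib -o static-bacula-sd <object files...> -lacl -lz \
    -lbacpy -lbaccfg -lbac -lm -lpthread -ldl \
    -lssl -lcrypto -lz
```

Alternatively, since the dynamically linked bacula-sd linked fine, simply not building the static binaries would sidestep the issue.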
Re: [Bacula-users] [SPAM] Re: [SPAM] Re: [SPAM] Bacula TimeMachine type SOHO config?
Timo Neuvonen timo-n...@tee-en.net wrote in message news:hfjoo9$ks...@ger.gmane.org...

Simon J Mudd sjm...@pobox.com wrote in message news:m3fx7mkdv3@mad06.wl0.org...

timo-n...@tee-en.net (Timo Neuvonen) writes:

Simon J Mudd sjm...@pobox.com wrote in message ...

Yes, but that's what I'm trying to avoid. I realise that I MUST have sufficient space for at least 2 full backups plus some extra for incrementals, but I don't want to worry about the details. Therefore I want to configure

You said you don't want to worry about the details. However, one such very strong detail is the schedule you have already specified: it says to run a full backup once a month. The required retention time is closely related to this, and needs to be specified too.

Again, I think you're missing the point. You are right that in a business environment you do want to decide to do X full backups every certain period of time, X incrementals, etc., and then you need to do some calculations to work out how much disk space you need for this. This value of course changes, and you may later need to add more storage or tapes or whatever to accommodate these changes.

Think of the normal HOME user who may have an interest in Bacula to back up data. He has a Unix PC with disks occupying, say, 100GB of space. So he buys himself a 1TB external USB disk and wants to use that for backups. If it's dedicated, he'll want to use ALL the space for backups and keep as much as he can. So he's likely to want to do perhaps a single weekly or monthly backup followed by incrementals in between. Exactly how many backups he keeps is relatively unimportant. And for this type of scenario Bacula is tricky (from what I can see) to set up. I've had multiple problems (due to misconfiguration) of Bacula not labelling new disk volumes in the pool, and also of not removing the oldest backups when the disk starts to fill up.
I'm not a backup administrator and have plenty of other distractions which prevent me from properly working out how to get Bacula running properly. That's why I suggested that a recipe for the type of configuration I describe might be extremely useful.

Since you haven't specified the volume retention, Bacula uses its internal default, which is one year (365 days). You have to specify a shorter volume retention time if you want to be able to recycle the volumes sooner.

But I don't want retention to depend on time, but on disk usage.

Bacula can use all the disk space you allow it to use; that is controlled with the volume size and maximum number of volumes, which you had set to reasonable values in the configuration. The volume retention time is just a minimum time limit; if your disk space allows it, the old data in un-recycled volumes will still be available there after a much longer time (in theory, forever). I think this is what you wanted, so I can't see any actual problem there. But if you absolutely don't want to change the default volume retention time to something that would fit your application, there isn't much else to do, I think. Explicitly specifying the volume retention time is the only way to make Bacula recycle the volumes in less than a year, since 365 days is Bacula's internal default.

An update / a correction to the statement above: setting Purge Oldest Volume = yes in the pool specification will override any retention period to recycle a volume when more space is needed. In general, it is a very dangerous option, but I think it is exactly what you are looking for. Still, I would vote for relying on a decently set volume retention, and forgetting the above option.

...

Btw, you can use the "list media" command to see the status of the existing volumes.

so while you can define how many volumes to have and their sizes you can't get bacula to purge based on these values? ... the pool to auto purge if it fills up.
New full or incremental backups will create new volumes as needed, and the older ones will get purged.

Actually, Bacula will recycle the existing volumes, that is, discard the old data in the volume and use the same recycled volume again. So the volume name won't change (unless this is possible due to some very new Bacula feature).

That's fine. In the end I don't care how the volumes are labelled, or whether new ones are created or existing ones are reused.

Within reasonable limits (a reasonable amount of disk space available), this should be possible with Bacula.

So it sounds like part of my problem has been misunderstanding the precise terms used in Bacula. It sounds like I don't want to purge the disk volumes, but to recycle them. So how do I configure this: a fixed number of disk volumes of a predetermined size which will be recycled when no more space is left? Ideally the recycling in this simple case would be based on a FIFO-type principle.

If you don't want to have _any_ minimum time limit for volume
Re: [Bacula-users] [SPAM] Re: [SPAM] Re: [SPAM] Bacula TimeMachine type SOHO config?
Simon J Mudd sjm...@pobox.com wrote in message news:m3fx7mkdv3@mad06.wl0.org...

timo-n...@tee-en.net (Timo Neuvonen) writes:

Simon J Mudd sjm...@pobox.com wrote in message ...

Yes, but that's what I'm trying to avoid. I realise that I MUST have sufficient space for at least 2 full backups plus some extra for incrementals, but I don't want to worry about the details. Therefore I want to configure

You said you don't want to worry about the details. However, one such very strong detail is the schedule you have already specified: it says to run a full backup once a month. The required retention time is closely related to this, and needs to be specified too.

Again, I think you're missing the point. You are right that in a business environment you do want to decide to do X full backups every certain period of time, X incrementals, etc., and then you need to do some calculations to work out how much disk space you need for this. This value of course changes, and you may later need to add more storage or tapes or whatever to accommodate these changes.

Think of the normal HOME user who may have an interest in Bacula to back up data. He has a Unix PC with disks occupying, say, 100GB of space. So he buys himself a 1TB external USB disk and wants to use that for backups. If it's dedicated, he'll want to use ALL the space for backups and keep as much as he can. So he's likely to want to do perhaps a single weekly or monthly backup followed by incrementals in between. Exactly how many backups he keeps is relatively unimportant. And for this type of scenario Bacula is tricky (from what I can see) to set up. I've had multiple problems (due to misconfiguration) of Bacula not labelling new disk volumes in the pool, and also of not removing the oldest backups when the disk starts to fill up. I'm not a backup administrator and have plenty of other distractions which prevent me from properly working out how to get Bacula running properly.
That's why I suggested that a recipe for the type of configuration I describe might be extremely useful.

Since you haven't specified the volume retention, Bacula uses its internal default, which is one year (365 days). You have to specify a shorter volume retention time if you want to be able to recycle the volumes sooner.

But I don't want retention to depend on time, but on disk usage.

Bacula can use all the disk space you allow it to use; that is controlled with the volume size and maximum number of volumes, which you had set to reasonable values in the configuration. The volume retention time is just a minimum time limit; if your disk space allows it, the old data in un-recycled volumes will still be available there after a much longer time (in theory, forever). I think this is what you wanted, so I can't see any actual problem there. But if you absolutely don't want to change the default volume retention time to something that would fit your application, there isn't much else to do, I think. Explicitly specifying the volume retention time is the only way to make Bacula recycle the volumes in less than a year, since 365 days is Bacula's internal default.

...

Btw, you can use the "list media" command to see the status of the existing volumes.

so while you can define how many volumes to have and their sizes you can't get bacula to purge based on these values? ... the pool to auto purge if it fills up.

New full or incremental backups will create new volumes as needed, and the older ones will get purged.

Actually, Bacula will recycle the existing volumes, that is, discard the old data in the volume and use the same recycled volume again. So the volume name won't change (unless this is possible due to some very new Bacula feature).

That's fine. In the end I don't care how the volumes are labelled, or whether new ones are created or existing ones are reused.

Within reasonable limits (a reasonable amount of disk space available), this should be possible with Bacula.
So it sounds like part of my problem has been misunderstanding the precise terms used in Bacula. It sounds like I don't want to purge the disk volumes, but to recycle them. So how do I configure this: a fixed number of disk volumes of a predetermined size which will be recycled when no more space is left? Ideally the recycling in this simple case would be based on a FIFO-type principle.

If you don't want to have _any_ minimum time limit for volume retention, just set it to one second, which is probably the shortest value you can specify. In theory, this can result in a situation where, if one full backup consumes more space than is designated for backup use, recycling of the first volume used for that backup would happen before that backup is finished. But if you prefer this, instead of seeing an error message in this obvious case of malfunctioning, go for it. Seriously, a more reasonable value might be one
Re: [Bacula-users] [SPAM] Bacula TimeMachine type SOHO config?
Simon J Mudd sjm...@pobox.com wrote in message news:20091206082523.ga...@mad06.wl0.org...

Hello, I've been using Bacula for some time for home use, and I'm trying to get a working TimeMachine-type configuration. That is, I'd like to configure Bacula to store to an external hard disk using a number of fixed-size files, occupying up to a certain amount of disk space - in my case 100 x 2GB files. I'd like to auto-label new files and purge old ones automatically to make space if needed. This sounds like a simple recipe which is appropriate for a large number of SOHO-type situations. I know Bacula can do more, but to minimise intervention this looks nice. However, I don't quite get this to work. I've had issues with getting the auto-label to always work, and also the auto-expire. I wonder if anyone can look at my configuration or offer an alternative to do this?

I guess Time Machine is something Apple-like, and I don't know anything about it but the name. But if what you described is the essential part, that is, the disk usage strategy, it shouldn't be a problem, though minor differences might exist.

I didn't notice Volume Retention specified in your Pool config. It controls how soon after the last write the volumes can be recycled. Since you make a full backup once a month, this should be set to a minimum of more than one month (eg. 40 days) to make sure you'll always have at least one (I'd seriously recommend more, at least two) full backup(s) available. After this period, if necessary, Bacula _can_ recycle the existing volumes. However, the actual recycle won't happen until it is really needed to free the previously used volumes, but it can't happen before this time limit has expired.

Also remember that modifying the pool parameters in the conf does not automatically apply to the existing volumes, only to the new ones created thereafter. To make the existing volumes obey the new values, you'll need to use the "update pool" / "update volumes from pool" commands from the Bacula console.
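Timo's last point, applying changed Pool settings to existing volumes, can be sketched in bconsole as follows. This is the interactive form of the `update` command he refers to; the menu wording and numbering vary between Bacula versions, so treat the transcript as a sketch rather than exact output:

```
# After editing the Pool resource in bacula-dir.conf and reloading the Director:
* update
    Update choice:
       ...: Pool from resource        # copy the conf values into the catalog Pool record
* update
    Update choice:
       ...: Volume parameters
          ...: All Volumes from Pool  # push the pool defaults onto existing volumes
```

Without this step, only volumes labelled after the conf change pick up the new retention.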
Regards, Timo

So currently I'm getting errors like this (taken a few days ago):

01-Dec 06:05 mad06-sd JobId 1869: Job mad06-job.2009-11-29_23.05.00_02 waiting. Cannot find any appendable volumes. Please use the "label" command to create a new Volume for:
    Storage:    FileStorage1 (/bacula/2)
    Pool:       DISK_POOL
    Media type: File

*status dir
mad06-dir Version: 3.0.1 (30 April 2009) x86_64-redhat-linux-gnu redhat
Daemon started 29-Nov-09 09:53, 0 Jobs run since started.
 Heap: heap=249,856 smbytes=99,692 max_bytes=125,113 bufs=378 max_bufs=379

Scheduled Jobs:
Level        Type    Pri  Scheduled        Name           Volume
===
Incremental  Backup  10   01-Dec-09 23:05  mad06-job      *unknown*
Full         Backup  11   01-Dec-09 23:10  BackupCatalog  *unknown*
Incremental  Backup  10   02-Dec-09 11:05  mad06-job      *unknown*

Running Jobs:
Console connected at 01-Dec-09 19:50
 JobId Level   Name                                  Status
==
  1869 Differe mad06-job.2009-11-29_23.05.00_02      is waiting for an appendable Volume
  1870 Full    BackupCatalog.2009-11-29_23.10.00_03  is waiting execution
  1871 Increme mad06-job.2009-11-30_11.05.00_04      is waiting execution
  1872 Increme mad06-job.2009-11-30_23.05.00_05      is waiting execution
  1873 Full    BackupCatalog.2009-11-30_23.10.00_06  is waiting execution
  1874 Increme mad06-job.2009-12-01_11.05.00_07      is waiting execution

Terminated Jobs:
 JobId  Level  Files  Bytes  Status  Finished         Name
  1857   Full      0      0  Error   24-Nov-09 08:35  BackupCatalog
  1848   Incr      0      0  Error   24-Nov-09 08:35  mad06-job
  1854   Full      0      0  Error   24-Nov-09 08:35  BackupCatalog
  1853   Diff      0      0  Error   24-Nov-09 08:35  mad06-job
  1852   Full      0      0  Error   24-Nov-09 08:35  BackupCatalog
  1850   Incr      0      0  Error   24-Nov-09 08:35  mad06-job
  1849   Full      0      0  Error   24-Nov-09 08:35  BackupCatalog
  1851   Incr      0      0  Error   24-Nov-09 08:35  mad06-job
  1855   Incr      0      0  Error   24-Nov-09 08:35  mad06-job
  1856   Incr      0      0  Error   24-Nov-09 08:35  mad06-job
*

# ls -l /bacula/2    ### this is where the disks are located, on an external NAS mounted by NFS.
total 28705668
-rw-r-+ 1 bacula disk  372483112 Nov 13 11:10 VOL-0372
-rw-r-+ 1 bacula disk  386571201 Nov 13 23:11 VOL-0373
-rw-r-+ 1 bacula disk  293973535 Nov 13 23:11 VOL-0374
-rw-r-+ 1 bacula disk 2147475063 Nov 13 23:49 VOL-0375
-rw-r-+ 1 bacula disk 2147475420 Nov 14 00:02 VOL-0376
-rw-r-+ 1 bacula
Re: [Bacula-users] [SPAM] Re: [SPAM] Bacula TimeMachine type SOHO config?
Simon J Mudd sjm...@pobox.com wrote in message news:m3aaxw15sz@mad06.wl0.org...

timo-n...@tee-en.net (Timo Neuvonen) writes:

Simon J Mudd sjm...@pobox.com wrote in message news:20091206082523.ga...@mad06.wl0.org...

Hello, I've been using Bacula for some time for home use, and I'm trying to get a working TimeMachine-type configuration. That is, I'd like to configure Bacula to store to an external hard disk using a number of fixed-size files, occupying up to a certain amount of disk space - in my case 100 x 2GB files. I'd like to auto-label new files and purge old ones automatically to make space if needed. This sounds like a simple recipe which is appropriate for a large number of SOHO-type situations. I know Bacula can do more, but to minimise intervention this looks nice. However, I don't quite get this to work. I've had issues with getting the auto-label to always work, and also the auto-expire. I wonder if anyone can look at my configuration or offer an alternative to do this?

I guess Time Machine is something Apple-like, and I don't know anything about it but the name. But if what you described is the essential part, that is, the disk usage strategy, it shouldn't be a problem, though minor differences might exist.

Yes, sorry. It's basically a very simple application which comes with Mac OS X. You plug in an external disk and it will back up as much as possible, filling up the disk. The implementation is quite different, but for most people that's all they want: to designate a location/size for backups, and then to keep as many backups as possible in that location. When the disk space fills up, the oldest backups are automatically thrown away.

I didn't notice Volume Retention specified in your Pool config. It controls how soon after the last write the volumes can be recycled. Since you make a full backup once a month, this should be set to a minimum of more than one month (eg.
40 days) to make sure you'll always have at least one (I'd seriously recommend more, at least two) full backup(s) available. After this period, if necessary, Bacula _can_ recycle the existing volumes. However, the actual recycle won't happen until it is really needed to free the previously used volumes, but it can't happen before this time limit has expired.

Yes, but that's what I'm trying to avoid. I realise that I MUST have sufficient space for at least 2 full backups plus some extra for incrementals, but I don't want to worry about the details. Therefore I want to configure

You said you don't want to worry about the details. However, one such very strong detail is the schedule you have already specified: it says to run a full backup once a month. The required retention time is closely related to this, and needs to be specified too. Since you haven't specified the volume retention, Bacula uses its internal default, which is one year, 365 days. You have to specify a shorter volume retention time if you want to be able to recycle the volumes sooner.

I don't see why specifying this would be a thing to avoid. At least it would be a step closer to the functionality you want, unless I have seriously misunderstood something. Now it expects that you specifically want to forbid recycling volumes under one year of age, and therefore keeps asking for more volumes. Just try setting it to 40 days, for example, update (using the console update command) the existing volumes to obey this value, and give it a try. I guess then you would be very close to what you want: the oldest volumes will get recycled one by one when new storage space is needed, as long as each volume is at least 40 days (or whatever you specify) old.

If you cannot use that long a time (40 days), you could change your schedule to run a full backup eg. weekly, and set the volume retention time to 2 weeks, for example. This way you might need less space for the non-full backup storage.
When playing with the volume retention, be careful not to specify it too close to the full-backup-cycle period. Just as an example, running a full weekly and specifying a volume retention of 8 days might sound like a working solution. But if something goes wrong with the full backup, there has to be enough time to solve the problem before the previous full expires - otherwise it could get recycled before you have re-run the failed one, and in this case a system crash with no full backup at all might be a nightmare. Currently, there is no way to specify the retention time in terms of successful full backups, but I recall having seen this kind of feature request.

Btw, you can use the "list media" command to see the status of the existing volumes. There you can see the current volume retention times, and verify that they get changed (after the update command) according to the conf change. The times shown in the list are always in seconds, independently of the units you use to specify them.
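Timo's note that "list media" always reports retention in seconds, regardless of the units used in the conf, is easy to sanity-check with plain shell arithmetic (nothing Bacula-specific here); the 40-day value from his example works out as:

```shell
# 40 days expressed in seconds, as it would appear in "list media" output:
echo $(( 40 * 24 * 60 * 60 ))   # prints 3456000
```

So a volume showing VolRetention=3456000 in the catalog corresponds to "Volume Retention = 40 days" in the Pool resource.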
Re: [Bacula-users] ***SPAM*** Re: Dell TL2000 library control
On Thursday, 11 June 2009 at 13:04 +0200, Arno Lehmann wrote:

Which user does the SD run as?

bacula-sd runs as the 'bacula' user:

burp:/etc/bacula# ps x aux|grep sd
bacula 19313 0.0 0.0 89768 2292 ? Ssl 06:17 0:00 /usr/sbin/bacula-sd -c /etc/bacula/bacula-sd.conf -u bacula -g tape

Also do a 'ls -l /dev/nst0 /dev/sg4' to see if the user the SD runs as has permissions on those devices. (udev rules - a fun thing :-)

Looks OK to me: the owner is root:tape, and bacula is a tape group member:

burp:/etc/bacula# ll /dev/sg* /dev/nst*
crw-rw 1 root tape 9, 128 Jun 10 11:46 /dev/nst0
crw-rw 1 root tape 9, 224 Jun 10 11:46 /dev/nst0a
crw-rw 1 root tape 9, 160 Jun 10 11:46 /dev/nst0l
crw-rw 1 root tape 9, 192 Jun 10 11:46 /dev/nst0m
crw-rw 1 root tape 9, 129 Jun 10 11:46 /dev/nst1
crw-rw 1 root tape 9, 225 Jun 10 11:46 /dev/nst1a
crw-rw 1 root tape 9, 161 Jun 10 11:46 /dev/nst1l
crw-rw 1 root tape 9, 193 Jun 10 11:46 /dev/nst1m
crw-rw 1 root root 21, 0 Jun 10 11:46 /dev/sg0
crw-rw 1 root root 21, 1 Jun 10 11:46 /dev/sg1
crw-rw 1 root root 21, 10 Jun 10 11:46 /dev/sg10
crw-rw 1 root root 21, 11 Jun 10 11:46 /dev/sg11
crw-rw 1 root root 21, 12 Jun 10 11:46 /dev/sg12
crw-rw 1 root root 21, 13 Jun 10 11:46 /dev/sg13
crw-rw 1 root root 21, 14 Jun 10 11:46 /dev/sg14
crw-rw 1 root root 21, 2 Jun 10 11:46 /dev/sg2
crw-rw 1 root tape 21, 3 Jun 10 11:46 /dev/sg3
crw-rw 1 root tape 21, 4 Jun 10 11:46 /dev/sg4
crw-rw 1 root tape 21, 5 Jun 10 11:46 /dev/sg5
crw-rw 1 root tape 21, 6 Jun 10 11:46 /dev/sg6
crw-rw 1 root root 21, 7 Jun 10 11:46 /dev/sg7
crw-rw 1 root root 21, 8 Jun 10 11:46 /dev/sg8
crw-rw 1 root root 21, 9 Jun 10 11:46 /dev/sg9

burp:/etc/bacula# grep bacula /etc/group
tape:x:26:bacula
bacula:x:108:

Anyway, you put me on the right track:

burp:/etc/bacula# sudo -u bacula /etc/bacula/scripts/mtx-changer /dev/sg4 list 10 /dev/nst0 0
touch: cannot touch `/var/lib/bacula/mtx.log': Permission denied
/etc/bacula/scripts/mtx-changer: line 62: /var/lib/bacula/mtx.log: Permission denied
/etc/bacula/scripts/mtx-changer: line 62: /var/lib/bacula/mtx.log: Permission denied
1:BK0001L4
2:BK0004L4
5:BK0002L4
9:BK0003L4
12:CL0001L4

As I had launched the script as root earlier, it created the log file owned by root, so the script could not write to it when launched as bacula! Now it works well :)

Thanks!

--
Eric Belhomme - EVE SA - www.eve-team.com
2bis, voie la cardon, 91120 PALAISEAU, FRANCE
Tel: +33 1 64 53 27 30, Fax: +33 1 64 53 27 40

smime.p7s Description: S/MIME cryptographic signature
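The root cause here, a root-owned log file blocking the bacula user, is a common aftermath of testing changer scripts as root. A sketch of the cleanup, using the paths and the bacula:tape user/group from the thread (run as root; adjust if your SD runs under different credentials):

```
# Give the log file back to the user the SD runs as, so mtx-changer
# can append to it when invoked by bacula-sd:
chown bacula:tape /var/lib/bacula/mtx.log
chmod 664 /var/lib/bacula/mtx.log

# Re-test under the bacula account, as Eric did:
sudo -u bacula /etc/bacula/scripts/mtx-changer /dev/sg4 list 10 /dev/nst0 0
```

Testing any autochanger script under the daemon's own account, rather than as root, catches this class of problem before a real job hits it.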
Re: [Bacula-users] SPAM-LOW: Re: incremental backup of hard links
On Wed, 16 Jan 2008, Ingo Jochim wrote:

Hi Dan, a hard link looks like the original file, so it should always have the same date. Below you will see that the hard link (second line) has the same date as the original file. I created the hard link today, but it also has the date from November 23rd.

What about the Mtime?
Re: [Bacula-users] SPAM-LOW: Re: incremental backup of hard links
Hi Dan, a hard link looks like the original file, so it should always have the same date. Below you will see that the hard link (second line) has the same date as the original file. I created the hard link today, but it also has the date from November 23rd.

Ingo

[EMAIL PROTECTED] ~]# ls -l file.JPG
-rw-rw-r-- 2 root root 314099 23. Nov 15:58 file.JPG
[EMAIL PROTECTED] ~]# ls -l backup/file.JPG
-rw-rw-r-- 2 mother mother 314099 23. Nov 15:58 backup/file.JPG

Dan Langille wrote:

Ingo Jochim wrote: I create hard links for the files I want to back up, so that while I do the backup the files can get deleted by someone else. The problem is that on an incremental backup I get a backup of all the files again, like I got on a full backup. I create all the hard links right before the backup. A hard link points to the same file with the same date and so on. Why does Bacula back up all the files again?

Because the dates on the hard links are newer than the previous backup.

--
Ingo Jochim
QuerySoft GmbH
Blasewitzer Str. 41
01307 Dresden
Tel.: 0351/ 450 4205
Fax: 0351/ 450 4200
Mail: [EMAIL PROTECTED]
Web: www.QuerySoft.de
Geschäftsführer: Michael Jochim
Amtsgericht Dresden, HRB 24167
Re: [Bacula-users] SPAM-LOW: Re: incremental backup of hard links
Reply has been rearranged to retain chronological order. On Jan 16, 2008, at 9:22 AM, Ingo Jochim wrote: Hi Dan, a hard link looks like the original file, so it should always have the same date. Below you will see that the hard link (second line) has the same date as the original file. I created the hard link today, but it also has the date from November 23rd. Ingo

[EMAIL PROTECTED] ~]# ls -l file.JPG
-rw-rw-r-- 2 root root 314099 23. Nov 15:58 file.JPG
[EMAIL PROTECTED] ~]# ls -l backup/file.JPG
-rw-rw-r-- 2 mother mother 314099 23. Nov 15:58 backup/file.JPG

You create these hard links just before you run each backup. Is that correct? -- Dan Langille -- http://www.langille.org/ [EMAIL PROTECTED]
Re: [Bacula-users] SPAM-LOW: Re: SPAM-LOW: Re: incremental backup of hard links
Dan Langille wrote: You create these hard links just before you run each backup. Is that correct?

Correct. So I do something like a snapshot. The original files can be deleted after the snapshot and I am still able to finish my backup to tape. Ingo
Re: [Bacula-users] SPAM-LOW: Re: SPAM-LOW: Re: incremental backup of hard links
On Wednesday 16 January 2008 08:54, Ingo Jochim wrote: Correct. So I do something like a snapshot. The original files can be deleted after the snapshot and I am still able to finish my backup to tape. Ingo

Ingo, while the modification time is that of the original file, the access and creation times are those of the link. cmr -- Debian 'Etch' - Registered Linux User #241964 "More laws, less justice." -- Marcus Tullius Cicero, ca. 42 BC
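The timestamp behaviour under discussion can be checked directly. A minimal Python sketch (file names are illustrative): a hard link shares its inode with the original, so the mtime is unchanged, but creating the link updates the inode's ctime, which is what makes the files look "new" to an incremental backup.

```python
import os
import tempfile
import time

tmp = tempfile.mkdtemp()
orig = os.path.join(tmp, "file.JPG")
link = os.path.join(tmp, "backup.JPG")

with open(orig, "w") as f:
    f.write("data")
old = time.time() - 86400          # pretend the file is a day old
os.utime(orig, (old, old))         # push atime/mtime into the past

os.link(orig, link)                # create the hard link ("snapshot")

s_orig, s_link = os.stat(orig), os.stat(link)
print(s_orig.st_ino == s_link.st_ino)      # True: same inode
print(s_orig.st_mtime == s_link.st_mtime)  # True: mtime untouched by linking
print(s_orig.st_ctime > old)               # True: ctime bumped by the link
```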
Re: [Bacula-users] SPAM-LOW: Re: SPAM-LOW: Re: SPAM-LOW: Re: incremental backup of hard links
C M Reinehr wrote: Ingo, while the modification time is that of the original file, the access and creation times are those of the link. cmr

Ok. Got it. But how can I avoid getting a new date? Is there a preserve parameter like on cp? Ingo
Re: [Bacula-users] SPAM-LOW: Re: SPAM-LOW: Re: SPAM-LOW: Re: incremental backup of hard links
On Wednesday 16 January 2008 09:49, Ingo Jochim wrote: Ok. Got it. But how can I avoid getting a new date? Is there a preserve parameter like on cp? Ingo

The only thing that comes to mind would be to use 'touch' to modify the times of the link to those of the original file. If you are using a shell script to create your hard links, you could modify it to obtain the times and then execute touch with the appropriate options.
cmr
Re: [Bacula-users] SPAM-LOW: Re: SPAM-LOW: Re: SPAM-LOW: Re: SPAM-LOW: Re: incremental backup of hard links
C M Reinehr wrote: The only thing that comes to mind would be to use 'touch' to modify the times of the link to those of the original file. If you are using a shell script to create your hard links, you could modify it to obtain the times and then execute touch with the appropriate options. cmr

Good idea.
So what is important for the backup, ctime or mtime? I'm not able to reset the ctime. And how can I have ls give me the time format I will need for touch to reset the time? E.g. 200012311800 --- 31. Dez 2000. Ingo
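On the format question: touch -t takes a [[CC]YY]MMDDhhmm[.ss] stamp, and GNU date -r prints a file's mtime in whatever format you ask for, so ls is not needed at all. (The ctime cannot be set from userland, which is why Bacula still sees the links as changed.) A hedged sketch, assuming GNU date, touch, and stat:

```shell
# Sketch: copy one file's mtime to another via a touch -t stamp (GNU tools assumed).
set -e
tmp=$(mktemp -d)
echo data > "$tmp/file.JPG"
touch -t 200012311800 "$tmp/file.JPG"            # 31. Dez 2000, 18:00

stamp=$(date -r "$tmp/file.JPG" +%Y%m%d%H%M.%S)  # mtime in touch's stamp format
touch -t "$stamp" "$tmp/other"                   # create/stamp a second file

stat -c %y "$tmp/file.JPG" "$tmp/other"          # both show the same mtime
```

Simpler still, `touch -r file.JPG other` copies the reference file's times without any formatting step.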
Re: [Bacula-users] [SPAM] Re: Single tape drive visible to multiple hosts
Thanks Alan and John. Network backups are a possibility; we have an excellent network, I'm just looking for the most efficient solution. Thanks for your help. Regards, Alan

Alan Brown wrote: On Tue, 6 Feb 2007, John Drescher wrote: Bacula sends the data directly from the file daemon on each client to the storage daemon without going through the director (the main Bacula server). If network speed is an issue, one can always configure IP-over-SAN...
Re: [Bacula-users] [SPAM: 8.456] Can't figure out how to use rpmbuild
Brad Peterson wrote: Hey all, I'm getting stuck trying to install bacula 2.0.0 on an FC5 box. I've decided to try the rpm method of installation this time. I tried to follow the manual ( http://www.bacula.org/dev-manual/Bacula_RPM_Packaging_FAQ.html ), and when I ran my rpmbuild command, it looked like it was doing its thing for a few minutes... and then... nothing. It doesn't appear to install anything to run. I'm new to working with rpms (gotta love yum), so I wonder if I'm just making some simple mistake somewhere. In the manual, it says to run two lines. But the first command doesn't seem to work. The following was copied from my console:

# rpmbuild -ba --define build_fc5 1 --define build_mysql5 1 bacula.spec
error: failed to stat /var/tmp/bacula.spec: No such file or directory

So I ignored that and went to the second line. Here is what I ran:

# rpmbuild --rebuild --define build_fc5 1 --define build_mysql5 1 bacula-2.0.0-1.src.rpm

This is the one that appeared to do its thing, but then ended without having anything to run. From the manual, my only guess is that I should have something in /var/bacula. But nothing is there. The only bacula file paths I have on this system are a bunch of stuff in the /usr/src/ directory. Any idea what I need to do to get this working? Brad Peterson [EMAIL PROTECTED]
If there is no error after rpmbuild finishes, you have the RPMs in /usr/src/redhat/RPMS/i386/ and need to install them with rpm -Uvh name.rpm
Re: [Bacula-users] [SPAM: 8.456] Re: Can't figure out how to use rpmbuild
Brad Peterson wrote: Hristo Benev wrote: If there is no error after rpmbuild finishes, you have the RPMs in /usr/src/redhat/RPMS/i386/ and need to install them with rpm -Uvh name.rpm

Hey! Thanks! That's all it was.
Glad to know it was a simple problem of just realizing that it builds RPMs in the /usr/src folder. I was able to get it up and running. Next step: using a more normal email system so I don't get a huge spam score. :) Brad Peterson
Re: [Bacula-users] !! SPAM Suspect : SPAM-URL-DBL !! Restore Pb
I can also give you an ls output of /home/bacula/server-prod, where INC-server-prod.18 is stored:

-rw-r- 1 bacula-sd bacula 6.9G Dec 14 01:34 INC-server-prod.18

Is there a permission problem? Regards.

On Thursday, 14 December 2006 at 17:08, Arnaud Mombrial wrote: Hi, Here is the output of a restore command. Can someone tell me more about this error?

*restore
Using default Catalog name=BackupDB DB=bacula
First you select one or more JobIds that contain files to be restored. You will be presented several methods of specifying the JobIds. Then you will be allowed to select which files from those JobIds are to be restored.
To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
 4: Enter SQL list command
 5: Select the most recent backup for a client
 6: Select backup for a client before a specified time
 7: Enter a list of files to restore
 8: Enter a list of files to restore before a specified time
 9: Find the JobIds of the most recent backup for a client
10: Find the JobIds for a backup for a client before a specified time
11: Enter a list of directories to restore for found JobIds
12: Cancel
Select item: (1-12): 2
Defined Clients:
 1: server-ldap
 2: poste-gael
 3: server-compta
 4: server-prod
 5: poste-sylvia
 6: poste-jjb
 7: Backup
 8: server-printer
 9: firewall
10: poste-julien
11: Poste-Arnaud
Select the Client (1-11): 4
Enter Filename (no path): Adriano Falconi.xls

Every returned row had the same Name, /home/prod/production/01.LABELS/NOSE/PRIVATE/1 REALISATEURS NOSE/1-Liste indiv real/Adriano Falconi.xls; the remaining columns were:

| JobId | StartTime           | JobType | JobStatus | JobFiles | JobBytes     |
| 2331  | 2006-12-14 01:16:25 | B       | T         | 3602     | 7300702838   |
| 2311  | 2006-12-12 01:16:16 | B       | T         | 4619     | 6479532516   |
| 2299  | 2006-12-09 01:51:54 | B       | T         | 254653   | 421636256545 |
| 2289  | 2006-12-08 01:16:12 | B       | T         | 3997     | 5048002815   |
| 2251  | 2006-12-04 20:31:17 | B       | T         | 6489     | 7970069483   |
| 2248  | 2006-12-03 02:17:57 | B       | T         | 483179   | 770560251941 |
| 2238  | 2006-12-02 01:46:12 | B       | T         | 252482   | 43711371     |
| 2186  | 2006-11-25 01:39:05 | B       | T         | 253309   | 504115064958 |
| 2134  | 2006-11-18 01:39:37 | B       | R         | 0        | 0            |
| 1618  | 2006-09-10 01:54:13 | B       | T         | 402518   | 434908105271 |
| 1415  | 2006-08-13 01:52:29 | B       | T         | 307270   | 377062951244 |
| 1198  | 2006-07-09 01:55:34 | B       | T         | 283944   | 320343598057 |

To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
 4: Enter SQL list command
 5: Select the
Re: [Bacula-users] !! SPAM Suspect : SPAM-URL-DBL !! Re: !! SPAM Suspect : SPAM-URL-DBL !! Restore Pb
Sorry for this post. Please disregard it. Regards

On Thursday, 14 December 2006 at 17:28, Arnaud Mombrial wrote: I can also give you an ls output of /home/bacula/server-prod, where INC-server-prod.18 is stored: -rw-r- 1 bacula-sd bacula 6.9G Dec 14 01:34 INC-server-prod.18 Is there a permission problem? Regards.
Re: [Bacula-users] [SPAM: 7.427] labelmedia=yes isn't working
Hello, On 12/7/2006 8:11 AM, Brad Peterson wrote: I'm trying to set up disk volumes and have them automatically labeled. According to the manual, this should be possible. It definitely is. So I made sure my storage daemon had this line in it:

Label media = yes; # lets Bacula label unlabeled media

I then did a ./bacula restart. I then tried to run a job that specified a pool for which no volumes had yet been created. I was doing this to test whether it would auto-create a volume for me. Instead, I keep getting this message:

07-Dec 00:05 brad-sd: Job NightlySave.2006-12-07_00.04.46 waiting. Cannot find any appendable volumes. Please use the "label" command to create a new Volume for:

Any ideas why this is? Shouldn't Label media=yes do the trick? I've been trying for days to get this thing set up and running, and I'm getting frustrated, feeling like I've run into a dead end.

Look in the manual under configuring the director, pools... you'll find a setting called Label Format, which is what you need, because currently Bacula does not know what sort of label to create. Also, when using automatic labeling, you should make sure the space the volumes can use is limited. I'd recommend limiting the volume file size and the number of volumes in the pool. Arno
-- IT-Service Lehmann [EMAIL PROTECTED] Arno Lehmann http://www.its-lehmann.de
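Arno's advice maps onto a Pool resource in bacula-dir.conf along these lines. A hedged sketch: the pool name, label prefix, and limits are illustrative, and the storage daemon's Device resource must still have LabelMedia = yes as in the original post.

```
Pool {
  Name = Default
  Pool Type = Backup
  Label Format = "Vol-"         # base name for automatically labeled volumes
  Maximum Volume Bytes = 5G     # cap each disk volume's size
  Maximum Volumes = 20          # cap how many volumes the pool may hold
  Recycle = yes                 # reuse purged volumes instead of growing forever
  AutoPrune = yes
  Volume Retention = 30 days
}
```

With Label Format set, Bacula can name new volumes itself when the job needs an appendable volume, instead of stopping with "Cannot find any appendable volumes."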
Re: [Bacula-users] !! SPAM Suspect : SPAM-URL-DBL !! Fwd: How to understand this error
On Monday, December 4, 2006 at 15:17, John Drescher wrote:

---------- Forwarded message ----------
From: John Drescher [EMAIL PROTECTED]
Date: Dec 4, 2006 9:17 AM
Subject: Re: [Bacula-users] How to understand this error
To: Arnaud Mombrial [EMAIL PROTECTED]

> I'm afraid things are going wrong with my monthly backup. I use a Sony SAIT1-500 cartridge and don't know if compression is active or not. Here is a sample of /var/log/bacula/bacula.log.1:
>
> ##
> # I've only posted the parts that are relevant to my Sony SDZ-100 device.
> # I've added some comments for myself, to check whether I understand
> # what Bacula tells me.
> ##
>
> # Job name
> 03-Dec 01:00 Backup: Start Backup JobId 2247, Job=server-compta-archive.2006-12-03_01.00.00
>
> # Before-job actions for the Windows server:
> 03-Dec 01:00 server-compta: ClientRunBeforeJob:
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: C:\WINDOWS\system32>REM STOP MSSQL SERVEUR 2000
> 03-Dec 01:00 server-compta: ClientRunBeforeJob:
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: C:\WINDOWS\system32>NET STOP SQLSERVERAGENT
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: Le service SQLSERVERAGENT s'arrête.
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: Le service SQLSERVERAGENT a été arrêté.
> 03-Dec 01:00 server-compta: ClientRunBeforeJob:
> 03-Dec 01:00 server-compta: ClientRunBeforeJob:
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: C:\WINDOWS\system32>NET STOP MSSQLServer
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: Le service MSSQLSERVER s'arrête...
> 03-Dec 01:00 server-compta: ClientRunBeforeJob: Le service MSSQLSERVER a été arrêté.
> 03-Dec 01:00 server-compta: ClientRunBeforeJob:
>
> # The error I mentioned in my last post.
> # Quote: John Drescher [EMAIL PROTECTED]

I believe I see this error with every drive I have when a new blank tape is inserted. Basically, it means that Bacula tried to read a tape with no data on it, and that read was an error.
> 03-Dec 01:00 Storage: server-compta-archive.2006-12-03_01.00.00 Error: block.c:940 Read error at file:blk 0:0 on device Sony SDZ-100 (/dev/st0). ERR=Input/output error.

As I said before, this is normal during the label command if the tape is blank.

> # A label Archives.3 is created.
> # I have to check what a label is... Sorry for being annoying.

Every tape that Bacula uses must be labeled so that Bacula can identify it and will not overwrite your important data.

> 03-Dec 01:01 Storage: Labeled new Volume archives.3 on device Sony SDZ-100 (/dev/st0).
> 03-Dec 01:01 Storage: Wrote label to prelabeled Volume archives.3 on device Sony SDZ-100 (/dev/st0)
> 03-Dec 01:01 Storage: Spooling data ...
> 03-Dec 01:59 Storage: Committing spooled data to Volume archives.3. Despooling 28,217,078,749 bytes ...
> 03-Dec 02:17 Storage: Sending spooled attrs to the Director. Despooling 14,492,414 bytes ...
>
> # After-job actions for the Windows server:
> 03-Dec 02:17 server-compta: ClientRunAfterJob:
> 03-Dec 02:17 server-compta: ClientRunAfterJob: C:\WINDOWS\system32>REM DEM MSSQL SERVEUR 2000
> 03-Dec 02:17 server-compta: ClientRunAfterJob:
> 03-Dec 02:17 server-compta: ClientRunAfterJob: C:\WINDOWS\system32>NET START MSSQLServer
> 03-Dec 02:17 server-compta: ClientRunAfterJob: Le service MSSQLSERVER démarre...
> 03-Dec 02:17 server-compta: ClientRunAfterJob: Le service MSSQLSERVER a démarré.
> 03-Dec 02:17 server-compta: ClientRunAfterJob:
> 03-Dec 02:17 server-compta: ClientRunAfterJob:
> 03-Dec 02:17 server-compta: ClientRunAfterJob: C:\WINDOWS\system32>NET START SQLSERVERAGENT
> 03-Dec 02:17 server-compta: ClientRunAfterJob: Le service SQLSERVERAGENT démarre.
> 03-Dec 02:17 server-compta: ClientRunAfterJob: Le service SQLSERVERAGENT a démarré.
> 03-Dec 02:17 server-compta: ClientRunAfterJob:
>
> # Here is the description of the job?
> 03-Dec 02:17 Backup: Bacula 1.38.5 (18Jan06): 03-Dec-2006 02:17:55
>   JobId:                  2247
>   Job:                    server-compta-archive.2006-12-03_01.00.00
>   Backup Level:           Full
>   Client:                 server-compta Windows Server 2003,MVS,NT 5.2.3790
>   FileSet:                server-compta-archive 2006-07-02 01:00:02
>   Pool:                   archives
>   Storage:                Sony SDZ-100
>   Scheduled time:         03-Dec-2006 01:00:00
>   Start time:             03-Dec-2006 01:00:02
>   End time:               03-Dec-2006 02:17:55
>   Priority:               40
>   FD Files Written:       39,595
>   SD Files Written:       39,595
>   FD Bytes Written:       28,182,421,035
>   SD Bytes Written:       28,189,631,833
>   Rate:                   6030.9 KB/s
>   Software Compression:   None
>   Volume name(s):         archives.3
>   Volume Session Id:      11
>   Volume Session Time:    1164994961
>   Last Volume Bytes:      28,211,831,006
>   Non-fatal FD errors:    0
>   SD Errors:
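The Rate line in a report like this can be sanity-checked from the bytes written and the Start/End times. The figures below are copied from the job report above; the close agreement suggests Bacula computes KB as 1000 bytes here, which is an inference, not something stated in the thread.

```python
# Verify the reported transfer rate from the job report's own numbers.
fd_bytes = 28_182_421_035            # FD Bytes Written
start_s = 1 * 3600 + 0 * 60 + 2      # Start time 01:00:02, in seconds
end_s = 2 * 3600 + 17 * 60 + 55      # End time   02:17:55, in seconds
elapsed = end_s - start_s            # 4673 seconds of wall-clock time

rate_kb_s = fd_bytes / elapsed / 1000  # decimal KB, as Bacula appears to use
print(round(rate_kb_s, 1))             # matches the reported 6030.9 KB/s
```

At ~6 MB/s over 78 minutes, the 28 GB job size and the report's rate are mutually consistent.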
Re: [Bacula-users] !! SPAM Suspect : SPAM-URL-DBL !! Fwd: How to understand this error
> How do I know a job has ended? Is there a way to tell Bacula to use some kind of compression for the tape, since the cartridges are rated at 1,300 GB?

If you are putting more than 500 GB on a tape, compression is definitely on. So the question is: why don't you get 1.3 TB? The answer is that the drive is a 500 GB drive, and you will get 1.3 TB only if your data compresses 2.6:1, which in the real world is very unlikely to happen unless you are backing up only text files. I typically get 1.5:1 with my data, because a lot of my data is already compressed, and compressing data a second time does not usually yield a good compression rate on the second pass. In many cases it actually works against you and you get expansion.

John
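John's arithmetic is easy to check. A small sketch using the capacities from this thread (SAIT-1: 500 GB native, marketed as 1.3 TB at an assumed 2.6:1 hardware compression ratio):

```python
# Effective tape capacity = native capacity x average compression ratio.
NATIVE_GB = 500  # SAIT-1 native capacity, as stated in the thread

def effective_capacity_gb(ratio: float) -> float:
    """GB actually fitting on the tape at a given average compression ratio."""
    return NATIVE_GB * ratio

print(round(effective_capacity_gb(2.6)))  # 1300 -- the marketed "1.3 TB"
print(round(effective_capacity_gb(1.5)))  # 750  -- John's typical real-world ratio
```

A ratio below 1.0 models the expansion John mentions: already-compressed data can make the effective capacity smaller than the native 500 GB.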
Re: [Bacula-users] spam filters
On Wednesday 05 July 2006 23:04, Dan Langille wrote:

> On 5 Jul 2006 at 23:01, Kern Sibbald wrote:
>> I'm not really too thrilled about finding, installing, and learning how to use another spam filter.
>
> I use spamassassin, clamav, and postfix. I wouldn't have to learn another.

Yes, well, all three of those are new to me, though they are probably the obvious packages of choice. I use sendmail, procmail, and annoyance-filter. I'm not too thrilled about either getting 200 spams per day in my inbox or learning the above, as I would rather keep working on getting the next version of Bacula out... :-)

-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security? Get stuff done quickly with pre-integrated technology to make your job easier. Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
Re: [Bacula-users] Spam on this list
Kern Sibbald ([EMAIL PROTECTED]) wrote:

> What I don't like about this is that some users (such as myself) don't want to subscribe to lists even to get help

If you got software for free, and you can't even be bothered to do something as simple as subscribing to a free mailing list to receive free help, then IMO you don't deserve that help. If you're hungry and you can't be bothered getting off your fat butt to go get something to eat - guess what? You starve. No magical genie will pop out of the computer and feed you.

It appears the lists are run by Mailman; subscribers can simply tell the list not to send them any mail if they don't want it, or unsubscribe even more easily than they subscribed (the unsubscribe link comes in every message).

--
Matt

PS: please do not break the list by munging the Reply-To headers. Ta.
Re: [Bacula-users] Spam on this list
Hello,

On Wednesday 18 May 2005 09:37, Matthew Hawkins wrote:

> Kern Sibbald ([EMAIL PROTECTED]) wrote:
>> What I don't like about this is that some users (such as myself) don't want to subscribe to lists even to get help
>
> If you got software for free, and you can't even be bothered to do something as simple as subscribing to a free mailing list to receive free help, then IMO you don't deserve that help. If you're hungry and you can't be bothered getting off your fat butt to go get something to eat - guess what? You starve. No magical genie will pop out of the computer and feed you.

Well, everyone is entitled to his opinion. In my case, it is not that I cannot be bothered to subscribe, as you seem to suggest. This should be obvious from the amount of time and effort I put into Bacula. Rather, what holds me back from subscribing to other lists is overloading myself with even more email -- so I appreciate open lists, and would like to keep Bacula operating that way.

> It appears the lists are run by Mailman; subscribers can simply tell the list not to send them any mail if they don't want it, or unsubscribe even more easily than they subscribed (the unsubscribe link comes in every message).

There are a lot of users out there who are struggling to learn Linux, and it is not always so obvious to them how to subscribe/unsubscribe -- at least judging by the number of ill-fated attempts by some to remove themselves from the lists I maintain (9 or 10).

--
Best regards,
Kern

-------------------------------------------------------------------------
This SF.Net email is sponsored by Oracle Space Sweepstakes
Want to be the first software developer in space? Enter now for the Oracle Space Sweepstakes!
http://ads.osdn.com/?ad_id=7412&alloc_id=16344&op=click
Re: [Bacula-users] Spam on this list
Kern Sibbald ([EMAIL PROTECTED]) wrote:

> Well, everyone is entitled to his opinion.

And thanks to the internet, we can all express it ;)

> In my case, it is not that I cannot be bothered to subscribe, as you seem to suggest. This should be obvious from the amount of time and effort I put into Bacula.

It was a general 'you' (aka 'someone') and not a personal 'you' (i.e., 'Kern') - sorry I didn't make that clearer. I certainly very much appreciate all the time and effort you have put into Bacula. Thank you very much.

--
Matt
Re: [Bacula-users] Spam on this list
On Tuesday, 17 May 2005 at 11:50 +0200, Kern Sibbald wrote:

> Hello,
>
> Notwithstanding my previous remarks about spam, I have noticed that there is more and more on this list. After thinking about it for a while, I can implement a script that automatically rejects all email from nonsubscribed users (it also rejects all other email held for administrative approval).
>
> So, what I am suggesting as a *possibility* is:
>
> - Modify the bacula-users (and probably bacula-devel) list to be subscriber-only.
> - Nightly purge all email held for administrative approval -- i.e. all emails from nonsubscribers. This would be done blindly by a script.
>
> What I don't like about this is that some users (such as myself) don't want to subscribe to lists even to get help, so their email will probably be lost, or at best, they will be confused and frustrated. In a sense, it is reducing our service...
>
> At the current levels, the spam does not really bother me, but I can understand that it does bother a good number of you, so I have no problem changing the list as noted above. Before embarking on this change, I would appreciate some feedback from those who are concerned -- you.

It _seems_ that we have to choose between two evils here. Being able to post to lists without subscribing is nice; however, my view on this whole situation is this:

1 - Spam is bad.
2 - Forcing people to subscribe is not always good, BUT if a person gets help on a list, or wants help, he/she should be prepared to give something back. If a person subscribes and receives mail regularly, not only do his or her skills improve with the knowledge provided, but it also increases the chance that he/she will reply to someone and help out. It's sort of a "the more, the merrier" principle. =)

I opt for only subscribers being able to post.

- Christopher
Re: [Bacula-users] Spam on this list
On Tuesday 17 May 2005 12:18, Christoph Haas wrote:

> On Tue, May 17, 2005 at 11:50:20AM +0200, Kern Sibbald wrote:
>> So, what I am suggesting as a *possibility* is:
>> - Modify the bacula-users (and probably bacula-devel) list to be subscriber-only.
>> - Nightly purge all email held for administrative approval -- i.e. all emails from nonsubscribers. This would be done blindly by a script.
>
> Can't you just set 'generic_nonmember_action' to 'reject'? That's what I do with my Mailman lists. It just rejects emails from non-subscribers without administrative intervention being required.

Are you administering any lists on SourceForge? If so, how does one set this option with Mailman? The only variable I seem to be able to set is member_only_posting, which, unless I am mistaken, holds all non-member posts for admin approval.

--
Best regards,
Kern
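For reference, on a self-hosted Mailman 2 list (SourceForge's panel did not expose this), Christoph's suggestion corresponds to a one-line change applied with Mailman's `bin/config_list` tool. Treat this as a sketch: the list name is an example, and the numeric encoding (0 = Accept, 1 = Hold, 2 = Reject, 3 = Discard) is from memory and worth verifying against the Mailman documentation.

```
# nonmember.cfg -- apply with: bin/config_list -i nonmember.cfg bacula-users
# Reject mail from non-members outright instead of holding it for the admin.
generic_nonmember_action = 2
```

The same setting is reachable in the web admin under Privacy options, Sender filters.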
Re: [Bacula-users] Spam on this list
Kern,

I vote for the subscriber-only list. Generally speaking I do not subscribe to lists because of all the volume, but this seems like a pretty good list; I have gotten many more questions resolved just by being subscribed than by posting to forums and checking back every few days or so. In this case, being subscribed is a benefit...

-
Thank you,
Grant Della Vecchia
System Administrator
A I S M e d i a , I n c . - We Build eBusinesses
7000 Central Parkway, Suite 1700
Atlanta, GA 30328
Tel: 770.350.7998 ext. 506 / Fax: 770.350.9409
http://www.aismedia.com / [EMAIL PROTECTED]
Re: [Bacula-users] Spam on this list
On Tue, May 17, 2005 at 01:54:39PM +0200, Kern Sibbald wrote:

> On Tuesday 17 May 2005 12:18, Christoph Haas wrote:
>> Can't you just set 'generic_nonmember_action' to 'reject'? That's what I do with my Mailman lists. It just rejects emails from non-subscribers without administrative intervention being required.
>
> Are you administering any lists on SourceForge? If so, how does one set this option with Mailman?

Sorry, I run my mailing lists on my own server outside of SF, so I'm not perfectly familiar with what SF offers. At least there is such an option, and perhaps it's worth asking the SF ops.

Christoph
Re: [Bacula-users] Spam on this list
Henry Yen wrote:

> The two opposing positions are:
> http://www.unicom.com/pw/reply-to-harmful.html
> http://marc.merlins.org/netrants/reply-to-useful.html
>
> I have no particular opinion one way or another, but popular sentiment (including Mailman's default settings) very much favors the first of the two.

A fun read, proving that it's always possible to debate a position from two sides. I will never ever ask for this again - not on this list or on any other. Not after having received 3 copies of the same mail from Dan, whom I respect a lot. I will train myself to use the Reply to All button, and maybe I'll take out the addresses that are not relevant, to limit bandwidth... and maybe I won't. So, sorry, sorry, sorry for bringing this up. It was totally out of place and I shouldn't have done it.

I'm also reconsidering my position on making the list subscriber-only. I will set up my own private sendmail with a spam-killing content filter soon, so spam, virus mails, phishing and the like won't bother me anymore, and I'll be able to read my mail from anywhere using IMAP, POP3 and maybe even webmail.

Jo
Re: [Bacula-users] Spam on this list
Hello,

I get a good number of requests to change the list behavior to munge the Reply-To. It might perhaps help me avoid getting two copies of most emails I send that get answered, and help those who forget to use Reply to All, but as the first of the articles points out, it can create a number of problems. So, I don't currently plan to change the way the list processes the headers.

On Tuesday 17 May 2005 14:34, Henry Yen wrote:

>> ... While we're at it: it would be nice to have the Reply-To field set to the list where the mail came from, and not to the person who sent the mail. Even though I switched to a mail client that has a Reply to All button, I constantly forget that I have to press that one to get mail delivered correctly for this list.
>
> The two opposing positions are:
> http://www.unicom.com/pw/reply-to-harmful.html
> http://marc.merlins.org/netrants/reply-to-useful.html
>
> I have no particular opinion one way or another, but popular sentiment (including Mailman's default settings) very much favors the first of the two.

--
Best regards,
Kern
Re: [Bacula-users] Spam on this list
On Tuesday 17 May 2005 16:36, Jo wrote:

> Henry Yen wrote:
>> The two opposing positions are:
>> http://www.unicom.com/pw/reply-to-harmful.html
>> http://marc.merlins.org/netrants/reply-to-useful.html
>>
>> I have no particular opinion one way or another, but popular sentiment (including Mailman's default settings) very much favors the first of the two.
>
> A fun read, proving that it's always possible to debate a position from two sides. I will never ever ask for this again - not on this list or on any other. Not after having received 3 copies of the same mail from Dan, whom I respect a lot. I will train myself to use the Reply to All button, and maybe I'll take out the addresses that are not relevant, to limit bandwidth... and maybe I won't. So, sorry, sorry, sorry for bringing this up. It was totally out of place and I shouldn't have done it.

No need to be so sorry. Dan just wanted you to change the subject line so that the spam email remained spam, and this started a new thread. I made the same mistake by responding to you (twice) on the old thread. No one is criticizing you for the question -- as you can see, there are two sides to every question... The subject comes up a lot, and I'm pleased you brought it up so that Henry could finally clarify it with the pros and cons...

> I'm also reconsidering my position on making the list subscriber-only. I will set up my own private sendmail with a spam-killing content filter soon, so spam, virus mails, phishing and the like won't bother me anymore, and I'll be able to read my mail from anywhere using IMAP, POP3 and maybe even webmail.

You may not need to go to all the trouble of setting up your own spam killer, at least not for the Bacula lists. Russell Howe has offered to do the administration, which means we will probably make the list subscriber-only, but Russell will let through the non-spam. The best of both worlds for everyone except Russell.

I'd like to wait to hear all the input from those who want to respond before making the final decision.

--
Best regards,
Kern
Re: [Bacula-users] SPAM
On Monday 09 May 2005 21:16, David Clymer wrote:

> On Mon, 2005-05-09 at 19:01 +0200, Kern Sibbald wrote:
>> On Monday 09 May 2005 16:43, Alan Brown wrote:
>>> On Mon, 9 May 2005, Kern Sibbald wrote:
>>>> On Monday 09 May 2005 14:11, Alan Brown wrote:
>>>>> Can we PLEASE have the list switched to members-posting only?
>>>> I don't see any spam because I use a Bayesian filter.
>>> Perhaps the Bayesian filter should be in front of the submission address
>> That would be nice, but SF doesn't let me do it. In addition, Bayesian filters must be trained on what *your* good/junk email is. I guess the junk part would work for everyone, but the good email is particular to each person...
>
> For another opinion: I really can't remember any previous instances of spam on the list. I'm using sitewide spam filtering here, and I go through a lot of our spam manually to check for false positives. The only messages from this list that I remember seeing there _were_ false positives. I may have forgotten a few, but the point is that I don't believe spam is a common occurrence on bacula-users.

Thanks. This helps confirm my observation. I looked through my spam folder, where I keep spam in quarantine for about a month before blowing it away, but didn't find much.

Prior to having a spam filter, I'd found that pressing the delete key works pretty well and amazingly fast. Now, with the spam filter, I need fodder for training, so for the 0.1% that slip through, I have to drag them from my inbox to my spam folder, bringing my annoyance level up from 0 to 0.001. It might have gone higher, but there is a certain pleasure in knowing that that particular form of spam will never again get through.

> -davidc
> --
> "Personally I'm always ready to learn, although I do not always like being taught." - Winston Churchill

---------------------------------------------------------
This SF.Net email is sponsored by: NEC IT Guy Games. Get your fingers limbered up and give it your best shot. 4 great events, 4 opportunities to win big! Highest score wins. NEC IT Guy Games. Play to win an NEC 61" plasma display.
Visit http://www.necitguy.com/?r=20

--
Best regards,
Kern
Re: [Bacula-users] SPAM
On Tue, 10 May 2005, Kern Sibbald wrote:

> Prior to having a spam filter, I'd found that pressing the delete key works pretty well and amazingly fast.

The dangers of deleting legitimate mail unread should be obvious. As are the dangers of having any filter which accepts and then dumps mail, or tags it for you to delete, or - worse - bounces it to the purported (forged) sender. Let's not even go into the systems which issue challenges - I get about 50 of those a day on my personal account for mail I didn't send.

If SF won't put filtering out front, the easiest solution is to pass mail through another system first and put as much processing in the SMTP transaction as possible.

AB
Re: [Bacula-users] SPAM
On Monday 09 May 2005 14:11, Alan Brown wrote:

> Can we PLEASE have the list switched to members-posting only?

I don't see any spam because I use a Bayesian filter. However, I'll be happy to switch to members-only posting, *provided* that someone can show me how to make the SourceForge Mailman program automatically return email from nonsubscribers. Every time I make an email list members-only, the mail stacks up at SourceForge requiring my attention to release it, and I can tell you one thing -- Mailman is very insistent about clearing out held mail. This is too much administration for me.

--
Best regards,
Kern
Re: [Bacula-users] SPAM
On Monday 09 May 2005 16:43, Alan Brown wrote:

> On Mon, 9 May 2005, Kern Sibbald wrote:
>> On Monday 09 May 2005 14:11, Alan Brown wrote:
>>> Can we PLEASE have the list switched to members-posting only?
>> I don't see any spam because I use a Bayesian filter.
>
> Perhaps the Bayesian filter should be in front of the submission address

That would be nice, but SF doesn't let me do it. In addition, Bayesian filters must be trained on what *your* good/junk email is. I guess the junk part would work for everyone, but the good email is particular to each person...

--
Best regards,
Kern
Re: [Bacula-users] SPAM
On Monday, 9 May 2005 at 13:11 +0100, Alan Brown wrote:

> Can we PLEASE have the list switched to members-posting only?

That would be very nice (considering recent events). :p

- Christopher