amreport broken between 2.4.1p1 and 2.4.2
The email output of amreport seems to have been broken somewhere between
2.4.1p1 and 2.4.2.  The column widths are incorrectly determined and the
output is particularly messy with large file systems (and/or long elapsed
times, etc).

It would also make more sense (to me, at least) for the output figures to
be represented in MB and MB/sec rather than KB and KB/sec.  This would
probably help the email readability.

Comments, thoughts?

g.
Re: amreport broken between 2.4.1p1 and 2.4.2
> The email output of amreport seems to have been broken somewhere
> between 2.4.1p1 and 2.4.2.

What you call "broken", others call a new "feature" :-).  Look for
"columnspec" in "man amanda".  FYI, here's what I use:

  columnspec "OrigKB=1:8,OutKB=1:8,DumpRate=0:7,TapeRate=0:7"

> It would also make more sense (to me, at least) for the output figures
> to be represented in MB and MB/sec rather than KB and KB/sec.  This
> would probably help the email readability.

Patches are welcome.  Probably as some extension to columnspec (maybe a
multiplication factor and another field for the column heading text?).

g.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
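For context, a sketch of how such a line could sit in amanda.conf.  The
extra column names shown (HostName, Disk) and the reading of the two
numbers as "space before the column" and "column width" follow my
understanding of the amanda(8) man page; verify against your version
before copying:

```
# amanda.conf excerpt -- widen the size and rate columns so reports
# with large file systems stay aligned.
# Each entry is <ColumnName>=<space before column>:<column width>.
columnspec "HostName=0:12,Disk=1:11,OrigKB=1:8,OutKB=1:8,DumpRate=0:7,TapeRate=0:7"
```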
Re: why | ufsrestore?
> I have always wondered .. why does amanda pipe ufsdump output to
> ufsrestore before sending it to the tape device?

It's collecting the index data.

The dump (or tar) output pipeline is rather complicated.  The image data
goes back to sendbackup, who in turn tee's it to the restore program to
gather the index information (if indexing is enabled), as well as sending
the raw data (possibly through a compression program) back on the network
to a dumper process on the server side.  The restore program also feeds
its results back through sendbackup to be sent to the dumper on a
different socket (as I recall).

So sendbackup is multiplexing five data streams:

  * reading the dump image coming in from the backup program
  * writing the image out to the index (restore) process
  * writing the image out the socket connected to dumper on the server,
    or to a compression program
  * reading the output of the index process
  * writing the index data to another socket back to dumper

> If I ufsdump direct to tape, eg.
>
>   ufsdump 0f /dev/rmt/0n /
>
> I consistently achieve 3 MB/second (Exabyte Mammoth).  If amanda is
> dumping direct to tape (file systems that are bigger than the holding
> disk), I'm lucky if I get 1 MB/second.  If it's going from the holding
> disk to tape, I get 3 MB/second, as expected.

But you're comparing apples and oranges.

As you've noted, going from disk to tape on the same machine gets
3 MBytes/s whether you are using ufsdump or Amanda is using taper to
copy a holding disk image.

But that's not what happens when Amanda is dumping a client direct to
tape.  The data has to go across the network (even if it's all on the
local machine, it still goes through the kernel network stack).  And,
probably even more important, Amanda does compression when dumping, not
when writing to tape.  So a dump to holding disk would be "slow", but
the corresponding holding-disk-to-tape operation would be "fast".  A
direct to tape backup, however, would pay the penalty and show the speed
loss due to compression, even though the tape I/O portion is going as
fast as it is given data.

You didn't mention what kind of dump rates Amanda reports.  Those should
more or less match your direct to tape numbers, for large enough images
to get a good sample and with similar clients.

Note that I'm not saying something isn't wrong in Amanda.  Just that we
need to narrow down the list of culprits.

g.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
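[Editorial sketch, not Amanda source: the tee-to-an-index-process shape
described above can be mimicked in a few lines of shell.  Here tar
stands in for ufsdump, "tar tf" for "ufsrestore t", gzip for the client
compressor, and a plain file for the socket back to the server-side
dumper; the real sendbackup does this multiplexing in C over sockets.]

```shell
#!/bin/bash
set -e
dir=$(mktemp -d)
echo "some file contents" > "$dir/data"

mkfifo "$dir/index.fifo"

# index branch: read a copy of the image off the fifo and list its
# members, the way sendbackup feeds the image to the restore program
tar tf "$dir/index.fifo" > "$dir/index.txt" &

# main branch: produce the image, tee a copy into the index branch,
# compress the rest and "send" it (here, just to a file)
tar -C "$dir" -cf - data | tee "$dir/index.fifo" | gzip -c > "$dir/image.tar.gz"

wait    # let the index branch finish; index.txt now lists "data"
```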
RE: why | ufsrestore?
> > I have always wondered .. why does amanda pipe ufsdump output to
> > ufsrestore before sending it to the tape device?
>
> It's collecting the index data.

John, thanks for clarifying...

> > If amanda is dumping direct to tape (file systems that are bigger
> > than the holding disk), I'm lucky if I get 1 MB/second.  If it's
> > going from the holding disk to tape, I get 3 MB/second, as expected.
>
> But you're comparing apples and oranges.  ...  And, probably even more
> important, Amanda does compression when dumping, not when writing to
> tape.  ...  A direct to tape backup would pay the penalty and show the
> speed loss due to compression, even though the tape I/O portion is
> going as fast as it is given data.

I should have mentioned, we have several ~10 GB file systems (on the
same system as the tape drive, the Amanda server), and none of these are
dumped with compression (for speed reasons).

> You didn't mention what kind of dump rates Amanda reports.  Those
> should more or less match your direct to tape numbers ...

The dump rates reported by Amanda are around 1.0-1.5 MB/second (without
compression).  A direct ufsdump to tape (without Amanda) on the same
file systems runs at 3 MB/second, which is the fastest that the tape
drive can accept the data.

Given the complexity of the Amanda sendbackup process this doesn't
exactly surprise me, but I wondered if I could do anything to speed
things up.  The system is a little underpowered, and perhaps when we get
our much needed upgrade (a nice new dual-CPU E220R), things will
improve...

g.
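[Editorial sketch: one way to narrow the culprits down is to time the
dump with the tape drive taken out of the picture, then add back one
pipeline stage at a time.  The commented ufsdump lines are the
real-world version (Solaris-specific; /export is an example path); the
runnable part below uses dd over /dev/zero as a stand-in data source
only so the sketch is self-contained.]

```shell
#!/bin/bash
set -e
# With real data you would compare wall-clock times like:
#   time ufsdump 0f - /export             > /dev/null   # raw dump rate
#   time ufsdump 0f - /export | cat | cat > /dev/null   # extra pipe hops
# and check those against the rate Amanda reports for the same filesystem.

gen() { dd if=/dev/zero bs=32k count=256 2>/dev/null; }   # 8 MB stand-in

raw=$(gen | wc -c)                # bytes through the direct path
piped=$(gen | cat | cat | wc -c)  # same bytes through extra pipe stages

echo "raw=$raw piped=$piped"      # both 8388608: the pipes add copies
                                  # and context switches, not data loss
```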
Re: DDS4 parameters
On Sat, 27 Jan 2001, Jason Winchell wrote:

> Does anyone know the parameters for DDS4, 150m, 20GB/40GB tapes?

I can offer the following:

  define tapetype DDS4 {
      comment "DDS 4 tapes 150 m"
      length 19400 mbytes
      filemark 32 kbytes
      speed 2700 kbytes
  }

This has been evaluated with a Seagate Scorpion 240.

Martin
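[Editorial note: if your drive isn't covered by a published definition,
the Amanda source tree includes a small program for measuring one
(tape-src/tapetype in the 2.4.x tree; later releases install it as
amtapetype).  It fills a tape with data and prints a define block like
the one above, so expect the run to take hours.  The invocation below is
hypothetical -- verify the program name and options against your build:]

```
# hypothetical invocation -- check your Amanda version's documentation
./tapetype -f /dev/rmt/0cn > dds4-tapetype.txt
```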