Re: Amanda README Draft; Request for comments
On Tue, 8 Dec 2009 at 5:02pm, ckotil wrote With my LTO3 drive and 2x 500GB SATA 7200rpm drives in RAID1, when I write from disk to tape I can hit 55MB/sec max. However, my dumps are often much larger than the holding disk, so I am forced to stream data to the tape drive, which slows the write. Typical write-to-tape speed for a dump averages 20MB/sec. When writing to tape I can dump 100GB in roughly 1 hr and 45 min. I'm sure with a proper config I could make my dumps more efficient. In the amanda.conf, the tape type is accurate. I think the flush-threshold settings could use some tweaking to better utilize the limited holding disk. LTO3's native speed is 80MB/s. AFAIK, it can only throttle down to half that. Any slower than that and you are shoe-shining your drive, which is bad for your tapes and drive. You *really* want to either a) increase your holding space to accommodate your biggest DLE or b) split your over-large DLEs into multiple holding-disk-sized DLEs. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
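The flush-threshold knobs mentioned above can be sketched like this in amanda.conf (parameter names as documented for Amanda 2.6; the percentages are only illustrative, not a recommendation for this setup):

```
# Sketch only: delay taping until the holding disk holds enough dumped data
# to keep an LTO3 streaming, rather than trickling each dump straight to tape.
flush-threshold-dumped    50   # start taping once dumps reach 50% of a tape
flush-threshold-scheduled 50   # ... or once scheduled data reaches 50%
taperflush                 0   # flush everything remaining at end of run
```

Note that no threshold tuning fixes the underlying problem here: a DLE larger than the holding disk still bypasses it entirely, which is why the split-or-grow advice above comes first.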
Re: [Amanda-users] Cloud Backup...but to my own Data Center
On Wed, 3 Jun 2009 at 1:46pm, Hopifan wrote Can you point me in the right direction? If Amanda is the one to go with then sure, why not. I need to know the pricing structure, compression ratio, and other compatibilities like VSS support. If you want to correspond directly with me, I am at marek.plas...@veoliatransportation.com Erm, methinks that some time spent with Google would answer most of these questions for you. I also wonder why you are using a mailing list dedicated to one particular piece of backup software to try to research backup software in general. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: Tape library with hardware encryption
On Tue, 10 Feb 2009 at 8:51am, Nicki Messerschmidt wrote Does anyone know a good tape library which supports hardware encryption under Linux with amanda? I thought about an LTO-4 drive, but there seems to be no Linux support for the encryption part, and gpg is too slow on this machine... ;) To second the other response, you need a fast server (with fast disks) to drive an LTO3/4 tape drive. Each drive in my LTO3 library has a dedicated 4-disk RAID0 made up of 10K RPM disks to feed it. And I still need to use XFS over ext3 on those RAIDs to get decent tape speed. You can't drive LTO with a low-end server. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: LTO drives
On Fri, 7 Nov 2008 at 10:51am, Nick Brockner wrote LTO4 does not shoe-shine; it has variable-speed motors. But there's generally a lower limit on those speeds. With LTO3, at least, the lower limit is 1/2 the native rate. LTO3's native rate is 80MB/s, so anything below 40MB/s and you're in trouble. Has that changed with LTO4? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: LTO-4 tapetype and blocksize
On Tue, 21 Oct 2008 at 12:22pm, Jean-Francois Malouin wrote Just got a new HP LTO-4 tape drive and I've done some testing to get a tapetype entry for it, and I don't see much of a difference between the default (32k) and higher values like 512k, 1024k and 2048k (see below). What are your experiences wrt a specific choice of blocksize for such a drive? I tested an LTO-3 drive using tar back in the day, and the speed increased from 41MB/s at 32KB blocks to 60MB/s using 2MB blocks. Also: I have LTO-3 tapes written with a blocksize of 32k: will I be able to extract data from them using a different blocksize? From 'man amrestore': OPTIONS -b Set the blocksize used to read the tape or holding file. All holding files must be read with a blocksize of 32 KBytes. Amrestore should normally be able to determine the blocksize for tapes on its own and not need this parameter. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
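The tar timing test described above looks roughly like this (device name and test path are only examples; tar's -b counts 512-byte records, so -b 64 is 32KiB blocks and -b 4096 is 2MiB blocks):

```shell
# Illustrative blocksize timing against a tape drive -- adjust device and data.
mt -f /dev/nst0 rewind
time tar -b 64   -cf /dev/nst0 /some/testdata   # 32KiB blocks (the default-ish case)
mt -f /dev/nst0 rewind
time tar -b 4096 -cf /dev/nst0 /some/testdata   # 2MiB blocks
```

Use a test set large enough (several GB) that drive buffering doesn't mask the difference.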
Re: Performance issues
On Thu, 25 Sep 2008 at 4:42pm, Jamie Penman-Smithson wrote The holding disk is on a local [lowly] IDE drive. The data being backed up is on the whole comprised of relatively large (couple-of-gig) files stored on a SAN over fibre. At first I thought that the adverse Have you benchmarked the SAN from this host independent of amanda? Try something like bonnie++ or even just 'tar cO /foo | cat > /dev/null'. performance was due to the IDE disk, however after disabling the holding disk it actually takes 4 hours longer to complete. I've If you can't read from the SAN fast enough to keep the tape streaming, then it *would* take even longer without a holding disk. considered using memory (/dev/shm) as a holding disk, however this is only 500 MB and I'm not sure if having such a small holding disk would make any difference. Nope. The holding disk needs to be big enough to store the whole backup image. The tape drive is part of an IBM TotalStorage 3582 tape library. LTO1? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: Performance issues
On Wed, 24 Sep 2008 at 6:10pm, Jamie Penman-Smithson wrote I'm trying to understand why amanda (v2.6.0p2, on RHEL4) is taking so long to back up just 60GB; the actual taping appears to take only a fraction of the time. I've double-checked that compression is not enabled, and in the report (see below) the estimate takes a grand total of 0 minutes. Actually writing the data to tape only took 35 minutes, but the dump time was over 4 hours, and over 11 hours without using a holding disk (holdingdisk never). The tape server and client are one and the same; it's backing up a local filesystem. What kind of hardware are we talking about here? How exactly are your disks set up? Where is the holding disk in relation to the filesystem being backed up? What does the data look like (i.e., are there a few big files, or *lots* of small files)? What type of tape drive? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: Dumps cannot be retrieved from tape...
On Tue, 23 Sep 2008 at 12:30pm, Dan Brown wrote amrestore doesn't have the sort of granularity I'd like but the shotgun approach works as well as a pinpoint approach. I was hoping I wouldn't have to restore 178GB of data for a 10MB file. The granularity of amrestore actually depends on the granularity of your disklist. When you're using tar as your dumper (and, with smb, you are), you *have* to read the whole tape file. (Note that this may also be the case with dump images -- I just haven't used dump in a long while). That's just how tar works. So if the DLE you need that 10MB file from is 178GB in size, then, yeah, you're going to read the whole thing. OTOH, as was pointed out, you *can* use amadmin to make sure you're only reading the 1 or 2 tape files you need. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
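The amadmin step mentioned above can be sketched like this (the config, host, and share names are made up for illustration; 'amadmin <config> find' reports the tape label and file number for each dump, so you can skip straight to the one or two tape files you actually need):

```shell
# Which tape, and which file on that tape, holds the dumps for this DLE?
amadmin DailySet1 find winhost //winhost/share
```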
Re: Dumps cannot be retrieved from tape...
On Tue, 23 Sep 2008 at 9:37am, Dan Brown wrote 2. How do I retrieve my file off of tape not using amrecover? This one's easy -- 'man amrestore'. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
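A minimal amrestore sketch, with hypothetical device, host, and disk names (the tape file number would come from 'amadmin find'):

```shell
# Position the tape at the right file, then let amrestore write the
# dump image into the current directory for extraction with tar.
mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 3
amrestore /dev/nst0 winhost //winhost/share
```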
Re: Parameters problem
On Mon, 28 Jul 2008 at 7:22am, Marc Muehlfeld wrote [EMAIL PROTECTED] wrote: I want to offer users the ability to recover any file (or version of a file) for 10 days. What parameters are best suited for doing the trick? dumpcycle 10 days Or do nothing; 10 days is the default for dumpcycle. Actually, the more important criterion is tapecycle. For example:

dumpcycle 7 days
runspercycle 7
tapecycle 10

would also work. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: Is there a log that tells me the block size of a Backup Job
On Fri, 25 Jul 2008 at 4:33pm, Doyle Collings wrote I put two more ISO images in my backup folder. I am now backing up 5.4783 GB. My previous math of 5.5 GB a minute using tar was flawed. With tar I am able to back up the 5.4783 GB of files in under two minutes. I ran the Amanda "fullback" backup with the new configuration... sitting in front of the server with my watch in hand. Because the holding disk is larger than my backup size, Amanda backed the job up to the holding disk before the tape drive started writing. It took nine minutes to write to the holding disk. Then the tape started writing. It took only about three minutes to go from holding disk to tape. 12 minutes total for 5.5 gigabytes. It appears that the problem is not in the tape write speed, but in what is happening on the server to prepare and back up the data for the holding disk. As an aside, why are you timing this by hand? Amanda should be sending you an email with all the details in it (including speed in dumping to holding disk and speed writing to tape). You *want* those emails. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: howto fix over-write email information
On Thu, 24 Jul 2008 at 3:56am, Snorre Stamnes wrote As you can see below, I am getting bad readings because the MM:SS and KB/s columns are overwriting each other. Is there a way to fix this? 'man amanda', search for 'columnspec'. Also, I would rather have numbers expressed in MB/s (not Mb/s), if possible! Err, what? Look at 'displayunit' on the same man page. Both of these questions are addressed several times in the list archives... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: Is there a log that tells me the block size of a Backup Job
On Tue, 22 Jul 2008 at 3:06pm, Doyle Collings wrote I am using a Tandberg Data 1x7 Magnum LTO4 Autoloader. When I use gnutar with a block size of 2048 (tar -b 4096) I can back up a 2.6GB ISO file in 37 seconds. When I use amanda, the same file takes 4 minutes. I used the following configure line when I compiled my amanda installation: /downloads/amandasource/amanda-2.6.0p1/configure --with-maxtapeblocksize=2048 --with-user=amandabackup --with-group=disk --with-configdir=/etc/amanda I then added the blocksize line to my amanda.conf:

define tapetype LTO4 {
    comment "LTO4 Library"
    length 802816 mbytes
    blocksize 2048 kbytes
    filemark 0 kbytes
}

On a whim, try "blocksize 2048". 'man amanda.conf' says that no modifier is necessary, so maybe it's confusing the parser somehow. From a quick grep through my logs, I can't see the blocksize used reported anywhere. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: I want to stop receiving this mail
On Fri, 25 Jan 2008 at 9:24am, Steve Newcomb wrote Anyway, I don't think I have anything more to contribute to this list, or it to me, and so I think it would be good for me to stop receiving this mail. Unfortunately, it's not obvious how to do that, which is the reason for this note. Erm, is <http://www.amanda.org/support/mailinglists.php> that hard to find? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: vacation programs seem problematic on this list
On Fri, 28 Dec 2007 at 2:12pm, Greg Troxel wrote Every time I post to an amanda list, I get several notices that people are on vacation. This is in violation of RFC 3834: http://tools.ietf.org/html/rfc3834 The problem seems much worse on this list compared to others (perhaps the same on -hackers), almost to the point where I am disinclined to reply to people's questions, except that I don't post often enough to remember this the next time. I'm curious if others are having this problem, and whether a policy of summarily unsubscribing people with misconfigured mail systems is in order. It's a problem on most lists I'm on (the various RedHat lists, for me, are worse than the amanda lists), and, if you dig, the culprits are almost *always* Exchange or Lotus Notes users. I know, big shock. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: SAS/Fibre Channel interface for LTO4
On Tue, 11 Dec 2007 at 4:43pm, Gavin Henry wrote We're looking at an LTO4 device and have an option for SAS or fibre channel interface for hooking up to the server and Amanda. Never dealt with this kind of interface before. How would it appear to a *nix server and Amanda? I haven't played with any such hardware yet, but I'd be shocked if a SAS device appeared as anything other than a "scsi" device. Fibre I can't help you with. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
RE: two tape drives
On Wed, 31 Oct 2007 at 1:12pm, Krahn, Anderson wrote That would almost be my case, except both of my configs would run off the same tape changer. I guess I will find out tomorrow if it works. First off, please clean up your quoting -- it's nearly impossible in your replies to figure out who said what. Secondly, yes, this can work. I've been doing this for years. I run 2 configs on the same server. Each config has its own set of slots in the loader and its own drive. I stagger the start times by 5 minutes to try to keep them from competing for the robotics. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
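The slot-per-config arrangement described above can be sketched like this (slot counts, drive numbers, and file locations are made up; the variable names follow chg-zd-mtx's changer config file, so check yours against the chg-zd-mtx documentation):

```
# Config A's chg-zd-mtx changer config: first drive, first dozen slots
firstslot=1
lastslot=12
driveslot=0

# Config B's chg-zd-mtx changer config: second drive, remaining slots
firstslot=13
lastslot=24
driveslot=1
```

With non-overlapping slot ranges, neither config can ever grab the other's tapes, and staggered start times keep the two from fighting over the robot arm.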
Re: [WAY OT] LTO Libraries
On Tue, 18 Sep 2007 at 1:03pm, Nicholas Brockner wrote Does anyone have experience with either the Qualstar RLS-8236 or the Overland ArcVault 48 (both with LTO3 drives)? Specifically, I am looking for comments related to reliability of the hardware, although it may not hurt to know about amanda compatibility as well. I have 2 Overland libraries (an AIT3 Library Pro and an LTO3 Neo2000), and they work exceedingly well. Each has had 1 drive failure, both of which were handled quickly and efficiently by Overland. I've had no issues with the robotics at all. They both work flawlessly with amanda and chg-zd-mtx. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: smbclient backups on RHEL/centos 5
On Thu, 9 Aug 2007 at 12:27pm, Joshua Baker-LePain wrote On Thu, 9 Aug 2007 at 5:52pm, Paul Bijnens wrote On 2007-08-09 17:18, Joshua Baker-LePain wrote: I haven't been able to get smbclient backups to work via a centos-5 "client". I've had them working for a long while with centos-4, but the exact same config just plain doesn't work in centos-5. I've tried the version of amanda included in centos-5 (2.5.0p2), my old standby (2.4.5p1), and the most recent (2.5.2p1). They all fail fundamentally in the same way, with a message saying: samba access error: //$WINHOST/$SHARE: session setup failed: NT_STATUS_LOGON_FAILURE: returned 1 I'm starting to suspect the samba version (3.0.23c) to be at fault. Has anyone else encountered this? Anybody worked around it? I have no problem using the centos-5 smbclient to connect to PCs. (I did have some problems connecting to Vista PCs, even with older samba versions; but that's easily solved by tweaking the registry.) $ smbclient --version Version 3.0.23c-2.el5.2.0.2 And using it through the command line: smbclient //host/share -U username -W workgroup does it give the same error? (And are you sure the password etc. is correct?) Hrmph. I was sure I'd tried that before and had it work, but now it's not working. And doing so pointed me at some issues in the samba setup that may be getting in the way. *sigh* Sorry for the noise. OK, so I'm *not* going crazy. This is a CentOS-5 Linux client and an XPSP2 'doze client. The amanda version on the Linux client is 2.5.2p1. 
smbclient by itself works fine: [EMAIL PROTECTED] ~]$ smbclient "buck\\das" -U amanda -E -d0 Password: Domain=[BUCK] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager] smb: \> But if I try it with the exact same commands I see in the amanda debug log, it doesn't work: [EMAIL PROTECTED] ~]$ smbclient "buck\\das" -U amanda -E -d0 -TXqca - ./RECYCLER > /dev/null session setup failed: NT_STATUS_LOGON_FAILURE The same thing happens if I naively pipe an "echo $SMBPASSWD" into the smbclient command. Any ideas? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: smbclient backups on RHEL/centos 5
On Thu, 9 Aug 2007 at 5:52pm, Paul Bijnens wrote On 2007-08-09 17:18, Joshua Baker-LePain wrote: I haven't been able to get smbclient backups to work via a centos-5 "client". I've had them working for a long while with centos-4, but the exact same config just plain doesn't work in centos-5. I've tried the version of amanda included in centos-5 (2.5.0p2), my old standby (2.4.5p1), and the most recent (2.5.2p1). They all fail fundamentally in the same way, with a message saying: samba access error: //$WINHOST/$SHARE: session setup failed: NT_STATUS_LOGON_FAILURE: returned 1 I'm starting to suspect the samba version (3.0.23c) to be at fault. Has anyone else encountered this? Anybody worked around it? I have no problem using the centos-5 smbclient to connect to PCs. (I did have some problems connecting to Vista PCs, even with older samba versions; but that's easily solved by tweaking the registry.) $ smbclient --version Version 3.0.23c-2.el5.2.0.2 And using it through the command line: smbclient //host/share -U username -W workgroup does it give the same error? (And are you sure the password etc. is correct?) Hrmph. I was sure I'd tried that before and had it work, but now it's not working. And doing so pointed me at some issues in the samba setup that may be getting in the way. *sigh* Sorry for the noise. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
smbclient backups on RHEL/centos 5
I haven't been able to get smbclient backups to work via a centos-5 "client". I've had them working for a long while with centos-4, but the exact same config just plain doesn't work in centos-5. I've tried the version of amanda included in centos-5 (2.5.0p2), my old standby (2.4.5p1), and the most recent (2.5.2p1). They all fail fundamentally in the same way, with a message saying: samba access error: //$WINHOST/$SHARE: session setup failed: NT_STATUS_LOGON_FAILURE: returned 1 I'm starting to suspect the samba version (3.0.23c) to be at fault. Has anyone else encountered this? Anybody worked around it? Thanks! -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: LTO-3: optimizing blocksize
On Wed, 1 Aug 2007 at 11:29am, Jean-Francois Malouin wrote Hardware: I've set up the default access device to the drives as non-compressing. The library is hooked through an LSI U320 PCI-X SCSI card to a 4x dual-core Xeon with 8GB of RAM running Debian/Etch with a 64-bit 2.6.21.5-i686-64-smp kernel. It should be beefy enough :) If you ever want to use both drives simultaneously, you'll need a dual-channel SCSI card and each drive will need to be on its own channel. Trust me on this one -- I tried every trick I could think of with 2 drives on one channel (there should be plenty of bandwidth, right!?), but couldn't get decent speeds when using both drives. Running amtapetype with different blocksizes gives me ~386GB for capacity (close enough to 400GB) but I never seem to get close to streaming:

bs      speed
32k     50482 kps
128k    50531 kps
256k    50508 kps
512k    50521 kps
1024k   50512 kps
2048k   15780 kps
4096k   15875 kps

Any hint on what I should try next? Interesting. What if you try with dd or tar rather than amtapetype? On my LTO3 drives, testing with tar, bs=32k yielded 41MiB/s while bs=2048k yielded 60MiB/s. Note also that LTO drives can throttle back to half of their native rate (80MB/s for LTO3) without shoe-shining. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
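The dd variant of that test could look like the following (device name assumed; each pass writes ~2GiB of zeros, which is fine here since the drive path is set non-compressing):

```shell
# Compare raw write rates at two blocksizes, independent of amtapetype.
mt -f /dev/nst0 rewind
time dd if=/dev/zero of=/dev/nst0 bs=32k   count=65536
mt -f /dev/nst0 rewind
time dd if=/dev/zero of=/dev/nst0 bs=2048k count=1024
```

If dd shows the same falloff at 2048k, the problem is in the driver or HBA rather than in amtapetype.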
Re: Use multiple drives in tape robot
On Tue, 26 Jun 2007 at 12:57pm, Jean-Francois Malouin wrote I'll just add that you must also configure amanda such that both configs use different ports with '--with-testing=config1' and '--with-testing=config2' and add those to /etc/services like:

amanda-config1 10080/tcp
amanda-config1 10080/udp
amanda-config2 10081/tcp
amanda-config2 10081/udp

Depending on the authentication scheme you decide to use, you might have to specify the udp and tcp port ranges so that the two configs don't overlap. Look for '--with-tcpportrange=' and '--with-udpportrange='. Hmm. Is that on the server or the client side? I didn't do any such magic, but I don't have any clients that are in both configs. Also, I'm still using 2.4.5p1. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Use multiple drives in tape robot
On Tue, 26 Jun 2007 at 4:58pm, Weber, Philip wrote Is it possible to have 2 configs using 2 separate drives in a tape robot, i.e. can they share the changer interface? I don't mean use of RAIT. I have successfully set up 1 config with chg-zd-mtx and then tried duplicating that to a 2nd config using chg-zd-mtx with a different set of tapes in the robot, and the other drive. This was successful, except one of the amdumps failed the first time it was called, saying no writable tape could be found; it worked on the second call, so I suspect there is a clash with 2 configs trying to access the changer. I do exactly this, and have never seen a collision. I kick off the 2 amdumps 5 minutes apart. Be sure to assign each config its own set of slots. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Multiple tape drives in a library?
On Mon, 14 May 2007 at 11:18pm, Jordan Desroches wrote Hi all. We have a pretty big library with multiple tape drives. I was wondering if there was a way to set up chg-zd-mtx to use multiple drives, or if I have to go learn how to use chg-juke? The way I use my multi-drive library is to run 2 simultaneous 'amdump's on the server, each (obviously) with its own config, and split the clients between the 2 dumps. It means a little more admin overhead, but you get the benefits of using the drives in parallel. Note that doing this with LTO3 drives requires a fairly beefy server to keep both drives happy. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: dead processes
On Mon, 23 Apr 2007 at 1:53pm, Don Murray wrote Note the "selfchecks" that are running with "D" process state - meaning they are sleeping in the kernel and are uninterruptible and therefore unkillable. So - it looks like I need to reboot my client before I can get a backup from it again, which is a little harsh. I was wondering whether anyone knows why an Amanda 2.4.4 client would get wedged like that. Is there something I can do to minimize the problem? Also, if anyone has ideas about avoiding the estimate issues altogether, I would appreciate any advice. Look in /tmp/amanda on the clients for the *debug files relating to the hung processes. They should have more details on what went wrong. Also, the alternate estimate methods went in before 2.5 -- I'm running 2.4.5p1 and 'man amanda.conf' says "estimate client|calcsize|server". -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Trouble with Quantum DLT-S4 tapedrive
On Thu, 19 Apr 2007 at 12:10pm, Richard Stockton wrote At 10:58 AM 4/19/2007, Toomas Aas wrote: Are you able to write any significant amount of data (more than Amanda's 32 kB label) to tape with utilities such as dump or tar? No. tar gives "Write error: Operation not permitted". That's a different error than amanda is getting. What user did you try this as? Try again with root and/or the amanda user. I'm at a loss here, is anyone else using the Quantum DLT-S4 with amanda? I would appreciate any enlightenment or ideas of what to try next. So far this looks like an OS/hardware issue, *not* an amanda issue. Work on being able to read/write (with dump or tar) large amounts of data from/to your tape drive. Once you've got that working without issue, bring amanda back into the equation. As to the original system logs you posted, I agree with Toomas that it looks awfully like a hardware or driver issue. Perform all the usual SCSI voodoo (do *not* skip the goat sacrifice), and possibly swap out any components/cables you can. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: tape usage algorithm
On Tue, 17 Apr 2007 at 1:23pm, Brian Cuttler wrote I'm sure this is addressed somewhere, but I've never seen it (perhaps because I missed it) explicitly discussed on the list. My assumption on tape filling is that if dumps are still in progress, amanda will try to write each DLE to tape as it completes. I have no idea what the algorithm is for DLE taping if there are multiple completed DLEs in the work area. I have never been able to figure out the tape ordering when amflush was being run. Is there any sort of taper delay algorithm to optimize tape usage? Look in amanda(8) for dumporder and taperalgo as ways to try to optimize tape usage. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Amanda Support for Tape Libraries
On Wed, 4 Apr 2007 at 10:42pm, Jacques VB Voris IV wrote I have been searching through the resources that I can to answer this question, but haven't found a definitive yes or no: Does Amanda support tape libraries with robotics, such as the StorageTek L700 or the new Sun C4? I have seen references that imply it might, but I want to be sure. Well, keep in mind that amanda is, at its heart, simply a(n exceptionally good) backup scheduler. It uses native OS tools to perform most tasks (like using tar to actually get bits off the disks). Thus, if your OS can make the library work, amanda can use it. That being said, many folks use amanda with libraries. I have both an Overland Library Pro and an Overland Neo 2000 working rather well with amanda, using Linux servers. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Compression
On Wed, 4 Apr 2007 at 11:39am, Jon LaBadie wrote On Wed, Apr 04, 2007 at 11:01:20AM +0200, Sebastian Henrich wrote: How can I put the ratio in the dumptype spec? Can't. The history of last 3 dumps at full and incremental levels is recorded in the curinfo file for the DLE after a dump. I wonder (don't know) if you could clear or edit that file or the relevant lines. Actually, you *can* control the initial guess at compressibility (on a DLE specific basis, even). Look in 'man amanda.conf' -- what you're looking for is "comprate". -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
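Per 'man amanda.conf', comprate can be set inside a dumptype; a sketch (the dumptype name and ratios are made up, and the "global" base dumptype is the usual convention, so adapt to your config):

```
# Sketch: seed amanda's compressibility guess for DLEs using this dumptype.
# comprate takes the full-dump ratio, then the incremental ratio.
define dumptype comp-seeded {
    global
    compress client fast
    comprate 0.50, 0.50   # assume output is 50% of input until real history accrues
}
```

This only shapes the initial estimate; after a few runs, the recorded history in the curinfo file takes over.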
Re: Question about backup schedules
On Wed, 4 Apr 2007 at 2:48pm, Michael Keightley wrote I have set up Amanda to back up to vtape. I want to do a full backup once per week, incremental other days (weekdays only), but keep 1 month of backups. I'm a bit confused about how to set this up in amanda.conf. Would this work?

dumpcycle 1 weeks
runspercycle 5
tapecycle 20 tapes

That's exactly right. Will having 20 vtapes mean it does 20 backups? Yep. It won't ask for the first vtape again until it's used all 20. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: questions on VXA-2/X23 tape drive configuration
On Mon, 2 Apr 2007 at 4:23pm, Jon LaBadie wrote On Mon, Apr 02, 2007 at 03:24:17PM -0400, Freels, James D. wrote: I have a new set of X23 tapes from Exabyte (now Tandberg) that I want to make sure I get the most out of with my AMANDA backup system. The tapes are rated at 80/160 GB (uncompressed/compressed) for my VXA-2 drive. A few questions: 1) compression I read that hardware compression in the tape drive should not be used with AMANDA. If you want compression, use the computer software (gzip) to do this in the AMANDA configuration. Is this still true? Lots of reasons on both sides to choose one or the other. I'd guess more than half of amanda installations use software compression. But make sure you don't attempt to use both. The one caveat to this is that you *can* mix and match hardware and software compression iff your tape drive's hardware compressor is smart and recognizes uncompressible data. LTO does this, as does (I believe) S-AIT (but not regular AIT). -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Delaying tape flush ?
On Fri, 30 Mar 2007 at 3:15pm, Guy Dallaire wrote I noticed that my LTO-2 tape drive was "starving" for data and shoe-shining. I was dumping to tape at 12MB/sec and the minimum speed to prevent shoe-shining on an LTO-2 drive is 19MB/sec. So I did a check by timing a 'dd' of 10GB to the tape drive. It was indeed topped out at 12MB/sec; it took 13m51s. What exactly was the dd command you used to test with? Specifically, what blocksize did you use? Now, I replaced the antediluvian Adaptec 2940 with an Adaptec 29160 Ultra160 card and redid the exercise. This time it took 6m51s, so I was running at about 24MB/sec. That was OK. Not very fast, but OK. Now, when I looked at my amanda log this morning, I was very sad to see the following: *snip* Avg Tp Write Rate (k/s) 17549.5 20734.5 12469.2 *snip* Now, I figure the problem is that when I did the "dd" test, all my holding disk had to do was read the files. Now it has to read the files while amanda is pounding at it, writing dump files coming from the clients... and unfortunately, it seems my single SATA disk can't keep up with all those I/Os. Quite possible. What are my options? We can't afford to buy a beefy server just to do backups. We are using a CentOS beige commodity box. Even then, the server seems to be I/O bound. What could I do anyway? Buy a faster disk subsystem? What kind of subsystem? The first thing to look into is whether a larger blocksize will help you. On my LTO3, going to a 2MB blocksize significantly sped things up. Test with larger blocksizes (via dd or tar) and, if they help, recompile amanda to allow you to use a bigger blocksize. In terms of beefing up your disk system, if you could just add a drive or two and do a software RAID0, that would speed things up. But try the blocksize fix first. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
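A quick way to see whether the holding disk itself is the bottleneck is to time a raw sequential write to it, independent of amanda (the path here is only a stand-in; point it at your real holding disk):

```shell
# Rough sequential-write check of a holding-disk path. conv=fdatasync forces
# the data to disk before dd reports a rate, so the number is honest.
dd if=/dev/zero of=/tmp/holdtest bs=2048k count=64 conv=fdatasync
```

If the rate dd reports is below the tape drive's streaming floor, no amount of amanda tuning will help until the disk subsystem is faster.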
Re: LTO Tape Bar Codes...
On Tue, 20 Mar 2007 at 1:44pm, Chris Hoogendyk wrote Byarlay, Wayne A. wrote: Greetings Amanda users, Somewhat off-topic since it concerns hardware, but, hey, it's still a backup/archive question... Does anybody know of a utility which allows me to print my own LTO3 bar code labels? Man, these things are a RIP-OFF to buy! One sheet of 20 stickers, like $70! Sounds like a rip-off. Are you getting custom labels? Insight Public Sector has prices of about $50 for sheets of 100, and $98 for sheets of 200. They are probably just a bit higher than that for those who are not public sector. Or, do what I do. Buy full sheet label paper, use GNU barcode to make the labels, print 'em, and then cut to size. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
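The GNU barcode workflow mentioned above can be sketched like this (the label names are made up, and the page-geometry flags must be matched to your particular label sheet; Code 39 is the symbology LTO loaders read):

```shell
# Generate a PostScript page of Code 39 tape labels with GNU barcode,
# reading label text from stdin, then print and cut to size.
printf '%s\n' DAILY01 DAILY02 DAILY03 CLN001 | \
    barcode -e code39 -u mm -t 2x5 -o labels.ps
```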
Re: problem with client on 64bit machine
On Wed, 14 Mar 2007 at 1:17pm, Kenneth Kalan wrote To answer an earlier question, nothing in /tmp/amanda, doesn't exist. *snip* Just to double check once more, I've killed iptables on both boxes (server & client), still not working. What's the output of /sbin/chkconfig, specifically the bits relating to xinetd? Also, you mentioned earlier that you have a script which puts the amanda file in /etc/xinetd.d. Did you modify that file to reflect the location of the amandad binary on the 64bit client vs. on 32bit clients? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: problem with client on 64bit machine
On Wed, 14 Mar 2007 at 8:36am, Kenneth Kalan wrote I cannot get the 64-bit box to back up. When I run amcheck, it replies with a selfcheck request timeout. Tried turning off the firewall, but no help. I install my boxes the same way: a script puts the server info into /var/lib/amanda/.amandahosts as well as putting the amanda file into xinetd.d. This works fine on all the 32-bit boxes, even ones configured after the 64-bit box. Does anything get created in /tmp/amanda on the 64-bit client? If so, let's see it. If not, have you fired up tcpdump and/or wireshark to see what's happening to the amanda traffic? I've got 32-bit servers backing up 64-bit clients (and vice versa), so this can work. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
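A tcpdump starting point for watching the handshake (interface name assumed, and 10080/udp is amanda's usual default port; adjust if your build used --with-udpportrange or a nonstandard services entry):

```shell
# Run this on the client while amcheck runs on the server: do the requests
# arrive at all, and does anything go back?
tcpdump -n -i eth0 udp port 10080
```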
Re: Detecting shoe shining with modern libraries?
On Thu, 8 Mar 2007 at 4:35pm, Michael Loftis wrote --On March 8, 2007 4:57:51 PM -0500 Jon LaBadie <[EMAIL PROTECTED]> wrote: Does your drive have an activity light. When I first put in my LTO-1, the card was really old and gave low rates. A newer card more than doubled it. Another difference was the activity light. It is on nearly solid now, it was on/off/on/off on the old slower scsi card. Buried in the library. Can't view the drives at all with any of the Spectra libraries. The T50 if you open it, and remove a storage cartridge you can kinda see the drives but they're pretty obscured by the loader mechanism. My Overland Neo2K has a fairly large touchscreen on the front which presents all sorts of info, including drive status. When I was trying to drive 2 drives on 1 SCSI channel, I could see the drives going from Read or Write to Idle and back again. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: SCSI card recommendations?
On Thu, 8 Mar 2007 at 2:07pm, Guy Dallaire wrote There's no easy way to tell whether it's shoeshining or not. The performance is fine for us. We're not backing up a lot of data. I should probably put a better SCSI card in the server though. It never occurred to me that the LTO could not throttle to lower than 20 MB/sec. The performance may be fine, but shoeshining is Bad for both media and drive life. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: SCSI card recommendations?
On Thu, 8 Mar 2007 at 1:26pm, Guy Dallaire wrote We have an OVERLAND dataloader Xpress LTO2 library (the 108 GB/hour model) connected through an old Adaptec AHA2940U adapter and have no problem whatsoever. Running under CentOS 4. We get about 16 MB/sec to the tape on average. Erm, the rated speed of LTO2 is 40MB/s. AIUI, LTO can throttle to half its rated speed, but below that it's forced to shoeshine. Are you seeing that at all? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: SCSI card recommendations?
On Thu, 8 Mar 2007 at 12:43pm, Greg Troxel wrote I am about to buy a couple of LTO-2 drives to replace my DDS3 drives, which are no longer big enough. I'm looking at HP and IBM in particular, but it seems they are all Ultra160 or Ultra320. My understanding is that these are all both wide and LVD, and use either the HD 68-pin or the VHDCI connector, and that I can cable either of these to any Ultra160/Ultra320 controller (and probably Ultra2 wide LVD). Is it likely that an Adaptec 2940-U2W would work with such a drive? It's said to be LVD, so the only issue should be topping out at 80 MB/s. The drive would be the only thing on the bus. The rated speed of LTO2 is 40MB/s. Note, however, that I found it impossible to drive 2 LTO3 drives (80MB/s rated) over a single U320 channel, which, theoretically, should have had plenty of bandwidth. So, if it were me, I'd just get a U320 card. Can anyone recommend a SCSI card to use with LTO-2 drives that will fit in a normal PCI slot and work with NetBSD (netbsd-4 branch, preferably)? I've had good luck with LSI boards on Linux. I know nothing of the BSDs, though. LSI's single channel U320 board (cunningly named the LSIU320) is pretty affordable, IIRC. Also, comments about media reliability would be appreciated. It seems the LTO concept is that everything just 100% works, but I'd be interested to hear "stay away from Brand X tapes; they are flaky" comments. My library vendor recommended Fuji media (this was about a year ago), and I haven't had any issues with them. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amdump performance problem
On Wed, 7 Mar 2007 at 2:30pm, Kenneth Berry wrote
              ---Sequential Output----            ---Sequential Input--   --Random--
              -Per Char- --Block--- -Rewrite--    -Per Char-  --Block---  --Seeks---
Machine   MB  K/sec %CPU K/sec %CPU K/sec %CPU  K/sec  %CPU  K/sec  %CPU    /sec  %CPU
        1000  31610 98.7 62640 33.1 52888 21.3  37878 100.0 933421 100.0 44261.0 192.5
For any sort of disk benchmarking, your test set should be at least 4X the amount of RAM in the system. So, unless this system only has 256MB of RAM, you need to re-run bonnie with a bigger working set. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amdump performance problem
On Wed, 7 Mar 2007 at 11:45am, Kenneth Berry wrote The computer mbthome is both the amanda server and client. It is a Dell PE2650, dual Xeon processors, 2GB RAM. The tape unit is LTO3 on a dedicated SCSI controller. Internal HDDs are four SCSI Ultra320 10K drives configured as hardware RAID5. On top of this 100GB RAID5 disk is LVM Volume00, allocated as shown below. One additional internal SCSI Ultra320 10K drive of 300GB capacity is allocated to LVM Volume02, and usage can be seen below. All the internal HDDs share a common percraid controller. I have an external RAID5 set of drives on a third SCSI controller, on which LVM Volume01 resides. It should not be a factor; currently it is unused. Over the past weekend I migrated all the /home data from the external set of drives to the single large internal drive, thinking the external drives were the problem. But this reorganization made no difference. Some quick things to look into: 1) Benchmark your various volumes with both bonnie++ (single thread streaming) and tiobench (multithreaded). This will give you some baseline numbers. You can also play with multiple instances of dd. I have no idea how good those perc controllers are, as Dull tends to obfuscate them compared to their LSI underpinnings. 2) LTO3 is *fast*. Reading from a 4 disk RAID5 should be able to keep it streaming, but simultaneously writing to that same RAID5 is likely to be *very* slow... Actually, looking at your tape write speeds in the original mail, they're barely where they should be. LTO3 can throttle to 1/2 its native speed of 80MB/s before it has to resort to drive/tape-lifetime-degrading stutter-stop-restart behavior. One thing that significantly helps is increasing your blocksize from amanda's standard 32KB -- I use 2MB. This requires recompiling amanda. /dev/sdd1 141003764 36925740 96915448 28% /amanda Erm, where does sdd fit in in terms of the LVM volumes?
Also, both of Jon's suggestions (spindle numbers and filesystem profile on /home) are good ones to look into. Again, you can do your own benchmarking outside of amanda to get a more controlled look at what's going on. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
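Benchmarking outside of amanda can be as simple as timing a big streaming write with dd. A minimal sketch (the path and size are placeholders; for a meaningful number, write roughly 4x your RAM so the page cache can't flatter the result):

```shell
# Streaming-write check of a holding/dump area.  The path and the tiny
# 16MB size here are just for illustration -- use e.g. bs=1M count=8192
# on a 2GB-RAM box.  conv=fdatasync makes dd flush before reporting, so
# the final line shows real disk throughput rather than cache speed.
dd if=/dev/zero of=/tmp/stream-test bs=1M count=16 conv=fdatasync 2>&1 | tail -n1
rm -f /tmp/stream-test
```

dd's last line reports the effective transfer rate; read performance can be measured the same way with if=&lt;bigfile&gt; of=/dev/null.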
Re: amdump performance problem
On Wed, 7 Mar 2007 at 9:58am, Kenneth Berry wrote I have been working on a performance problem for about a month and I am out of ideas. I hope someone can help me. What is happening is that backups of a large DLE that start very early seem to dump at a very slow rate, whereas dumps of DLEs on the same host, physical drive, etc. that start hours later dump at an expected rate. The amstatus snapshot below was taken at 9:30:00. The backups start at midnight; notice that the dump rate for mbthome:/home/public is about 830K/sec, but DLE mbthome:/home_a_i, which has been running for 21 mins., is at a dump rate of about 9500K/sec. What is the hardware and OS setup on this client? Please be very detailed. What about the server? *snip* -- - mbtas00 / 0 2177990 816539 37.5 8:121659.0 1:0313057.1 *snip* *Please* look into the columnspec parameter in your amanda.conf to make that line readable. It'd make things a lot easier. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
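For the record, columnspec takes a comma-separated list of Column=space-before:width entries; a sketch along these lines (the column names are real amanda.conf ones, the widths are just guesses to tune) widens the size and rate columns so they stop running together:

```conf
# amanda.conf -- example widths only, adjust to taste
columnspec "HostName=0:12,Disk=1:18,OrigKB=1:10,OutKB=1:10,DumpRate=1:8,TapeRate=1:8"
```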
RE: SDLT-4 compared to LTO-3
On Fri, 2 Mar 2007 at 1:07pm, Anthony Worrall wrote Sorry a bit of dyslexia I meant DLT-S4 not SDLT-4 Here is a reference http://www.quantum.com/Products/TapeDrives/DLT/DLT-S4/Index.asp Well, that was 404 for me but I found it eventually. I have no experience with DLT so I can only comment on the specs. 800GB native is nice (obviously), but filling that at "only" 60MB/s is going to take a while, so keep your backup window in mind. LTO-3 (400GB native) is rated at 80MB/s and LTO-4 (800GB native, drives due 1H07) is rated at 120MB/s. Also, be sure there's a good upgrade path. LTO is rather linear (erm, by definition I guess), while DLT looks (to these unfamiliar eyes) like a bit of a mess. Another nice feature of LTO is the "smart" hardware compression (i.e. it won't try to compress (and thus expand) already compressed data). Back when I was looking, I was told that S-AIT had that as well. I'd check that about DLT-S4 as well. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: SDLT-4 compared to LTO-3
On Fri, 2 Mar 2007 at 10:31am, Anthony Worrall wrote This is not strictly an amanda question, but I thought I would see if anyone has any views on SDLT-4 compared to LTO-3. We are currently looking at replacing our tape devices and are looking at SDLT-4, which seems to be about the same price as LTO-3 but offers twice the capacity. Has anyone got any experience of these drives? I am told by our supplier that they are selling many more LTO-3 than SDLT-4. Is it just that SDLT-4 is newer, or is there some reason? Maybe I'm just not looking hard enough, but all I can find is SDLT-II, which is 300GB native -- got any links? IIRC, LTO-4 has been announced, but I can't find any products yet. The reason I like LTO is that it's an open standard with a well defined roadmap and an emphasis on compatibility among generations. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Query about a backup failure
On Wed, 28 Feb 2007 at 6:39am, Yogesh Hasabnis wrote FAILURE AND STRANGE DUMP SUMMARY: /vol/vol1/home lev 2 FAILED [data timeout] /vol/vol1/home lev 2 FAILED [dump to tape failed] Amanda was performing a backup directly to tape (i.e. no holding disk was involved), and stopped receiving data from the stream for 'dtimeout' length of time. So it killed that particular dump and told you about it.
/-- /vol/vol1/home lev 2 FAILED [data timeout]
sendbackup: start [:/vol/vol1/home level 2]
sendbackup: info BACKUP=/bin/gtar
sendbackup: info RECOVER_CMD=/bin/gtar -f... -
sendbackup: info end
\
I would like to know what the above messages from Amanda mean and what may have caused the failure. For more details on why the dump failed, you'll need to look at the debug files in /tmp/amanda on the client and/or in the client's system logs. Secondly, since I have a limited number of tape media, I wanted to use the same tape media on which the backup failed. So as per the "Quickstart" document, I used the commands
$ amrmtape daily Daily-03
$ amlabel -f daily Daily-03
and again executed the same backup. I would like to know whether this is the right way of doing things or whether I should compulsorily use new tape media and then execute the backup. If that was the only image on that tape, then that's a fine strategy. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Windows clients
On Wed, 28 Feb 2007 at 12:42pm, [EMAIL PROTECTED] wrote I am going to install (well, if all goes well, that is ;-) ) Amanda on an Open Suse 10.2 the day after tomorrow; I know Novell have dropped smb support from the kernel. Does anyone know whether there are any pitfalls if SMB support is not compiled into the Linux kernel as far as Amanda is concerned? Amanda uses smbclient to back up Windows hosts, which (on RH based systems, at least) is part of the samba-client package. It neither has nor needs any kernel support. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Hardware Compatibility Question...
On Fri, 9 Feb 2007 at 5:25pm, Michael Loftis wrote The LSI cards I've seen recently, in Linux land, are using the symbios drivers. Specifically the sym53c8xx_2 drivers in 2.6, and the sym53c8xx in 2.4, both equally bad. 2.6 Linux has other problems, like the inability to Erm, all of the U320 LSI boards use the Fusion MPT drivers. change/set compression without a tape loaded; same problem if you issue a rewind or, I think, anything other than a status command via mt to a tape drive -- if there's no tape loaded it'll sit there forever, no CTRL+C response or anything. Might be Debian specific, not really sure. Compression etc. can be tied to devices with stinit. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Hardware Compatibility Question...
On Fri, 9 Feb 2007 at 3:22pm, Michael Loftis wrote And I don't believe with those SATA drives you'll be able to run the library, even with one tape drive, at full speed. LTO-3 peaks out at 80 MB/sec; I see 60 MB/sec routinely in production, and currently my tape host can't keep up with that. SATA drives stream pretty well, but AMANDA's spool area access resembles random I/O, not really streaming I/O, and it's really hard to keep tape drives fed at that rate. 10K rpm spindles might be able to, even on SATA. The first server I had my Neo2K library on had 4 7200 RPM SATA drives in a RAID0, and it could easily keep up with 1 amdump run to an LTO3 drive (I saw 60-70MB/s to the tape). I didn't get a 2 channel card until after I'd upgraded the server, so I don't know how well 4 spindles could've dealt with 2 simultaneous amdumps. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Hardware Compatibility Question...
On Thu, 8 Feb 2007 at 3:26pm, A R wrote Server: OS: Debian Linux 2U - 6 SATA HD slots Pentium dual core processor (anyone have trouble with dual core?) 2GB RAM (too much? too little?) 2TB of HD spooling space, RAID0 w/four 500GB drives 120GB of operating system space on RAID1 with two 120GB drives LSI Logic LSI22320 Ultra320 SCSI Dual Channel PCIx card I've no experience with Debian, so I can't comment on that. Dual core works just fine. For future proofing and power reasons, I'd recommend either Core2 Duo or Xeon 51xx (or Opteron) over anything Pentium at this point, but that doesn't have too much to do with amanda (which doesn't use much CPU in general). Make sure your motherboard has enough PCI busses and bandwidth for all the traffic amanda will generate. 2GB RAM is plenty. How you set up your holding disk space depends on how you intend to run amanda. I run 2 simultaneous amdumps on my server (1 to each of the LTO3 drives in my Neo2K), and so I have 2 separate RAID0 arrays, each with 4 spindles. 4 spindles/array may be overkill, but disk is certainly not a bottleneck in my setup. Loader: Overland ARCvault24 w/ two LTO-3 tape drives I have both an AIT3 Powerloader and an LTO3 Neo2K working quite well with amanda. I see no reason why the ARCvault shouldn't work as well (although, admittedly, I haven't looked too hard at it)... After looking at the manual, I see that the robotics are on the same SCSI ID as the first drive, but a separate LUN. RH derived distros require an option in /etc/modprobe.conf to get that working. I'm not sure about Debian. One note -- a quick look at the ARCvault24 docs doesn't show details on the cabling for a 2 drive setup. To run both LTO3 drives at the same time, they *need* to be on their own SCSI channels. If you can't do that with the 24 and you need that capability, you may need to look at the Neo2K. Good luck. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
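The modprobe.conf option referred to above is typically a max_luns setting for the SCSI midlayer, so the changer at LUN 1 actually gets probed. A sketch (the value 128 is just a generous example, and Debian-style systems put this under /etc/modprobe.d/ instead):

```conf
# /etc/modprobe.conf -- probe more than LUN 0 so library robotics that
# share a drive's SCSI ID on a separate LUN are detected at boot
options scsi_mod max_luns=128
```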
Re: strange failure with an lto-3
On Wed, 7 Feb 2007 at 5:18pm, Kai Zimmer wrote As far as I know, the 53c1030 is a ROMB (RAID on motherboard) controller. RAID controllers are designed to work with disks, not tape drives. Can you disable the RAID functionality in the server BIOS? If not, try another controller -- your error messages seem very hardware specific. I use the same chip (on an LSI 22320 controller) for my LTO3 drives with no issues. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: A TAPE ERROR OCCURRED
On Sat, 3 Feb 2007 at 12:21pm, Adriana Liendo wrote In the system log there are a lot of messages related to amanda, but all of them have to do with sendmail. I guess it has to do with the message I receive, but I don't know what else I should look at. Look for messages regarding the tape subsystem/device. It should tell you what the error was. Use the email from amanda to tell you what time the error occurred, and look in the log around that time stamp. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: A TAPE ERROR OCCURRED
I already sent a response to this message, last time you sent it: \begin{quote} Date: Wed, 31 Jan 2007 09:06:53 -0500 (EST) From: Joshua Baker-LePain <[EMAIL PROTECTED]> To: Adriana Liendo <[EMAIL PROTECTED]> Cc: amanda-users@amanda.org Bcc: [EMAIL PROTECTED] Subject: Re: A TAPE ERROR OCCURRED On Wed, 31 Jan 2007 at 8:33am, Adriana Liendo wrote FAILURE AND STRANGE DUMP SUMMARY: humbolt /sda/seis lev 0 FAILED [out of tape] humbolt /sda/seis lev 0 FAILED ["data write: Broken pipe"] humbolt /sda/seis lev 0 FAILED [dump to tape failed] These messages indicate that the dump was going straight to tape (i.e. you weren't using a holding disk). Therefore, there wouldn't be anything to amflush. The system log should have more info as to why you got the "short write" error. \end{quote} -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: A TAPE ERROR OCCURRED
On Wed, 31 Jan 2007 at 8:33am, Adriana Liendo wrote FAILURE AND STRANGE DUMP SUMMARY: humbolt /sda/seis lev 0 FAILED [out of tape] humbolt /sda/seis lev 0 FAILED ["data write: Broken pipe"] humbolt /sda/seis lev 0 FAILED [dump to tape failed] These messages indicate that the dump was going straight to tape (i.e. you weren't using a holding disk). Therefore, there wouldn't be anything to amflush. The system log should have more info as to why you got the "short write" error. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Sony AIT5 tapetype
On Fri, 19 Jan 2007 at 3:10pm, Chris Hoogendyk wrote Do you achieve 80MB/s on your LTO3? The numbers in the amdump reports range from 55-70MB/s. And that's running backups to both drives in my library. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Sony AIT5 tapetype
On Fri, 19 Jan 2007 at 11:01am, Chris Hoogendyk wrote
define tapetype SONY-AIT5 {
    comment "SONY AIT5 8mm tape drive"
    # data provided by Chris Hoogendyk <[EMAIL PROTECTED]>
    # produced by whacking it for 10 hrs or so with amtapetype
    # on a Sun E250 with a Dual Ultra320 LVD SCSI PCI card
    length 389120 mbytes
    filemark 0 kbytes
    speed 24401 kps
}
I'm out of the AIT loop -- what's the rated speed? 24MB/s doesn't seem all that fast (then again, I'm used to LTO3), and I wonder if you could increase that with some tweaking. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Media rotation and backup scheduling
On Fri, 19 Jan 2007 at 3:05am, Yogesh Hasabnis wrote We have data which is approximately 220GB in size. We have got an HP Ultrium 960 tape device for our backups, with 10 pieces of media (400GB LTO3). I have a few queries about media rotation. I plan to use a dumpcycle of 5 days (5 working days of a week). If you're only going to run 'amdump' on weekdays, use dumpcycle=1 week, not 5 days. Otherwise the weekend will confuse amanda. So, for your model:
    dumpcycle 1 week
    runspercycle 5
    tapecycle 10
Suppose I use a media set of 5 media for week1, use the set of remaining 5 media for week2, and use the media sets alternately for alternate weeks. When I reuse the first media set for week3, will the earlier backup on the media set be overwritten, or will the backup of week3 coexist with the earlier backup? Amanda never appends to tapes -- that's a design decision. So when a tape gets reused, whatever is on it gets overwritten. The man page of amanda.conf says that "The number of tapes in rotation must be larger than the number of tapes required for a complete dump cycle." What does this exactly mean? So in my case, if my dumpcycle requires 5 media, do I need to use more than 5 tapes in my tapecycle, and why? This all has to do with amanda not appending to tapes. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Tapetype defintion for HP Ultrium 960
On Tue, 9 Jan 2007 at 4:55am, Yogesh Hasabnis wrote I would be grateful if anybody can forward me the tapetype definition for an HP Ultrium 960 LTO3 tape device (external). The SCSI controller used is an HP 374654-B21 - 64-bit Single Channel Wide Ultra320 SCSI This is what I use for my LTO3 drives:
define tapetype LTO3comp {
    # All values guesswork :) jlb, 8/31/05
    # except blocksize ;) jlb, 9/15/05
    length 42 mbytes
    blocksize 2048
    filemark 5577 kbytes
    speed 6 kps
}
Note the comments. I've got hardware compression on, but I also use software compression (NOTE: do not try this on non-LTO tape drives) and the rest of my data isn't that compressible. Controller. Basically, on what factors does the tapetype definition depend? Else if I need to give the amtapetype command, is the command "amtapetype -t /dev/nst0" sufficient? Or do I need to specify additional arguments to the command? How much time will this command take? 'man amtapetype', specifically the -e option (and -b). -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: preparing tests/basic questions
On Tue, 9 Jan 2007 at 2:22pm, Martin Marcher wrote * How can I use amanda without a tape drive (I haven't bought one yet, but I'd like to start testing asap)? Is it even smart to do so, or should I first invest money and then test? 'man amanda', and look at the section regarding the FILE driver. * I do understand the concept of "Not making a full backup every friday", but I didn't quite understand how, if amanda takes care of when all of this is done, I find out when the last full backup was (or if that kind of thinking even applies to amanda). 'man amadmin', specifically 'amadmin find'. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
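A minimal "tapeless" sketch of the FILE-driver setup mentioned above (the paths and the tapetype name/size are made up for illustration; check the amanda man page for the exact syntax your version expects):

```conf
# amanda.conf fragment -- write "tapes" to a directory instead of a drive
tapedev "file:/var/amanda/vtapes"
tapetype HARDDISK

define tapetype HARDDISK {
    length 2048 mbytes   # how big each pseudo-tape may grow
}
```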
Re: Tape errors, hardware?
On Mon, 8 Jan 2007 at 12:10pm, Frank Smith wrote Oops, sorry I left that out: AIT3 What is different about writing filemarks (which fails) and writing large streams of data (which works)? Couldn't tell you. But I have seen tape drives able to write small amounts of data but not large streams that got better with cleaning. Not AIT mind you (my AIT library cleans itself as needed), but it's worth a shot before springing for a new drive. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Tape errors, hardware?
On Mon, 8 Jan 2007 at 11:39am, Frank Smith wrote Just trying to verify that I'm having an actual hardware error. Backups on one tape server (that's been in use for years) failed with the following:
taper: tape archive03 kb 0 fm 0 writing filemark: Input/output error
taper: retrying q42:/d5/backups/oracle/Dmp.0 on new tape: [writing filemark: Input/output error]
taper: tape archive04 kb 1796480 fm 1 writing file: Input/output error
and everything remained on the holding disk. From using dd I can see that Amanda is successfully updating the date in the header block. I can successfully run amlabel on the tapes, and I can use tar to write and read a tar file to/from the tape. However, 'mt eof' fails, and gives the same error in the system logs as when Amanda runs:
kernel: st1: Error with sense data: <6>st1: Current: sense key: Medium Error
kernel: Additional sense: Write error
kernel: Info fld=0x1
Is it definitely a drive failure when data can be written but not EOF marks, or is there something else I should check before replacing the drive? What sort of drive? Does it just need a cleaning? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Is there a way to force an amdump - Monthly Backup at end of day from the command line.
On Fri, 5 Jan 2007 at 12:20pm, Chuck Amadi Systems Administrator wrote I had a scare yesterday with our RAID5: a disk died. Hence I have ordered a hot spare and an additional disk. But I need to do a full amanda backup prior to tar'ing up my file system. Currently I run a daily incremental backup and a full amanda backup on the last day, a Friday. Is there a way to force an amdump monthly backup at the end of the day from the command line? I don't fully understand your setup, but there are a couple of ways to force a full backup: 1) In your normal 'daily' setup, run 'amadmin $CONFIG force' for each DLE. 2) Copy your daily config's amanda.conf and disklist into a new config (e.g. Archive) and change your dumpcycle to 0 in the new config. Then run that config. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
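The first option can be scripted; a sketch (the config name "Daily" and the DLE list are placeholders, and the echo is there so you can eyeball the commands before piping them to a shell as the amanda user):

```shell
# Print (not run) an 'amadmin force' for each DLE to be promoted to a
# full dump on the next amdump run.  "Daily", client1, and the paths
# are example names only.
for dle in "client1 /home" "client1 /var"; do
  echo amadmin Daily force $dle
done
```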
Re: Hardware compression and dump size problems
On Wed, 3 Jan 2007 at 12:01pm, Chris Cameron wrote I have a DLT-8000 with 40/80 Gig tapes. I want to back up a 50 gig partition and am going to use hardware compression. Amanda says the dump won't fit. *snip* What do I need to do to have AMANDA try writing this all to tape? I realize it can't know how well the hardware compression will do, but trying to fit 50 gigs compressed on a 40 gig tape doesn't seem unreasonable. To use hardware compression, you have to "lie" to amanda about your tape length. Increase 'length' in your tapetype by an amount proportional to how compressible you think your data is. If you end up running into EOT frequently, nudge the length down. I'm running Amanda-2.4.4p4. I've tried "comprate 0.50" with no success. comprate only applies to SW compression. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
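Concretely, the "lie" is just an inflated length in the tapetype. A sketch for a 40GB-native DLT-8000, assuming roughly 1.3:1 real-world compressibility (pure guesswork; adjust after watching a few runs):

```conf
define tapetype DLT8000-HWCOMP {
    # Native capacity is 40GB; claim ~52GB on the assumption of ~1.3:1
    # hardware compression.  Drop this back down if amdump keeps
    # hitting EOT.
    length 53248 mbytes
    speed 6000 kps   # placeholder value; measure yours with amtapetype
}
```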
Re: backup speed
On Wed, 27 Dec 2006 at 9:45am, Felipe Garrido wrote I have a question about backup speed: in my case it's about 500 KB/s, and my network is 100 Mb/s. I want to know if that's normal or if I have a configuration problem. Well, it's a little tough to tell what the problem is without *a lot* more detail about your setup. What sort of hardware on the client and server ends? Are you using a holding disk? Is there a special value for the netusage parameter that I must use? I tried with 1024 kbps (default) and 10 kbps values and the result was the same. netusage is an upper limit that amanda checks before firing off additional dumpers. It doesn't actually impose any limits. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: 2.5.1p1 - Error connecting to windows clients
On Fri, 8 Dec 2006 at 10:01am, Ryan Castleberry wrote amandapass:
    //winpc/share backup%password
disklist:
    winpc.mydomain.com //winpc/share samba
Shouldn't that be someunixboxwithsamba.mydomain.org? Um, no. Yes, I am running amanda and samba on a debian box, but when I am trying to back up a Windows box with the above config I get the reported error... You're missing the point. When you back up a Windows box with amanda, the actual client you put in the disklist is the *nix box that connects to the Windows box via smbclient. So, your DLE for the Windows client *should* be:
    someunixboxwithsamba.mydomain.org //winpc/share samba
where someunixboxwithsamba.mydomain.org is the system on which your amandapass file from above resides. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Can't determine disk and mount point from $CWD...
On Thu, 30 Nov 2006 at 6:23am, Gene Heskett wrote I tried with the FQDN and it works; now I have another message like the one below. Since I am trying to use a FILE-DRIVER to simulate tape behaviour, I have 15 pseudo-tapes. However, I didn't define any disk type in amanda.conf; do you think that might be a problem? If it is, what should I define as a tape "type" for these pseudo-tapes? amrecover test AMRECOVER Version 2.4.4p1. Contacting server on localhost ... 220 amandatux AMANDA index server (2.4.4p1) ready. 200 Access OK Setting restore date to today (2004-04-21) 200 Working date set to 2004-04-21. 200 Config set to test. 200 Dump host set to amandatux.int-evry.fr. Trying disk / ... Trying disk rootfs ... Can't determine disk and mount point from $CWD '/etc/amanda/test' This is a standard amrecover FAQ. 'man amrecover' and look at the sethost and setdisk commands. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Backup-Question
On Thu, 23 Nov 2006 at 5:18pm, Arno Seidel wrote Just a simple question about the backup behavior of amanda. Amanda backs up on every run not the whole file-system, just a part (in percent) depending on the dump-cycle and the size. Does amanda collect every file which changed between the last backup run and the actual run? Or only the changed files from the last part of the full dump? Or every file which changed since the last complete full dump? And how does Amanda recognize if a file has changed ... only through timestamps? Standard answer #1: Amanda doesn't do backups -- amanda schedules them. Bits are actually gotten off the disks by the program you specify in the disklist, generally either GNUtar or a filesystem-specific 'dump' program. The man pages of those utilities will explain in great detail how they determine what to back up and when. In general, a level 1 backup means everything that has changed since the last level 0. A level 2 means everything that has changed since the last level 1. Etc. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: NEC T16A autoloader
On Tue, 14 Nov 2006 at 9:49am, Antonello Piemonte wrote More specifically, I would be interested to know if the autoloader works with the mtx media changer. Or perhaps you could advise another autoloader which works flawlessly with mtx (and amanda). I've no experience with that loader, but, as I've posted to the list several times, I'm using amanda and chg-zd-mtx with 2 generations of Overland Storage loaders with no problems whatsoever. They've always been real solid. I've got a LibraryPro (no longer sold) and a Neo2000 (a bit bigger than the one you mentioned). But I'm fairly certain Overland has a model similar to the NEC you mentioned. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: What user should be selected for amanda on reiserfs?
On Fri, 10 Nov 2006 at 2:14pm, Giuseppe Sacco wrote from what I understand, the amanda user should be able to read all files (when using gtar) or the device node (when using dump). Yes on the latter, but not the former. The system I am testing amanda on is SLES9 with reiserfs file system over LVM devices, so I think I cannot use the normal /sbin/dump command (since it is meant for ext2/3 only). In order to use gnu tar, I think I have to build amanda with --with-user=root, otherwise the backup will not be able to read all files. Is this true? *No*. Backups using tar are kicked off by the setuid root 'runtar' binary. Do *not* configure with user=root. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: setup help with archive type backups
On Tue, 7 Nov 2006 at 5:41pm, Bgs wrote I'd like to ask some experienced Amanda users what approach would be best for us (or whether amanda is the best solution for this at all...). We do archiving and not classic share backups. That is, we assemble some raw material from time to time and archive it to tape. There are always new files, no file changes involved. Consequently we do not erase and rotate tapes either. On the source side, a source directory with date-named directories looks to be a good solution, but how should I set up the tape part? Just set up a config with 'dumpcycle 0', 'runspercycle 1', and a very large tapecycle. That will force amanda to do full dumps every amdump and never recycle tapes. For the source directory, you could re-use the same DLE every time. Or, what I do for my 'archive random bits' config: add a DLE (or DLEs), amdump it/them, and then comment out those DLEs. That way your disklist contains a record of everything you've archived with that config. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
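As an amanda.conf sketch of that archive-style config (the tapecycle value is arbitrary, just "large enough that tapes never come up for reuse"):

```conf
# archive-style config: every run is a full dump, tapes never recycled
dumpcycle 0
runspercycle 1
tapecycle 200   # arbitrary large number; amanda won't reuse a tape
                # until it has cycled through this many
```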
Re: using multiple tape drives in the same jukebox in parallel
On Fri, 3 Nov 2006 at 5:58pm, Brett Marlowe wrote I'm looking for information on using both tape drives in a jukebox in parallel. I don't seem to be able to come up with the right combination of terms to find it in the FAQ. Any pointers I can get would be greatly appreciated. With only one config, you're pretty much limited to a RAIT setup. If that's not what you're after, just run 2 configs simultaneously, one pointed at each tape drive. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: DGUX - Any chance amanda would work with this?
On Thu, 2 Nov 2006 at 3:39pm, Tom Brown wrote I have been tasked with making sure we have a valid backup of a box that I know nothing about! It's in a corner of one of our IDCs, has been up for about 3 years, and no-one knows anything about it. uname -a gives me dgux R4.20MU07 generic AViiON PentiumPro does anyone know what I am dealing with here or have any clue as to whether 2nd hit on giggle for "aviion": http://en.wikipedia.org/wiki/Data_General_AViiON amanda will be able to make me a backup of it? Sounds like it's a form of *nix, so there's *some* hope. I don't envy you trying to get anything even remotely modern compiled on it, though. Good luck. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Amrecover help needed
On Thu, 2 Nov 2006 at 3:40pm, Anne Wilson wrote It's a long time since I had to do this, and I seem to have forgotten how, despite having the man page printed out. [EMAIL PROTECTED] Backup]# amrecover AMRECOVER Version 2.5.0-20060323. Contacting server on borg ... 220 borg AMANDA index server (2.5.0-20060323) ready. 200 Access OK Setting restore date to today (2006-11-02) 200 Working date set to 2006-11-02. Scanning /tmp/dumps... 200 Config set to Daily. 200 Dump host set to borg. Trying disk /Backup ... Trying disk hdb6 ... Can't determine disk and mount point from $CWD '/Backup' What have I forgotten? sethost $NAME_OF_HOST_YOURE_TRYING_TO_RECOVER_FROM and then setdisk $NAME_OF_DLE_YOURE_TRYING_TO_RECOVER_FROM -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: No backup after shutdown
On Tue, 10 Oct 2006 at 8:52pm, Anne Wilson wrote My server and client have been shut down for a week, and today should have been the first backup after restarting. This is the report:

                           Total      Full     Incr.
  Estimate Time (hrs:min)   0:00
  Run Time (hrs:min)        0:22
  Dump Time (hrs:min)       0:21      0:21      0:00
  Output Size (meg)       6134.4    6134.4       0.0
  Original Size (meg)     7697.0    7697.0       0.0
  Avg Compressed Size (%)   75.6      75.6       --
  Filesystems Dumped           3         3         0
  Avg Dump Rate (k/s)     5016.1    5016.1       --
  Tape Time (hrs:min)       0:06      0:06      0:00
  Tape Size (meg)         6134.8    6134.8       0.0
  Tape Used (%)            307.4     307.4       0.0
  Filesystems Taped            3         3         0
  Chunks Taped                10        10         0
  Avg Tp Write Rate (k/s) 16182.3  16182.3       --

USAGE BY TAPE:
  Label     Time   Size     %   Nb  Nc
  Dailys-3  0:02  2000M  100.2   0   5
  Dailys-4  0:02  1726M   86.6   1   5
  Dailys-5  0:01  1293M   64.7   1   0
  Dailys-6  0:01  1116M   55.8   1   0

  borg/Public            0  1198  1116  93.1   4:2  4346.6  0:5  21148.8
  borg/home              0  5207  3726  71.6  13:2  4733.9  4:1  14954.6
  borg/home/anne/Photos  0  1293  1293    --   3:0  7214.6  1:1  16751.2

(brought to you by Amanda version 2.5.0-20060323) *** What went wrong, and how do I get out of this situation? Err, I see a successful backup. What exactly do you think went wrong? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: 2.5.1 tape spanning not actually working for me.
On Sun, 1 Oct 2006 at 4:09pm, Steve Newcomb wrote I'm using chg-multi with two identical Exabyte drives. The capacity of each tape is slightly less than 5 Gb. I have a DLE that, at level 0, creates a dump of 22 Gb. *snip* *** THE DUMPS DID NOT FINISH PROPERLY! The dumps were flushed to tapes CH0008, CH0009. *** A TAPE ERROR OCCURRED: [No more writable valid tape found]. Tape spanning occurs within an amdump/amflush. So tapelength*runtapes must be > your largest DLE. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
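With the numbers from the post above (~5 GB tapes, a 22 GB level 0), the tapelength*runtapes constraint is easy to sanity-check; a sketch, all figures approximate and in MB:

```shell
# Check tapelength * runtapes against the largest DLE, using the
# rough figures from the thread above (values in MB).
tapelength=5000       # ~5 GB per tape
runtapes=2            # example: tapes amanda may write per run
largest_dle=22000     # ~22 GB level 0 dump
capacity=$((tapelength * runtapes))
needed=$(( (largest_dle + tapelength - 1) / tapelength ))
if [ "$capacity" -gt "$largest_dle" ]; then
  echo "ok: $capacity MB of tape for a $largest_dle MB DLE"
else
  echo "too small: set runtapes to at least $needed"
fi
```

With two ~5 GB tapes per run, the check fails and suggests runtapes 5 for a 22 GB DLE.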
Re: backup/amflush oddity
On Wed, 27 Sep 2006 at 9:06am, Jeff Portwine wrote However, I've noticed that every time Amanda doesn't find a writable tape (usually due to the tape not being changed that day) there is very little data written to the holding disk and the amflush is very very small. The day after this, the dump is much larger than usual, compensating for this. When we change the tapes every day, each tape is usually around 50% full. On a day when we forget to change the tape or are unable to change the tape, the amflush results in about 0.5% tape usage, and the day following the amflush, tape usage is usually 85-90%. Is this how it's supposed to behave? How can I fix it to do a normal dump to the holding disk on days when we either can't change the tape or forget to? Read up on the 'reserve' parameter in amanda.conf, its default value, and its effects on degraded-mode dumps. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
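The behaviour described is amanda's degraded mode: with the default 'reserve 100', the entire holding disk is reserved for incrementals when no writable tape is found, so only tiny dumps land there. A sketch of the relevant knob (the value here is an example, not a recommendation):

```
# amanda.conf: percentage of the holding disk reserved for incrementals
# in degraded mode. Lowering it lets full dumps use the holding disk
# even when no writable tape is available.
reserve 30
```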
Re: Running out of tape when half full???
On Fri, 22 Sep 2006 at 9:03am, Matthew Claridge wrote Hope someone can shed some light on this. Suddenly my backups have started failing due to the tape running out of space: *** A TAPE ERROR OCCURRED: [[writing file: No space left on device]]. FAILURE AND STRANGE DUMP SUMMARY: server1.rwa /var lev 0 STRANGE server1.rwa /usr/local lev 0 FAILED [out of tape] server1.rwa /usr/local lev 0 FAILED ["data write: Connection reset by peer"] server1.rwa /usr/local lev 0 FAILED [dump to tape failed] NOTES: taper: tape DailySet202 kb 34175264 fm 16 writing file: No space left on device However, I'm using an 80GB (uncompressed) VXA tape:
tapetype VXA2-V23 # what kind of tape it is (see tapetypes below)
define tapetype VXA2-V23 {
    comment "Exabyte VXA2 tape drive and V23 tapes"
    length 76209 mbytes
    filemark 3908 kbytes
    speed 3956 kps
}
A few options: 1) The drive is dirty and needs to be cleaned. 2) The drive somehow got put into hardware compression mode, and you're using software compression. 3) You're dumping over the network (no holding disk), the network connection is too slow, and thus the tape is shoeshining (which severely reduces its capacity). 4) Something else I'm not thinking of b/c I ain't had my coffee yet. Look in the system log (/var/log/messages on Linux, e.g.) and you should see some messages from the tape driver with more info on the tape error. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
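To check option 2 (hardware compression) without guessing, the 'tapeinfo' tool that ships with the mtx package can report the drive's compression state; the device path here is an example, and the exact field name in the output may vary by drive:

```
# Requires the mtx package; /dev/sg1 is whichever sg node maps to the drive.
tapeinfo -f /dev/sg1 | grep -i comp
# Look for a line like 'DataCompEnabled: yes' -- if software compression
# is also on, already-compressed data can expand and eat tape capacity.
```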
Re: Strange size estimate
On Wed, 20 Sep 2006 at 12:16am, Aykut Demirkol wrote I have a problem that I couldn't find an answer to in the mail archive. My amanda 2.5.1 client (with gtar 1.15.1) gives
sendsize[40863]: time 616.907: Total bytes written: 1107597486080 (1.1TiB, 1.7GiB/s)
sendsize[40863]: time 616.908: .
sendsize[40863]: estimate time for / level 0: 616.904
sendsize[40863]: estimate size for / level 0: 1081638170 KB
sendsize[40863]: time 616.908: waiting for runtar "/" child
sendsize[40863]: time 616.933: after runtar / wait
gnutar_calc_estimates: warning - seek failed: Illegal seek
sendsize[40863]: time 617.053: done with amname / dirname / spindle -1
sendsize[40861]: time 617.087: child 40863 terminated normally
sendsize: time 617.087: pid 40861 finish time Tue Sep 19 21:33:44 2006
Well, the strange thing is my client has a 30GB disk but gtar calculates 1.1TiB of disk space. What OS/distro are you running, and on what architecture? Is /var in the / partition? The usual cause for a wild estimate like that is a ginormous sparse file, such as /var/log/lastlog sometimes is on 64bit Linux systems. But I thought the newer tars handled that properly... -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
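One way to spot such a sparse file is to compare its apparent size (which is roughly what tar's estimate sees) with the blocks actually allocated on disk. A self-contained sketch using a temporary file; on a real system the suspects would be paths like /var/log/lastlog:

```shell
# Create a deliberately sparse 1 GiB file and compare apparent size
# with allocated bytes -- a huge gap marks the file as sparse.
f=$(mktemp)
truncate -s 1G "$f"
apparent=$(stat -c %s "$f")              # apparent size in bytes
actual=$(( $(stat -c %b "$f") * 512 ))   # bytes actually allocated
echo "apparent=$apparent allocated=$actual"
rm -f "$f"
```

GNU tar's --sparse option makes it store (and count) only the allocated data in such files.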
Re: Directory too large for single tape.
On Wed, 13 Sep 2006 at 6:59am, Stephen Carville wrote Ian Turner wrote: If you upgrade to Amanda 2.5.x, then you can instruct Amanda to split dumps across tapes; so you can make your partitions as large as you like, and just use as many tapes as are required. AFAICT, this doesn't really split the tar but tries to create separate tars of subdirectories. Unfortunately RMAN puts it all in one directory (There may be a way to split it -- I'm checking into that too). You're incorrect -- as of 2.5.x amanda can span a single DLE across multiple tapes. Is it possible to get amanda to follow symlinks? Every file name ends in a digit, so it is trivial to do something like: As other folks have mentioned (I believe), you can split a single directory up with include and/or exclude directives in the DLEs. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
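Splitting one large directory across DLEs with include directives might look like the sketch below; the host name, paths, digit split, and the gnutar-based 'oracle-rman' dumptype are all hypothetical:

```
# disklist sketch: two DLEs over the same directory, split by the
# trailing digit in the file names (per the poster's naming scheme).
dbhost /backups/rman-0to4 /backups/rman {
    oracle-rman
    include "./*[0-4]"
} 1
dbhost /backups/rman-5to9 /backups/rman {
    oracle-rman
    include "./*[5-9]"
} 2
```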
Re: Prefere a dump from the disklist and do several dumpes to holding disk?
On Mon, 11 Sep 2006 at 11:41am, Jon LaBadie wrote On Mon, Sep 11, 2006 at 11:28:29AM -0400, Joshua Baker-LePain wrote: On Mon, 11 Sep 2006 at 11:23am, Jon LaBadie wrote That tape write rate is too slow. Native speed for an LTO3 drive is rated at ~80MB/s. I think it is really an LTO2. The OP said 400GB, but was probably referring to marketing capacity. Well that makes sense. Even in that case, though, 30MB/s is still too slow, especially given that he's using hardware compression. Really, my tapechart (from Fuji), says Ultrium 2 is 30MB/s for HP drive and 35 for an IBM drive. *sigh* That's what I get for not checking the specs. I recalled (incorrectly) that LTO had been scaling speed with capacity the whole time. You're right -- rated native speed for LTO2 is 30MB/s. I also suspect his data is already pretty random, possibly compressed by whatever moved it to his server. Note only 200GB of amanda data filled the tape. Well, a bit more, but not enough to really affect the speed. So it looks like the problem really is dumping to holding disk, which shouldn't be necessary if this is all on the same server. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Prefere a dump from the disklist and do several dumpes to holding disk?
On Mon, 11 Sep 2006 at 11:23am, Jon LaBadie wrote On Mon, Sep 11, 2006 at 11:08:56AM -0400, Joshua Baker-LePain wrote: Tape Time (hrs:min) 3:35 3:35 0:00 Tape Size (meg) 374733.9 374733.9 0.0 Tape Used (%) 194.1 194.1 0.0 Filesystems Taped 12 12 0 Avg Tp Write Rate (k/s) 29707.6 29707.6 -- That tape write rate is too slow. Native speed for an LTO3 drive is rated at ~80MB/s. I think it is really an LTO2. The OP said 400GB, but was probably referring to marketing capacity. Well that makes sense. Even in that case, though, 30MB/s is still too slow, especially given that he's using hardware compression. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Prefere a dump from the disklist and do several dumpes to holding disk?
On Mon, 11 Sep 2006 at 4:27pm, Dominik Schips wrote 1. How can I tell AMANDA to start with the big directories first and then the small directories? As I have seen, AMANDA always starts with the smallest and then the next biggest and so on. 'man amanda.conf' and look for the 'dumporder' and 'taperalgo' flags. 2. I use a holding disk for the configuration. But AMANDA always does only one dump at a time. How can I tell AMANDA to dump more directories at the same time to the holding disk so that the LTO device can back up the data faster and doesn't have to wait for the dumps to the holding disk? I'm a bit confused about your setup. As I understand it, all the client data is rsynced to one server, and you're backing up that server. But is that server also the amanda server to which the tape drive is attached? If that's the case, then a holding disk is a waste of time. You're not adding any parallelism and you're adding the time it takes to copy the data from one directory to another. Tape Time (hrs:min) 3:35 3:35 0:00 Tape Size (meg) 374733.9 374733.9 0.0 Tape Used (%) 194.1 194.1 0.0 Filesystems Taped 12 12 0 Avg Tp Write Rate (k/s) 29707.6 29707.6 -- That tape write rate is too slow. Native speed for an LTO3 drive is rated at ~80MB/s. It can throttle back to about half that without shoe-shining, but any slower than that and you're putting unnecessary wear and tear on your drive and your tapes as well as losing tape capacity. USAGE BY TAPE: Label Time Size % Nb Daily027 1:59 224033.5 116.1 11 Daily028 1:36 150700.4 78.1 1 All that really should have fit on one tape. It probably didn't b/c of the shoeshining mentioned above. So, step 1 is to optimize your backup server. First, you probably want to recompile amanda with the --with-maxtapeblocksize option set to something larger than amanda's default of 64KB. This lets you use the 'blocksize' keyword in your tapetype -- I use 2048 KB for my LTO3 drive, and my tape write speed averages about 60MB/s. 
You also need to look at your server's disk I/O performance. Feeding a tape drive at 60MB/s is non-trivial, especially if the disk is doing *anything* else at the same time. Use tools like bonnie++ and tiobench to benchmark your disks. If they can't sustain read speeds of at least 50MB/s, look at upgrading. My server for my LTO3 library has a hardware RAID0 across 4 7200RPM SATA disks, and it can only keep up with the one tape drive. I'm upgrading the server simply so that I can use both tape drives in the library simultaneously. Good luck. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
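The rebuild-and-blocksize advice above, sketched as config; the 2048 KB figure is the poster's own, while the tapetype name and length are illustrative:

```
# Rebuild amanda first with a larger maximum block size:
#   ./configure --with-maxtapeblocksize=2048 ...
# then reference it in the tapetype in amanda.conf:
define tapetype LTO3-bigblock {
    comment "LTO3 with 2 MiB blocks"
    length 400 gbytes
    blocksize 2048 kbytes
}
```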
Re: amtapetype problems
On Fri, 8 Sep 2006 at 2:39pm, Nick Jones wrote Please keep responses on the list. Also, top posting and not trimming your posts are generally frowned upon. Thanks for pointing that out. I've had so many problems with this, I just assumed this to be another strange error and did not even notice the medium not present message duh. Also, I think load is not the right word to use since mt cannot load a tape, it must be done manually or by a robot. Precisely. Use 'mtx' to have the robot load the tape. After the tape drive goes through its (automatic) loading cycle, the tape will be ready for use. 'mt status' can confirm that. Furthermore, at least with my Overland robot, 'mtx unload' automatically does a 'mt offline' to eject the tape. So running 'mt' manually is pretty rare, really. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amtapetype problems
On Fri, 8 Sep 2006 at 1:31pm, Nick Jones wrote
[EMAIL PROTECTED] ~]# mt -f /dev/nst0 status
SCSI 2 tape drive: File number=-1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x0 (default).
Soft error count since last status=0
General status bits on (5): DR_OPEN IM_REP_EN
[EMAIL PROTECTED] ~]# mt -f /dev/nst0 offline
/dev/nst0: Input/output error
[EMAIL PROTECTED] ~]# mt -f /dev/nst0 load
/dev/nst0: Input/output error
[EMAIL PROTECTED] ~]# mt -f /dev/st0 status
SCSI 2 tape drive: File number=-1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x0 (default).
Soft error count since last status=0
General status bits on (5): DR_OPEN IM_REP_EN
[EMAIL PROTECTED] ~]# mt -f /dev/st0 load
/dev/st0: Input/output error
Here's what the log says:
Sep 8 13:30:50 localhost kernel: st0: Error with sense data: <6>st0: Current: sense key: Not Ready
Sep 8 13:30:50 localhost kernel: Additional sense: Medium not present
Any ideas? Err, there's no tape in the drive? Read through 'man mtx' and 'man mt', and understand them. In brief, you use 'mtx' to manipulate the loader's robotics -- e.g. telling it to move the tape in slot 10 into drive 1. You use 'mt' to talk to the tape drive, e.g. to get the status. 'mtx' talks to the generic SCSI device associated with the loader, while 'mt' talks to the SCSI tape device associated with, well, the tape device. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
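The mtx/mt division of labour described above, as a typical session sketch; the device paths and slot/drive numbers are examples for this hardware, not universal:

```
# Typical loader session (device paths and slot numbers are examples):
mtx -f /dev/sg2 status         # ask the changer what's where
mtx -f /dev/sg2 load 10 0      # robot: move the tape in slot 10 into drive 0
mt  -f /dev/nst0 status        # drive: confirm the tape finished loading
mtx -f /dev/sg2 unload 10 0    # robot: put it back (may imply 'mt offline')
```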
Re: Limit amanda level.
On Mon, 28 Aug 2006 at 8:50am, McGraw, Robert P. wrote Is there a way to limit the highest backup level? Presently on some backups it will go to level 4. I would like to only have backup levels 0-2. I have perused the amanda.conf file but there is nothing obvious that I can see. Look at the various bump* directives -- those control when a backup is "bumped" up to the next level. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
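The bump* knobs live in amanda.conf; a sketch with illustrative values. Note there is no direct "maximum level" setting, but a large bumpmult makes each further bump progressively harder to justify, which in practice pins backups at low levels:

```
# amanda.conf bump controls (values illustrative):
bumpsize 20 mbytes   # minimum savings before bumping level 1 -> 2
bumpdays 1           # days at a level before a bump is considered
bumpmult 100         # each further bump must save bumpmult times more,
                     # so levels above 2 effectively never happen
```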
Re: RE Tuning for performance
On Thu, 24 Aug 2006 at 10:44am, Joshua Baker-LePain wrote On Thu, 24 Aug 2006 at 4:36pm, Cyrille Bollu wrote [EMAIL PROTECTED] bonnie++-1.03a]# ./bonnie++ -u 0 (snip)
Version 1.03        --Sequential Output-- --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
srv-fhq-bkp.fed 16G 36522  95 120980 60 58116  21 17956  49 56374 27 431.1  0
I see bad sequential input with getc but good (better than what I get with Amanda) block sequential input. The getc/putc performance is a measure of glibc, not your disks. The only numbers I'm really interested in are the block output (i.e. reads) and block input (i.e. writes) to the array. You can read from the array at 100MB/s, so that is *not* what is limiting your bandwidth to the tape drive (unless the array is otherwise busy when you're trying to run backups). Have you tried increasing amanda's blocksize and/or testing with 'tar -b'? Well that's what I get for shooting my mouth off without checking myself. Thanks to Jon LaBadie for pointing out that I got my meanings mixed above. Output=>writes (good speed), input=>reads, and yours is not that hot and can barely keep up with an LTO3 drive even when otherwise idle. So, play with your RAID controller settings and see if you can't get better read speeds. I'd throw in a tiobench test as well, to make sure you're not optimizing against 1 benchmark. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: RE Tuning for performance
On Thu, 24 Aug 2006 at 4:36pm, Cyrille Bollu wrote [EMAIL PROTECTED] bonnie++-1.03a]# ./bonnie++ -u 0 (snip)
Version 1.03        --Sequential Output-- --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
srv-fhq-bkp.fed 16G 36522  95 120980 60 58116  21 17956  49 56374 27 431.1  0
I see bad sequential input with getc but good (better than what I get with Amanda) block sequential input. The getc/putc performance is a measure of glibc, not your disks. The only numbers I'm really interested in are the block output (i.e. reads) and block input (i.e. writes) to the array. You can read from the array at 100MB/s, so that is *not* what is limiting your bandwidth to the tape drive (unless the array is otherwise busy when you're trying to run backups). Have you tried increasing amanda's blocksize and/or testing with 'tar -b'? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Backup plan and big filesystems
On Wed, 23 Aug 2006 at 6:23am, Joshua Baker-LePain wrote On Wed, 23 Aug 2006 at 1:53am, Jon LaBadie wrote Others have a cost consideration for how many tapes physically are offsited (can offsite be a verb? :) Verbing weirds language. ;) And lest folks think I'm trying to take credit for that statement, I'll state that the attribution is the great philosopher Calvin. No, not that one, the other one. *sigh* Never post before your morning coffee... -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Backup plan and big filesystems
On Wed, 23 Aug 2006 at 1:53am, Jon LaBadie wrote Others have a cost consideration for how many tapes physically are offsited (can offsite be a verb? :) Verbing weirds language. ;) -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: RE Tuning for performance
On Tue, 22 Aug 2006 at 9:36am, Cyrille Bollu wrote [EMAIL PROTECTED] wrote on 21/08/2006 18:10:56: What RAID controller? Have you benchmarked the array itself with something like bonnie++ or tiobench? A Dell PowerEdge Expandable RAID Controller 4e/Di. Yes, I benchmarked it using iozone but could not get meaningful results. Grab bonnie++ from <http://www.coker.com.au/bonnie++/> and throw it at the array. It's very easy to use and gives some quite basic numbers. The ones you're interested in are sequential read and sequential write. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: RE Tuning for performance
On Mon, 21 Aug 2006 at 5:28pm, Cyrille Bollu wrote For the record my BOT system is made of: 1) 1 Dell Poweredge 2850 with 1 1.4TB RAID5 array made of 6 U320 SCSI HD What RAID controller? Have you benchmarked the array itself with something like bonnie++ or tiobench? 2) 1 Dell Powervault 110 LTO3 tape drive 3) Redhat ES 3.3 4) Amanda (amanda-2.4.4p1-0.3E) Is your tape drive on its own controller? 5) Use the holding disk feature. As you suspected, you don't want to do that in this case. 6) Recompile Amanda and raise the tape record size You *do* want to do this. When I first got my LTO3 library, I did some performance tests with tar and varying blocksizes. I found that bigger generally was better. 'tar -b 64' (which equals amanda's default 32KiB blocksize) got me 41MiB/s, but 'tar -b 4096' got me 60MiB/s. So I now use a 2MiB blocksize. 7) increase the server's TCP/IP MTU? This will have no effect. All traffic is local, so it'll be going over lo and not any eth device. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
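The blocking-factor effect can be demonstrated even without a tape drive: tar pads its output to a whole number of records, and on a real drive bigger records mean fewer, larger writes. A self-contained sketch (it only shows the record padding; the speed difference quoted above needs real hardware to measure):

```shell
# -b N gives records of N * 512 bytes: -b 64 = 32 KiB (amanda's old
# default), -b 4096 = 2 MiB. Piping through cat defeats GNU tar's
# /dev/null shortcut, which would otherwise skip reading the data.
src=$(mktemp -d)
dd if=/dev/zero of="$src/data" bs=1M count=64 2>/dev/null
size64=$(tar -b 64 -cf - -C "$src" data | cat | wc -c)
size4096=$(tar -b 4096 -cf - -C "$src" data | cat | wc -c)
echo "32KiB records: $size64 bytes; 2MiB records: $size4096 bytes"
rm -rf "$src"
```

Both archive sizes come out as exact multiples of their record size, which is what the tape drive sees per write.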
Re: amtapetype problems
On Fri, 18 Aug 2006 at 2:45pm, Nick Jones wrote I can use mtx to get tape status. I have not tried to load or unload a tape. Basically, assuming everything works and the problem lies with amanda, how do I use amtapetype? What device do I use? Last login: Fri Aug 18 13:40:00 2006 from adams.gige.uiowa.edu
[EMAIL PROTECTED] ~]# mtx -f /dev/sg1 inquiry
Product Type: Tape Drive
Vendor ID: 'HP '
Product ID: 'Ultrium 3-SCSI '
Revision: 'G24H'
Attached Changer: No
[EMAIL PROTECTED] ~]# mtx -f /dev/sg2 inquiry
Product Type: Medium Changer
Vendor ID: 'OVERLAND'
Product ID: 'LXB '
Revision: '0107'
Attached Changer: No
Trying amtapetype with either of these devices fails. Any hints? I can try the other things you suggested btw, such as tar or load/unload a tape, and will let you know if they don't work. sg is the "generic scsi" device -- use it to talk to the changer. To talk to the tape drive, use nst0, the "non-rewinding scsi tape" device. 'mt -f /dev/nst0 offline', e.g., will eject a tape. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amdump problem
On Thu, 17 Aug 2006 at 1:30pm, Natalia García Nebot wrote Hi! Well, I have configured amanda only on my server host, which is connected to a tape device. I have only one tape yet. I have put this line in my disklist: aroprod /home/natalia always-full I execute this command: su amanda -c "amdump DiariaPrueba" ERROR planner Request to aroprod timed out. What am I doing wrong? Not using 'amcheck' to debug this, for a start, I'm guessing... -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amlabel problem
On Thu, 17 Aug 2006 at 12:15pm, Natalia García Nebot wrote Hi! I have installed amanda on my server host. The server is connected to a tape device. I have created a configuration named DiariaPrueba. I have configured all parameters in amanda.conf and I want to label only one tape to run tests first. In my amanda.conf I have configured the labelstr parameter as labelstr "^DiariaPruebaTape[0-9][0-9]*$" When I try to label my first tape, amlabel tells me: amlabel: could not load tapelist "/etc/amanda/DiariaPrueba/tapelist" su amanda -c "touch /etc/amanda/DiariaPrueba/tapelist" -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Defining backup levels
On Wed, 16 Aug 2006 at 6:17pm, Anne Wilson wrote What exactly is meant by backup levels 1 & 2? For instance, when a backup bumps to level 2, does that mean that the last level1 backup has all the changes since level0, and the level 2 backups will have everything since the bump date? From 'man dump': -level# The dump level (any integer). A level 0, full backup, guaran- tees the entire file system is copied (but see also the -h option below). A level number above 0, incremental backup, tells dump to copy all files new or modified since the last dump of a lower level. The default level is 9. Historically only levels 0 to 9 were usable in dump, this version is able to understand any integer as a dump level. IOW, a level 2 only grabs stuff that has changed since the last level 1. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Dell PowerVault 124T issues
On Sat, 12 Aug 2006 at 4:00pm, Iulian Topliceanu wrote Hi, Joshua Baker-LePain wrote: Well, sg2 is the tape drive itself and sg7, like I said, is the backplane in the server. I'm guessing that the changer is on the same SCSI ID as the tape drive but on another LUN. Add 'options scsi_mod max_luns=255' to /etc/modprobe.conf, remake the initrd, and reboot. Let us know if that helps. I've modified /etc/modules.conf (since I'm still running Amanda on a RH 9), adding the line. That's actually the wrong syntax for RH9. That syntax is for 2.6 based distros. I believe it's 'max_scsi_luns' for 2.4, but you'd have to check yourself. By 'failing to communicate' I mean things like this:
[EMAIL PROTECTED] root]# mtx -f /dev/sg2 status
mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
Well, mtx is designed to talk to a changer, not a tape drive. Get your OS talking to your changer, and point mtx at that. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: dumpcycle
On Fri, 11 Aug 2006 at 4:09pm, Joe wrote Having read the docs about amanda.conf... I have 7 tapes. Every day I want a full dump on the tape. So... dumpcycle 0 days runspercycle 1 tapecycle 7 tapes Yep -- you got it. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: Dell PowerVault 124T issues
On Tue, 8 Aug 2006 at 10:39pm, Iulian Topliceanu wrote First of all:
[EMAIL PROTECTED] rz]# mtx -f /dev/sg2 inquiry
Product Type: Tape Drive
Vendor ID: 'IBM '
Product ID: 'ULTRIUM-TD3 '
Revision: '5BG2'
Attached Changer: No
Why is the vendor IBM? Shouldn't it be Dell? Dell may make the library, but IBM made the drives. The LTO3 drives in my Overland library are from HP.
[EMAIL PROTECTED] rz]# mtx -f /dev/sg7 inquiry
Product Type: Processor
Vendor ID: 'DELL'
Product ID: '1x4 U2W SCSI BP '
Revision: '1.16'
Attached Changer: No
That's the SCSI backplane in your server -- it has nothing to do with the library. From the kernel log:
  Vendor: ADIC      Model: FastStor DLT      Rev: D118  Type: Medium Changer     ANSI SCSI revision: 02
  Vendor: BNCHMARK  Model: DLT1              Rev: 391B  Type: Sequential-Access  ANSI SCSI revision: 02
  Vendor: IBM       Model: ULTRIUM-TD3       Rev: 5BG2  Type: Sequential-Access  ANSI SCSI revision: 03
  Vendor: QUANTUM   Model: ATLAS10K2-TY367J  Rev: DA40  Type: Direct-Access      ANSI SCSI revision: 03
  Vendor: QUANTUM   Model: ATLAS10K2-TY367J  Rev: DA40  Type: Direct-Access      ANSI SCSI revision: 03
  Vendor: QUANTUM   Model: ATLAS10K2-TY367J  Rev: DA40  Type: Direct-Access      ANSI SCSI revision: 03
  Vendor: QUANTUM   Model: ATLAS10K2-TY367J  Rev: DA40  Type: Direct-Access      ANSI SCSI revision: 03
  Vendor: DELL      Model: 1x4 U2W SCSI BP   Rev: 1.16  Type: Processor          ANSI SCSI revision: 02
  scsi2:A:0:0: Tagged Queuing enabled. Depth 32
  scsi2:A:1:0: Tagged Queuing enabled. Depth 32
  scsi2:A:2:0: Tagged Queuing enabled. Depth 32
  scsi2:A:3:0: Tagged Queuing enabled. Depth 32
  Attached scsi disk sda at scsi2, channel 0, id 0, lun 0
  Attached scsi disk sdb at scsi2, channel 0, id 1, lun 0
  Attached scsi disk sdc at scsi2, channel 0, id 2, lun 0
  Attached scsi disk sdd at scsi2, channel 0, id 3, lun 0
  (scsi0:A:3): 20.000MB/s transfers (10.000MHz, offset 15, 16bit)
  st0: Block limits 2 - 16777214 bytes.
  Attached scsi generic sg0 at scsi0, channel 0, id 1, lun 0, type 8
  Attached scsi generic sg7 at scsi2, channel 0, id 6, lun 0, type 3
  (scsi1:A:6): 160.000MB/s transfers (80.000MHz DT, offset 127, 16bit)
I'm a bit confused about all these devices. I can't communicate with my PV 124T on either /dev/sg2 or /dev/sg7. Well, sg2 is the tape drive itself and sg7, like I said, is the backplane in the server. I'm guessing that the changer is on the same SCSI ID as the tape drive but on another LUN. Add 'options scsi_mod max_luns=255' to /etc/modprobe.conf, remake the initrd, and reboot. Let us know if that helps. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
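The LUN-probing fix, for both kernel generations; as noted later in this thread, the 2.4 option name is a recollection and should be verified against your kernel docs:

```
# 2.6-based distros -- /etc/modprobe.conf, then remake the initrd and reboot:
options scsi_mod max_luns=255
# 2.4-based distros such as RH9 -- /etc/modules.conf (option name unverified):
options scsi_mod max_scsi_luns=255
```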
Re: dump larger than tape, 30864683 KB, but cannot incremental dump new disk
On Tue, 8 Aug 2006 at 11:51am, mario wrote That said, your tapelist shows 4350 MB, so why make chunks of 4500MB? You may well be facing the same problem again... I thought amanda could split such big backup archives up into several parts. I doubt that anyone would back up a 200GB system onto one single tape. You're looking for the tape spanning feature, which is explained in the docs. There's also been a lot of discussion recently regarding backups to DVDs -- look through the list archives. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
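In 2.5.x, tape spanning is enabled per dumptype; a sketch with illustrative names and sizes:

```
# amanda.conf dumptype sketch for tape spanning (names/sizes illustrative):
define dumptype comp-span {
    global
    compress client fast
    tape_splitsize 2 gbytes       # chunk size written per tape part
    split_diskbuffer "/holding"   # scratch space for building parts
    fallback_splitsize 64 mbytes  # used if the disk buffer is unavailable
}
```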
Re: amanda config problems
On Wed, 2 Aug 2006 at 5:03pm, Jeff Portwine wrote I checked the xinetd.d entry on the client:
service amanda
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = backup
    group       = backup
    groups      = yes
    server      = /usr/local/libexec/amandad
}
I checked the logs: /usr/local/var/amanda/DailySet1/log/amdump.1:planner:USE_AMANDAHOSTS CLIENT_LOGIN="backup" FORCE_USERID HAVE_GZIP So it seems to me everywhere I look that it should be running with user "backup", but it still tries to run as localuser 'root'. I just don't understand it. What does the /etc/passwd entry for the backup user on the client look like? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University