Testing tapes before use / bad tape
Hi!

I'm using Amanda 2.4.4p1 in combination with a Quantum SDLT 320 tape drive. This week I received some replacement tapes for broken ones. Using the first tape I got some kind of short write while using amflush:

    [...]
    *** A TAPE ERROR OCCURRED: [[writing file: Input/output error]].
    [...]
    xxx /home lev 6 FAILED [out of tape]
    [...]
    taper: tape yyy3 kb 5972448 fm 14 writing file: Input/output error

The system wrote about 3 GB onto a 160/320 GB tape. The amflush succeeded with one of the older tapes (I flushed the data to the next tape, yyy4).

Before running into this problem again, how can I test a tape before use? I thought about a second config doing full backups and using amverify to check the tape after the backup. But that doesn't exercise the complete tape, so there could be an error near the end of the tape and I wouldn't recognize it. Is there a test tool available that runs under Linux/Unix? It would be nice to get a printable report ("tape error at position xxx GB" or similar) for warranty reasons.

My second problem is how to handle the short write. I have to send the tape in, but there are 3-4 GB of data on it. Without this data, my backup is inconsistent. The only possibility I see at the moment is doing a full backup of the partitions that have data on this tape.

Regards,
Martin Öhler
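No specific tool is named in the thread, but the core of a destructive fill test is straightforward: stream fixed-size blocks at the device until the driver returns an I/O error or end-of-tape, and report the byte offset reached. A minimal sketch of that logic, assuming a Linux non-rewinding tape device such as `/dev/nst0` (the device path and block size here are assumptions, not anything Amanda prescribes):

```python
def fill_until_error(write_block, block_size=32 * 1024, max_blocks=None):
    """Write fixed-size blocks via write_block() until it raises OSError.

    Returns the number of bytes successfully written, i.e. the position
    at which the medium failed or hit EOT -- the figure you would quote
    in a warranty report.
    """
    block = b"\xa5" * block_size          # recognizable test pattern
    written = 0
    count = 0
    while max_blocks is None or count < max_blocks:
        try:
            write_block(block)
        except OSError:
            break                          # I/O error or end of tape
        written += block_size
        count += 1
    return written

# Usage on a real drive (DESTRUCTIVE -- this overwrites the tape; the
# device path is an assumption):
#
#   import os
#   fd = os.open("/dev/nst0", os.O_WRONLY)
#   pos = fill_until_error(lambda b: os.write(fd, b))
#   print(f"tape accepted {pos / 2**30:.2f} GB before the first error")
#   os.close(fd)
```

Note this only gives the position of the first failure; it says nothing about recoverability of data already on the tape.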
Re: Testing tapes before use / bad tape
On Sunday 23 November 2003 04:28, Martin Oehler wrote:
> Before having this problem again, how can I test a tape before use?
> [...] Is there a test tool available running under Linux/Unix?

There is amtapetype, which will destructively write the tape till it hits EOT, and will tell you the size it found. See the man page for running options to help speed it up, as it's quite slow, doing two passes.

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M [EMAIL PROTECTED] 512M
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.
Running amdump leads to high CPU load on Linux server
I've recently begun to have trouble with the Linux system that is my Amanda server: when amdump runs, the load spikes to between 4.00 and 6.00, and the system becomes nearly unresponsive for the duration of the backup. The server is backing up several local partitions, and also two partitions on remote servers.

I've tried starting amdump with nice and setting it to a low priority, but when gtar and gzip are started by Amanda, the priority setting is somehow lost. The server isn't even trying to back up multiple partitions in parallel, so I'm at a loss as to how this is happening, especially on a fairly fast server (dual Athlon MP 2000).

Any suggestions? If you need to see the Amanda logs or anything else that would help, just ask.

-Kurt Raschke
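One clue on the "priority setting is somehow lost" point: on POSIX systems the nice value is inherited across fork()/exec(), so a child started directly by a niced amdump keeps the low priority. If gtar and gzip come up at normal priority, they are most likely being spawned by a separate daemon (amandad) that never saw the nice setting, rather than by the niced amdump itself. The inheritance behaviour is easy to confirm with plain POSIX calls, nothing Amanda-specific:

```python
import os
import subprocess
import sys

# Raise our own niceness by 5 (lower priority); os.nice returns the
# process's new nice value.
parent_nice = os.nice(5)

# A child spawned from here inherits that niceness across fork()/exec(),
# just as gtar/gzip would if amdump were their parent.
child_nice = int(subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.nice(0))"]
))

print(f"parent nice={parent_nice}, child nice={child_nice}")
```

If the child reports the same value as the parent, inheritance works as expected, and the place to apply nice is whatever process actually forks the dumpers.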
Re: Running amdump leads to high CPU load on Linux server
On Sun, Nov 23, 2003 at 07:46:32PM -0500, Kurt Raschke wrote:
> I've tried starting amdump with nice and setting it to a low priority,
> but when gtar and gzip are started by Amanda, the priority setting is
> somehow lost. [...] Any suggestions?

As you are using gzip, is it with the best or the fast option? The latter uses MANY fewer CPU cycles.

-- 
Jon H. LaBadie [EMAIL PROTECTED]
JG Computing
4455 Province Line Road    (609) 252-0159
Princeton, NJ 08540-4322   (609) 683-7220 (fax)
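For reference, "fast" corresponds to gzip's lowest compression level (-1) and "best" to its highest (-9); level 9 can cost several times the CPU time for a modest size gain. The trade-off is easy to measure with Python's zlib module, which uses the same compression levels as gzip (the sample data below is an arbitrary compressible stand-in for a dump stream, not real backup output):

```python
import time
import zlib

# Arbitrary compressible stand-in for dump output (an assumption).
data = b"config lines, logs, and other typical backup payload\n" * 200000

def measure(level):
    """Compress the sample at the given level; return (size, seconds)."""
    start = time.perf_counter()
    out = zlib.compress(data, level)
    return len(out), time.perf_counter() - start

fast_size, fast_time = measure(1)   # gzip --fast
best_size, best_time = measure(9)   # gzip --best

print(f"level 1: {fast_size} bytes in {fast_time:.3f}s")
print(f"level 9: {best_size} bytes in {best_time:.3f}s")
```

On real, less repetitive data the CPU gap between the two levels is typically much larger than the size difference, which is why "fast" is the usual choice on a loaded backup server.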
Re: Running amdump leads to high CPU load on Linux server
On Sun, Nov 23, 2003 at 08:28:48PM -0500, Jon LaBadie wrote:
> As you are using gzip, is it with the best or the fast option? The
> latter uses MANY fewer CPU cycles.

Well, that was one of the things I checked, and it's running with fast.

-Kurt
Re: Running amdump leads to high CPU load on Linux server
On Sunday 23 November 2003 20:33, Kurt Raschke wrote:
> Well, that was one of the things I checked, and it's running with fast.

FWIW, for the DLEs that use gzip here, that was true in the later 2.4 kernels. ATM I'm running 2.6.0-test9-mm5 with the deadline scheduler enabled in this boot, and the machine remains very responsive except for the estimate phase, which drags a bit.

-- 
Cheers, Gene