Re: LTO Tape Bar Codes...
On 3/20/07, Byarlay, Wayne A. <[EMAIL PROTECTED]> wrote:
> As it turns out, our office's label maker can do bar codes that are
> readable by my library, so I just had to choose type "39" bar codes,
> then make them whatever I felt like. I went with "A001" through "A029".

As a flexible alternative, quite a few people use my web-based barcode
generator [1], or they script against the BWIPP resource [2] on which
it is based.

[1] http://www.terryburton.co.uk/barcodewriter/generator/
[2] http://www.terryburton.co.uk/barcodewriter/

Hope this helps,

Tez
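For what it's worth, a label sequence like Wayne's is trivial to generate
for feeding into a label maker or a barcode tool; a minimal sketch (the
function name and defaults are made up for illustration):

```python
# Generate tape label strings "A001" .. "A029", the naming scheme Wayne
# describes, suitable for feeding to a label maker or barcode generator.

def tape_labels(prefix="A", count=29):
    """Return zero-padded labels prefix001 .. prefixNNN."""
    return [f"{prefix}{n:03d}" for n in range(1, count + 1)]

labels = tape_labels()
print(labels[0], labels[-1])   # A001 A029
```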
Re: large dumps - 2.4.2
On Wednesday 21 March 2007, Jurgen Pletinckx wrote:
> I'm trying to revive an installation of amanda 2.4.2 which has
> been left unattended for quite a while. I've flushed whatever
> was in the holding disks to tape, cleaned up the remaining
> cruft, and, very tentatively, started amdump.
>
> Rather to my surprise, that seems to work. Several disks
> from several hosts have been written to tape, and more are
> waiting. I will verify what is actually on tape, but it sure
> looks good.
>
> Except for the following types of failures, that is:
> deepskyblue:/dev/xlv/xlv10 [dumps too big, but cannot
> incremental dump skip-incr disk]
> deepskyblue:/dev/xlv/xlv20 [dump larger than tape, but
> cannot incremental dump skip-incr disk]
>
> Now, the current state of these disks is
> Filesystem     Type    kbytes      use     avail  %use  Mounted on
> /dev/xlv/xlv2  xfs   71124192 57681676  13442516   82   /xlv2
> /dev/xlv/xlv1  xfs   71124160 57826144  13298016   82   /xlv1
>
> but they were larger at the time I started amdump. (Yes, I'm doing
> a bit of spring-cleaning on the disks, while waiting for amdump to
> continue). However, they were well under 65G. I would therefore expect
> them to fit on the 70G tapes I'm using.
>
> Is this a problem that will disappear after a few more amdump runs?
> I.e., the planner just gets the other partitions out of the way first.
>
> Or should I expect to have to alter the disklist, in order to split
> the contents of these large disks over different dumps?

I think I would work out a way to split those big boys up into smaller
pieces of the pie. Amanda's scheduler likes to try to equalize the
amount of tape used to a fairly consistent percentage from run to run,
and you'll make that job a whole lot easier if no one disklist entry is
more than 10-20% of a tape.

I might also add that 2.4.2 is very dusty these days, and it might not
hurt to bring it up to one of the 2.5.x versions. 2.4.2 has had many
years for bit rot to set in now, and it might be doing something a wee
bit differently than the current versions do.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
And they shall beat their swords into plowshares, for if you hit a man
with a plowshare, he's going to know he's been hit.
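Splitting a big filesystem the way Gene suggests usually means switching
that DLE to a GNU-tar based dumptype and listing subdirectories as
separate entries. A sketch, assuming a tar-based dumptype named
`comp-user-tar` (a stock example name in many amanda.conf files) and
made-up subdirectory names:

```
# disklist: replace one oversized DLE with per-subdirectory DLEs.
# Requires a GNU-tar based dumptype, since dump(8) cannot back up
# subdirectories.  The subdirectory names here are hypothetical.

# before:
# deepskyblue  /xlv1            comp-user-tar

# after:
deepskyblue  /xlv1/projects  comp-user-tar
deepskyblue  /xlv1/archive   comp-user-tar
deepskyblue  /xlv1/scratch   comp-user-tar
```

With each entry at 10-20% of a tape, the planner can stagger the level 0
dumps across runs instead of having to fit one huge dump at once.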
Re: problem with client on 64bit machine
On Wed, Mar 14, 2007 at 08:36:38AM -0500, Kenneth Kalan wrote:
> I cannot get the 64 bit box to backup.

NetBSD/alpha as client works on a DEC 1000A in my basement with
amanda 2.4.4.

--
Aaron J. Grier | "Not your ordinary poofy goof." | [EMAIL PROTECTED]
"silly brewer, saaz are for pils!" -- virt
Amanda with Exabyte Magnum 224 LTO
Has anyone used Amanda with the Exabyte Magnum 224 (or 448) LTO?
Re: large dumps - 2.4.2
On Wed, Mar 21, 2007 at 04:27:56PM +0100, Jurgen Pletinckx wrote:
> Except for the following types of failures, that is:
> deepskyblue:/dev/xlv/xlv10 [dumps too big, but cannot
> incremental dump skip-incr disk]
> deepskyblue:/dev/xlv/xlv20 [dump larger than tape, but
> cannot incremental dump skip-incr disk]
>
> Now, the current state of these disks is
> Filesystem     Type    kbytes      use     avail  %use  Mounted on
> /dev/xlv/xlv2  xfs   71124192 57681676  13442516   82   /xlv2
> /dev/xlv/xlv1  xfs   71124160 57826144  13298016   82   /xlv1
>
> but they were larger at the time I started amdump. (Yes, I'm doing
> a bit of spring-cleaning on the disks, while waiting for amdump to
> continue). However, they were well under 65G. I would therefore expect
> them to fit on the 70G tapes I'm using.

Guessing here. You are using DLT tape with a 35GB "native" capacity and
believe the marketing hype that they are "70GB tapes".

Further guessing. The previous amanda admin is using software
compression (gzip) rather than letting the hardware compress things on
the fly. This is very typical and normal. If so, amanda wants to know
the native capacity of the tape, and that is what is specified in the
"tapetype" setting. This is probably between 33 and 35 GB, measured
with the amtapetype program.

If amanda has a history of these DLEs, it knows their compressibility.
It may be more or less than the frequently claimed 50%. OTOH, if
hardware compression is being used, most amanda admins find the 50%
compression claim of the drive manufacturer to be optimistic. Thus your
admin may have listed the tapetype capacity of the drive as something
lower than 70GB.

> Is this a problem that will disappear after a few more amdump runs?
> I.e., the planner just gets the other partitions out of the way first.
>
> Or should I expect to have to alter the disklist, in order to split
> the contents of these large disks over different dumps?

Probably the best thing to do at the moment is continue cleaning out
the file system debris and then retry amdump. Later you can consider
splitting the single DLE into multiple DLEs.

--
Jon H. LaBadie                  [EMAIL PROTECTED]
JG Computing
4455 Province Line Road         (609) 252-0159
Princeton, NJ 08540-4322        (609) 683-7220 (fax)
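Jon's guess is easy to check with a little arithmetic. A sketch using
the df figures quoted above; the 35 GB native capacity is his guess,
not a measured amtapetype value:

```python
# Sanity-check whether a software-compressed full dump fits on a DLT
# tape.  The kbytes figure comes from the df output quoted above; the
# native capacity is an assumption ("70GB" tapes = 35 GB native with
# the vendor's optimistic 2:1 compression claim).

used_kb = 57826144          # /dev/xlv/xlv1, kbytes in use
native_tape_gb = 35         # assumed native DLT capacity

used_gb = used_kb / 1024 / 1024
# Ratio the data must compress to for a level 0 to fit on one tape:
needed_ratio = native_tape_gb / used_gb

print(f"data: {used_gb:.1f} GB, must compress to {needed_ratio:.0%} of original")
# data: 55.1 GB, must compress to 63% of original
```

So even the smaller of the two filesystems only fits if gzip shrinks it
to about 63% of its original size, which is plausible but far from
guaranteed; that is consistent with the "dump larger than tape" message.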
Re: large dumps - 2.4.2
Jurgen Pletinckx wrote:
> I'm trying to revive an installation of amanda 2.4.2 which has
> been left unattended for quite a while. I've flushed whatever
> was in the holding disks to tape, cleaned up the remaining
> cruft, and, very tentatively, started amdump.
>
> Rather to my surprise, that seems to work. Several disks
> from several hosts have been written to tape, and more are
> waiting. I will verify what is actually on tape, but it sure
> looks good.

What did you expect from amanda? ;)

> Except for the following types of failures, that is:
> deepskyblue:/dev/xlv/xlv10 [dumps too big, but cannot
> incremental dump skip-incr disk]
> deepskyblue:/dev/xlv/xlv20 [dump larger than tape, but
> cannot incremental dump skip-incr disk]
>
> Now, the current state of these disks is
> Filesystem     Type    kbytes      use     avail  %use  Mounted on
> /dev/xlv/xlv2  xfs   71124192 57681676  13442516   82   /xlv2
> /dev/xlv/xlv1  xfs   71124160 57826144  13298016   82   /xlv1
>
> but they were larger at the time I started amdump. (Yes, I'm doing
> a bit of spring-cleaning on the disks, while waiting for amdump to
> continue). However, they were well under 65G. I would therefore expect
> them to fit on the 70G tapes I'm using.

I assume you have other DLEs in your config as well. The message only
tells you "in this run these DLEs wouldn't fit onto tape anymore (in
sum with the other DLEs) and I *have* to do a level 0 (= full) backup
at first". This is perfectly OK for a first run of a config.

> Is this a problem that will disappear after a few more amdump runs?
> I.e., the planner just gets the other partitions out of the way first.

Kind of. You should see a level 0 of those DLEs soon.

> Or should I expect to have to alter the disklist, in order to split
> the contents of these large disks over different dumps?

You could do that; it depends on your overall volume to dump and its
relation to your number of tapes and their size. I would wait for the
next runs.

S
large dumps - 2.4.2
I'm trying to revive an installation of amanda 2.4.2 which has been
left unattended for quite a while. I've flushed whatever was in the
holding disks to tape, cleaned up the remaining cruft, and, very
tentatively, started amdump.

Rather to my surprise, that seems to work. Several disks from several
hosts have been written to tape, and more are waiting. I will verify
what is actually on tape, but it sure looks good.

Except for the following types of failures, that is:

deepskyblue:/dev/xlv/xlv10 [dumps too big, but cannot incremental dump skip-incr disk]
deepskyblue:/dev/xlv/xlv20 [dump larger than tape, but cannot incremental dump skip-incr disk]

Now, the current state of these disks is

Filesystem     Type    kbytes      use     avail  %use  Mounted on
/dev/xlv/xlv2  xfs   71124192 57681676  13442516   82   /xlv2
/dev/xlv/xlv1  xfs   71124160 57826144  13298016   82   /xlv1

but they were larger at the time I started amdump. (Yes, I'm doing a
bit of spring-cleaning on the disks, while waiting for amdump to
continue). However, they were well under 65G. I would therefore expect
them to fit on the 70G tapes I'm using.

Is this a problem that will disappear after a few more amdump runs?
I.e., the planner just gets the other partitions out of the way first.

Or should I expect to have to alter the disklist, in order to split
the contents of these large disks over different dumps?

--
Jurgen Pletinckx
AlgoNomics NV
Re: amrecover problem with spaces in directory names
On Wednesday 21 March 2007, Jean-Louis Martineau wrote:
>Steven,
>
>It is a known bug, it is already fixed in the CVS tree.
>Try the latest 2.5.1p3 snapshot from
>http://www.zmanda.com/community-builds.php
>
>You can also try to use wildcard: cd Directory?with?space
>
>Jean-Louis
>
>Steven Atkinson wrote:
>> Hi,
>>
>> I am using amanda-2.5.1p3 with DLEs backed up using tar. When a
>> directory name contains a space it does not seem possible to cd into
>> it to recover files. This is a copy of the output received when
>> trying to change into a directory called "Directory with space".
>>
>> amrecover> ls
>> 2007-03-21 amanda-2.5.1p2.tar.gz
>> 2007-03-21 "Directory with space/"
>> 2007-03-21 .ssh/
>> amrecover> cd "Directory with space/"
>> Invalid directory - "Directory with space/"
>> amrecover> cd "Directory with space"
>> Invalid directory - "Directory with space"
>> amrecover> cd Directory with space
>> Invalid directory - Directory
>> syntax error
>> amrecover> cd Directory\ with\ space
>> "Directory\" is not a valid shell wildcard pattern: trailing backslash
>> (\)
>> syntax error
>>
>> It is possible to add and extract the directory. This makes the
>> problem inconvenient rather than crippling.
>>
>> amrecover> add "Directory with space"
>> Added dir "/atn/Directory with space/" at date 2007-03-21
>> amrecover> extract
>>
>> Extracting files using tape drive /dev/nst0 on host
>> backup-01.fhs.local. The following tapes are needed: Archive02-002
>>
>> Restoring files into directory /var/spool/amanda/tmp
>> Continue [?/Y/n]? Y
>>
>> Extracting files using tape drive /dev/nst0 on host
>> backup-01.fhs.local. Load tape Archive02-002 now
>> Continue [?/Y/n/s/t]? Y
>>
>> Does anyone have any pointers as to where to start looking to fix this
>> issue.
>>
>> Thanks
>> Steve Atkinson
>>
>> --
>> Deputy Network Manager
>> Fallibroome High School
>> UK.

Have you moved the snapshot location? I've been getting mine from your
umontreal site since forever... But I notice a 'low level of activity'
there in recent weeks.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Newlan's Truism:
An "acceptable" level of unemployment means that the government
economist to whom it is acceptable still has a job.
Re: amrecover problem with spaces in directory names
Steven,

It is a known bug, it is already fixed in the CVS tree.
Try the latest 2.5.1p3 snapshot from
http://www.zmanda.com/community-builds.php

You can also try to use a wildcard: cd Directory?with?space

Jean-Louis

Steven Atkinson wrote:
> Hi,
>
> I am using amanda-2.5.1p3 with DLEs backed up using tar. When a
> directory name contains a space it does not seem possible to cd into
> it to recover files. This is a copy of the output received when
> trying to change into a directory called "Directory with space".
>
> amrecover> ls
> 2007-03-21 amanda-2.5.1p2.tar.gz
> 2007-03-21 "Directory with space/"
> 2007-03-21 .ssh/
> amrecover> cd "Directory with space/"
> Invalid directory - "Directory with space/"
> amrecover> cd "Directory with space"
> Invalid directory - "Directory with space"
> amrecover> cd Directory with space
> Invalid directory - Directory
> syntax error
> amrecover> cd Directory\ with\ space
> "Directory\" is not a valid shell wildcard pattern: trailing backslash (\)
> syntax error
>
> It is possible to add and extract the directory. This makes the
> problem inconvenient rather than crippling.
>
> amrecover> add "Directory with space"
> Added dir "/atn/Directory with space/" at date 2007-03-21
> amrecover> extract
>
> Extracting files using tape drive /dev/nst0 on host
> backup-01.fhs.local. The following tapes are needed: Archive02-002
>
> Restoring files into directory /var/spool/amanda/tmp
> Continue [?/Y/n]? Y
>
> Extracting files using tape drive /dev/nst0 on host
> backup-01.fhs.local. Load tape Archive02-002 now
> Continue [?/Y/n/s/t]? Y
>
> Does anyone have any pointers as to where to start looking to fix
> this issue.
>
> Thanks
> Steve Atkinson
>
> --
> Deputy Network Manager
> Fallibroome High School
> UK.
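The wildcard workaround works because amrecover's cd accepts
single-character glob wildcards, so `?` matches each space. A trivial
helper (purely illustrative, for composing the command by hand) shows
the transformation:

```python
# Turn a directory name containing spaces into the '?' wildcard form
# that affected amrecover versions accept, as Jean-Louis suggests.
# Note '?' matches ANY single character, so in rare cases the pattern
# could also match a similarly named sibling directory.

def amrecover_pattern(name: str) -> str:
    """Replace each space with the single-character wildcard '?'."""
    return name.replace(" ", "?")

print(amrecover_pattern("Directory with space"))  # Directory?with?space
```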
amrecover problem with spaces in directory names
Hi,

I am using amanda-2.5.1p3 with DLEs backed up using tar. When a
directory name contains a space it does not seem possible to cd into
it to recover files. This is a copy of the output received when trying
to change into a directory called "Directory with space".

amrecover> ls
2007-03-21 amanda-2.5.1p2.tar.gz
2007-03-21 "Directory with space/"
2007-03-21 .ssh/
amrecover> cd "Directory with space/"
Invalid directory - "Directory with space/"
amrecover> cd "Directory with space"
Invalid directory - "Directory with space"
amrecover> cd Directory with space
Invalid directory - Directory
syntax error
amrecover> cd Directory\ with\ space
"Directory\" is not a valid shell wildcard pattern: trailing backslash (\)
syntax error

It is possible to add and extract the directory. This makes the
problem inconvenient rather than crippling.

amrecover> add "Directory with space"
Added dir "/atn/Directory with space/" at date 2007-03-21
amrecover> extract

Extracting files using tape drive /dev/nst0 on host
backup-01.fhs.local. The following tapes are needed: Archive02-002

Restoring files into directory /var/spool/amanda/tmp
Continue [?/Y/n]? Y

Extracting files using tape drive /dev/nst0 on host
backup-01.fhs.local. Load tape Archive02-002 now
Continue [?/Y/n/s/t]? Y

Does anyone have any pointers as to where to start looking to fix this
issue.

Thanks
Steve Atkinson

--
Deputy Network Manager
Fallibroome High School
UK.