Stefan,

On Tue, Nov 30, 2004 at 06:10:22PM +0100, Stefan G. Weichinger wrote:
> Hi, Brian,
>
> (let's keep this thread inside this list, pls, it's easier to follow
> things)
Of course, had intended to CC the list.

> on Tuesday, 30. November 2004 at 15:43 you wrote to amanda-users:
>
> >> you mean "holding disk" in AMANDA-terms?
>
> BC> Yes, "holding disk". In more generic terms a "spooling" area for the
> BC> dump/tar'd, possibly compressed files before they are DD'd to tape.
> BC> Or, and this happens periodically, there is some subtlety to the
> BC> term "spooling" that I've missed or forgotten? (Which isn't to say
> BC> the conversation wouldn't be smoother if I'd used the proper jargon
> BC> in context)
>
> It's Ok, no problem, I just wanted to be sure that we talk about the
> same thing and not about any print-spooler or something. You know,
> using the same terms helps ;-)

Yes, common terminology helps. There was this one time in SF... but that
is an almost wholly unrelated digression.

> BC> Since there was no holding area and we were running direct to tape,
> BC> it seems that we were ok for any DLE that was both started and
> BC> completed to a single output tape volume.
>
> BC> However a DLE that "didn't fit" on the remaining tape didn't restart
> BC> the TAR when the next tape volume was loaded in the drive, nor was
> BC> it completed in the holding area for a retry of the DD.
>
> BC> At least that is what it looks like to me, a not very clearly noted
> BC> failure.
>
> BC> I'll forward the report to you.
>
> BC> Unfortunately the raid array is occupied by large data sets (the
> BC> users are analyzing RNA structures) and there is no knowing in
> BC> advance how much data an individual will have nor how much of it
> BC> has been changed between runs.
>
> AFAI can see from your report this has been the second run of this
> configuration as the planner added 13 out of 14 DLEs as "new disk".

Yes. The very first run after (reinstalling the physically failed)
jukebox - note: do not let anyone load tapes with the paper instructions -
was of only the root partition; I then added the /usr5/* directories and
the /usr1 partition. The idea being that /usr1 and root, being
"relatively small", would use dump, and the directories on the raid-based
partition would use tar.

> So it is very likely that not all of your level 0 backups that have to
> be done first for new DLEs will fit on your tapes.

Given a large enough holding area I'd expect that any DD that hit EOT
would retry; that is how it operates on my Solaris 9/jukebox/LTO amanda
server. The problem here, I believe, is the inability to restart the DLE
from the TAR on down: without a holding area there is no file to attempt
to DD to the next tape volume.

> I understand that you can't run this config every day as it seems to
> have run for full 3 days this time.

After the initial run I added /usr5/dumps (which is on the same raid
partition) and run time dramatically improved.

** This is also misleading, as the failed level 0s from the previous run
should have re-run at level zero, yet many of them ran at level 1. This
is a "second" problem, a result of the first but a completely different
part of the logic.

> Please show me your amanda.conf also so I can see your tapetype (seems
> to be 160000 Mb "long") and dumptypes.

The initial run had a tapetype of

# Quantum sdlt 320, I don't know filemark, mostly its the
# length that is important anyway.
define tapetype SDLT {
    comment "QUAMTUM SDLT320"
    length 160 mbytes
    filemark 100 kbytes         # don't know a better value
    speed 100 kbytes            # dito
}

which is incorrect for the SDLT 320. I've since increased it to
"length 160000 mbytes", which I'd thought (mistakenly?) was correct for
the drive.
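For reference, the definition I'm aiming for now is along the lines of
the sketch below; the 160000 mbytes matches the SDLT 320's 160 GB native
capacity, while the filemark and speed entries are still the old guesses
rather than measured values (if I get the chance I'll let amtapetype, or
whatever this version ships for measuring tapetypes, produce real
numbers):

# Quantum SDLT 320: 160 GB native, roughly 320 GB assuming 2:1 hardware
# compression; planning against the native figure seems the safer bet
define tapetype SDLT {
    comment "Quantum SDLT320"
    length 160000 mbytes        # native capacity
    filemark 100 kbytes         # guess, not measured
    speed 100 kbytes            # guess, not measured
}

With hardware compression on, more may actually fit on a given tape, but
letting the planner assume only the native length seems the safer bet.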
> I am not sure right now if your AMANDA-version supports the parameter
> "taperalgo" yet, also I am not sure if it helps you when you don't
> have any holdingdisk.

There is a dumporder parameter in amanda.conf, which I believe came from
the current template when I installed amanda on this server. I do not
know if dumporder is used when scheduling the clients or when scheduling
the taper.

> From your second posting I now see that you now have got a
> holdingdisk, which will help you A LOT if it is of any reasonable
> size. This will buffer things and enable AMANDA to retry things.

Yes, though I have yet to hear back from the dept contact. I don't know
if I'm going to be able to keep a holding area on /usr5; it is more than
likely to interfere with user processing or be unavailable when I need
it for amanda. I really should have another spindle, ideally as large as
the total usage of the top 2 users on /usr5 - however that is about
300 Gig. Also, having a holding area on the same "partition" as the file
structure being saved has got to be a questionable move, raid-based or
not. (I've put a sketch of the holdingdisk stanza I have in mind at the
very bottom of this message, after the disklist output.)

> Gene wrote (in an off-list reply, as it seems):
>
> >> >samar / comp-root
> >>
> >> Here is the first biter I think. Tar runs recursively thru the named
> >> directories, meaning the above line would also include all below,
> >> UNLESS you have added to the comp-root define in your amanda.conf,
> >> an "exclude file=/path/to/somefile" that names ./usr1 and ./usr5 on
> >> separate lines of that file.
>
> And you wrote:
>
> > ya know, I'm not seeing it explicitly but the default is to
> > dump via xfsdump
>
> GNUTAR or DUMP?

I meant to say that I believe dump is used in favor of tar unless
explicitly specified. I was looking for some verification of that in
amanda.conf but didn't see it. I know that /etc/fstab is checked to
determine which of efsdump or xfsdump will be used - this is an IRIX
system, and those are the vendor-specific filesystems, not UFS nor
ext2(?) (as on linux). I did install gnutar and configured amanda to
find it.

> The first would be able to dump subdirectories, the latter dumps
> partitions.
>
> Run "amadmin samar disklist" and have a close look how AMANDA
> interprets your whole config.

Cool, never ran that before. I've included the (head of) the output
below. It looks to be using dump vs tar where I'd intended it to.

> From your report it seems to be clear that gnutar is run, but I don't
> know if you know and want that.

The idea was to run gnutar for the /usr5 directories (vendor-specific
tars were never encouraged, from what I recall of other discussions,
true), since I have no tape that will support a DLE anywhere near the
size of this partition (0.8 TBytes).

> Gene is right with pointing at the missing exclusion, you also see the
> active exclusions in the output of the mentioned command.

This should be a non-issue though, since I am using dump on / (root)?

> BTW, also have a look at your "columnspec"-parameter to pretty up your
> reports.

Yes, I'll take a second pass at that. I should install the more recent
version of amanda, since I saw that it now supports larger "units" of
measurement.

> ;-)
>
> That much for a start, keep us informed.

Thanks very much, will do.

					Brian

> --
> best regards,
> Stefan
>
> Stefan G. Weichinger
> mailto:[EMAIL PROTECTED]

---
   Brian R Cuttler                 [EMAIL PROTECTED]
   Computer Systems Support        (v) 518 486-1697
   Wadsworth Center                (f) 518 473-6384
   NYS Department of Health        Help Desk 518 473-0773

samar 33# /usr/local/sbin/amadmin samar disklist
line 1:
    host samar:
        interface default
    disk /:
        program "DUMP"
        priority 0
        dumpcycle 7
        maxdumps 1
        maxpromoteday 10000
        strategy STANDARD
        compress CLIENT FAST
        comprate 0.50 0.50
        auth BSD
        kencrypt NO
        holdingdisk YES
        record YES
        index NO
        skip-incr NO
        skip-full NO
line 2:
    host samar:
        interface default
    disk /usr1:
        program "DUMP"
        priority 1
        dumpcycle 7
        maxdumps 1
        maxpromoteday 10000
        strategy STANDARD
        compress CLIENT FAST
        comprate 0.50 0.50
        auth BSD
        kencrypt NO
        holdingdisk YES
        record YES
        index NO
        skip-incr NO
        skip-full NO
line 3:
    host samar:
        interface default
    disk /usr5/amanda:
        program "GNUTAR"
        priority 1
        dumpcycle 7
        maxdumps 1
        maxpromoteday 10000
        strategy STANDARD
        compress CLIENT FAST
        comprate 0.50 0.50
        auth BSD
        kencrypt NO
        holdingdisk YES
        record YES
        index YES
        skip-incr NO
        skip-full NO
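P.S. The holdingdisk stanza I have in mind for /usr5/dumps is roughly
the sketch below. The name and the numbers are placeholders rather than
anything tested: the "use" figure is just a guess at what I could spare
until I hear back from the dept contact, and chunksize is only there so
large dump images get split into reasonably sized holding files.

# hypothetical amanda.conf stanza - hd1, use and chunksize are
# placeholder values, not measured or agreed-on numbers
holdingdisk hd1 {
    comment "temporary holding area on the raid partition"
    directory "/usr5/dumps"
    use 100000 mbytes           # guess at how much of /usr5 can be spared
    chunksize 1024 mbytes       # cap the size of individual holding files
}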