Re: How many dumpers active?
"John R. Jackson" wrote: > > >(What's "broken" about it...?) > > It doesn't generate any of that output I posted :-(. I'm guessing it > might be because the dump was finished, so it skipped over doing some > things that might show up while it's still running. On my SuSE amanda-2.4.1p1-7 here, I get the same behaviour: no percentages and the like, just a summary is printed (when amstatus is run after the dump has already finished): # amstatus Set1 --file amdump.1 --summary Using /var/lib/amanda/Set1/amdump.1 SUMMARY part real estimated size size partition : 30 estimated : 301015944k failed : 3 380096k wait for dumping: 0 0k dumping to tape : 0 0k dumping : 00k0k dumped : 27 657920k 635848k wait for writing: 00k0k writing to tape : 00k0k failed to tape : 00k0k taped : 27 657920k 635848k 8 dumpers idle : not-idle taper writing, tapeq: 0 network free kps: 2600 holding space : 883100 That's all! I suppose, it has never been different... -- Regards Chris Karakas DonĀ“t waste your cpu time - crack rc5: http://www.distributed.net
Re: How many dumpers active?
>Looking at the amstatus code in 2.4.2 indicates the output you indicated
>*is* displayed if you pass the --stats option.

Thanks.  I used to have a mix of 2.4.1p1 and 2.4.2 and I don't think
this arg used to be needed.

>However, I don't see any way to get that output using 2.4.1p1.

I don't remember when I added that code.  It might have been post-2.4.1p1
and so only worked locally.  Or I might be totally confused about all
of this :-).

>Darin Dugan

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: How many dumpers active?
At 05:42 PM 11/9/2000, John R. Jackson wrote:
>[...]
>For instance, I have this as part of my "run-amanda" script right
>after amdump (the syntax may be different at 2.4.1p1 -- I forget):
>
>  amstatus ${config} --summary --file amdump.1
>
>Since the most recent amdump. file is always amdump.1, this gives
>me a summary of what happened during this run.
>
>Part of the output looks like this (although I just noticed it's broken
>with 2.4.2 -- sigh :-):
>[...]

Looking at the amstatus code in 2.4.2 indicates the output you indicated
*is* displayed if you pass the --stats option.  However, I don't see any
way to get that output using 2.4.1p1.

What version do you normally use that does work?

>John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

--
Darin Dugan
[EMAIL PROTECTED]
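For reference, a hedged example of the 2.4.2 invocation Darin describes
(${config} stands for your configuration name, as in John's script quoted
above; whether --summary and --stats can be combined in one call is an
assumption on my part):

  amstatus ${config} --file amdump.1 --summary --stats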
Re: How many dumpers active?
>Up to MAX_DUMPERS (which in my copy of server-src/driverio.c is 63).

True.

>amplot is handy for this, as well ...

Also true.

>(What's "broken" about it...?)

It doesn't generate any of that output I posted :-(.  I'm guessing it
might be because the dump was finished, so it skipped over doing some
things that might show up while it's still running.

>david

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: How many dumpers active?
>Date: Thu, 09 Nov 2000 18:42:16 -0500
>From: "John R. Jackson" <[EMAIL PROTECTED]>

>>How do I know how many dumpers are active?  ...

>Amanda will start the number you tell it to.

Up to MAX_DUMPERS (which in my copy of server-src/driverio.c is 63).

>The real question is,
>how well are they being used.  The easiest way to tell is with amstatus.
>In particular, running it against a completed amdump. file.

amplot is handy for this, as well (and for checking on various resource
constraints in general).

>For instance, I have this as part of my "run-amanda" script right
>after amdump (the syntax may be different at 2.4.1p1 -- I forget):
>
>  amstatus ${config} --summary --file amdump.1
>
>Since the most recent amdump. file is always amdump.1, this gives
>me a summary of what happened during this run.
>
>Part of the output looks like this (although I just noticed it's broken
>with 2.4.2 -- sigh :-):

(What's "broken" about it...?)

Cheers,
david
--
David Wolfskill      [EMAIL PROTECTED]
UNIX System Administrator
Desk: 650/577-7158   TIE: 8/499-7158   Cell: 650/759-0823
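A sketch of the amplot usage David mentions (the directory below is only an
example -- where the amdump.* files live depends on your logdir setting --
and amplot needs gnuplot on the server to do anything):

  cd /var/lib/amanda/Set1      # wherever your amdump.* files are kept
  amplot amdump.1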
Re: How many dumpers active?
>How do I know how many dumpers are active?  ...

Amanda will start the number you tell it to.  The real question is,
how well are they being used.  The easiest way to tell is with amstatus,
in particular running it against a completed amdump. file.

For instance, I have this as part of my "run-amanda" script right
after amdump (the syntax may be different at 2.4.1p1 -- I forget):

  amstatus ${config} --summary --file amdump.1

Since the most recent amdump. file is always amdump.1, this gives
me a summary of what happened during this run.

Part of the output looks like this (although I just noticed it's broken
with 2.4.2 -- sigh :-):

  ...
  dumper0 busy   :  5:52:01  ( 98.03%)
  dumper1 busy   :  0:23:09  (  6.45%)
  dumper2 busy   :  0:13:27  (  3.75%)
  dumper3 busy   :  0:16:13  (  4.52%)
  dumper4 busy   :  0:06:40  (  1.86%)
  dumper5 busy   :  0:03:39  (  1.02%)
    taper busy   :  3:54:20  ( 65.26%)
  0 dumpers busy :  0:03:21  (  0.93%)  file-too-large:  0:03:21  (100.00%)
  1 dumper busy  :  4:03:22  ( 67.78%)    no-diskspace:  3:40:55  ( 90.77%)
                                        file-too-large:  0:21:13  (  8.72%)
                                          no-bandwidth:  0:01:13  (  0.50%)
  2 dumpers busy :  0:17:33  (  4.89%)    no-bandwidth:  0:17:33  (100.00%)
  3 dumpers busy :  0:07:42  (  2.14%)    no-bandwidth:  0:07:42  (100.00%)
  4 dumpers busy :  0:02:05  (  0.58%)    no-bandwidth:  0:02:05  (100.00%)
  5 dumpers busy :  0:00:40  (  0.19%)    no-bandwidth:  0:00:40  (100.00%)
  6 dumpers busy :  0:03:33  (  0.99%)        not-idle:  0:01:53  ( 53.10%)
                                            no-dumpers:  0:01:40  ( 46.90%)

This says:

  dumper 0 was busy almost all the time
  dumper 1 (and above) were not used very much
  taper was busy about 2/3 of the total run time
  all dumpers were idle less than 1% of the total run time
  one dumper was busy 67.78% of the total run time, and the reasons a
    second dumper was not started while one was busy were: not enough
    holding disk space (no-diskspace) 90.77% of that time, the next image
    to dump was too large to fit in the holding disk at all
    (file-too-large) 8.72% of that time, and network bandwidth was
    exhausted (no-bandwidth) 0.50% of that time

BTW, the above is straight out of the book chapter.

>...  It said both dumpers
>were active, but there were no dumper processes on the one client box
>in my disklist file.

The dumper process only runs on the server, not the client.  Each dumper
is responsible for reaching out to one client and backing up one "disk".

What, exactly, were you trying to do?  If you were trying to increase
the number of dumps that go on at once on a single client, you need to
increase maxdumps.  If you need more dumpers because clients are
"starved" for service, then increase inparallel.

>Robert

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
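The two parameters John names at the end map onto the configuration files
like this -- a hedged amanda.conf sketch, with arbitrary example values and
a made-up dumptype name:

  # amanda.conf (server side)
  inparallel 8                  # total dumpers the server may run at once

  define dumptype client-parallel {
      comment "example: allow two simultaneous dumps per client"
      program "GNUTAR"
      compress client fast
      maxdumps 2                # simultaneous dumps on a single client
  }

To use it, reference the dumptype in the disklist entries for the clients
you want dumped in parallel.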