Re: Nitpick - amreport statistics

2010-09-21 Thread Dustin J. Mitchell
On Mon, Sep 20, 2010 at 6:56 PM, Jon LaBadie j...@jgcomp.com wrote:
 Looks better to my eye.  Anyone else?

With Jean-Louis' review, committed in r3428.  Thanks!

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Re: Nitpick - amreport statistics

2010-09-20 Thread Dustin J. Mitchell
On Thu, Sep 16, 2010 at 4:52 PM, Dustin J. Mitchell dus...@zmanda.com wrote:
 Are you working on this?  Perhaps someone else on the list can jump in?

Hmm, well, I just took care of this:

  http://github.com/djmitche/amanda/commit/z11908.patch

                          Total       Full      Incr.   Level:#
                        --------   --------   --------  -------
Estimate Time (hrs:min)    0:00
Run Time (hrs:min)         0:00
Dump Time (hrs:min)        0:00       0:00       0:00
Output Size (meg)           0.0        0.0        0.0
Original Size (meg)         0.0        0.0        0.0
Avg Compressed Size (%)   100.0        --       100.0
DLEs Dumped                   1          0          1    1:1
Avg Dump Rate (k/s)      3750.0        --      3750.0

Tape Time (hrs:min)        0:00       0:00       0:00
Tape Size (meg)             0.0        0.0        0.0
Tape Used (%)               0.0        0.0        0.0
DLEs Taped                    1          0          1    1:1
Parts Taped                   1          0          1    1:1
Avg Tp Write Rate (k/s)   300.0        --       300.0

Jon, what do you think?

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Re: Nitpick - amreport statistics

2010-09-20 Thread Jon LaBadie
On Mon, Sep 20, 2010 at 04:22:34PM -0500, Dustin J. Mitchell wrote:
 On Thu, Sep 16, 2010 at 4:52 PM, Dustin J. Mitchell dus...@zmanda.com wrote:
  Are you working on this?  Perhaps someone else on the list can jump in?
 
 Hmm, well, I just took care of this:
 
   http://github.com/djmitche/amanda/commit/z11908.patch
 
                           Total       Full      Incr.   Level:#
                         --------   --------   --------  -------
 Estimate Time (hrs:min)    0:00
 Run Time (hrs:min)         0:00
 Dump Time (hrs:min)        0:00       0:00       0:00
 Output Size (meg)           0.0        0.0        0.0
 Original Size (meg)         0.0        0.0        0.0
 Avg Compressed Size (%)   100.0        --       100.0
 DLEs Dumped                   1          0          1    1:1
 Avg Dump Rate (k/s)      3750.0        --      3750.0

 Tape Time (hrs:min)        0:00       0:00       0:00
 Tape Size (meg)             0.0        0.0        0.0
 Tape Used (%)               0.0        0.0        0.0
 DLEs Taped                    1          0          1    1:1
 Parts Taped                   1          0          1    1:1
 Avg Tp Write Rate (k/s)   300.0        --       300.0
 
 Jon, what do you think?
 

Looks better to my eye.  Anyone else?

jl
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Nitpick - amreport statistics

2010-09-16 Thread Dustin J. Mitchell
On Thu, Sep 9, 2010 at 11:05 PM, Dustin J. Mitchell dus...@zmanda.com wrote:
 I don't mean to put you on the spot - like I said, I can take care of
 this if you'd prefer.

Are you working on this?  Perhaps someone else on the list can jump in?

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Nitpick - amreport statistics

2010-09-09 Thread Jon LaBadie
On Thu, Sep 09, 2010 at 10:55:55AM -0500, Dustin J. Mitchell wrote:
 I bet most of you have some small nitpick with Amanda that you've
 never felt warranted an email.  Well, now's your chance!  I'd like to
 put some polish on Amanda, and it's hard for me to see the areas that
 need burnishing, since I work on Amanda all day, every day.
 
  - typo in a manpage?
  - command-line usage oddity?
  - confusing use of terminology?
  - something else?
 
 Start up a new thread on the mailing list, or email me privately if
 you'd prefer, to let me know what's bugging you.  Bonus points for
 also supplying a patch, but that's not at all required!
 
 Note that I do reserve the right to say, "actually, that's
 complicated" (and explain why).

Ok, here is an easy one for you; or someone who can find where
in the code these data are printed.

The STATISTICS section of amreport has bothered me.  Here is
a sample from a recent 3.1.2 report.


STATISTICS:
                          Total       Full      Incr.
                        --------   --------   --------
Estimate Time (hrs:min)    0:07
Run Time (hrs:min)         4:25
Dump Time (hrs:min)        8:58       5:08       3:50
Output Size (meg)       52665.6    22391.0    30274.6
Original Size (meg)     79151.3    32995.2    46156.0
Avg Compressed Size (%)    66.5       67.9       65.6   (level:#disks ...)
Filesystems Dumped           24          3         21   (1:20 2:1)
Avg Dump Rate (k/s)      1669.8     1240.2     2245.0

Tape Time (hrs:min)        1:36       0:47       0:49
Tape Size (meg)         52665.6    22391.0    30274.6
Tape Used (%)             205.7       87.5      118.3   (level:#disks ...)
Filesystems Taped            24          3         21   (1:20 2:1)
                                                        (level:#parts ...)
Parts Taped                  39          8         31   (1:30 2:1)
Avg Tp Write Rate (k/s)  9367.7     8156.7    10523.2


First, what I call a DLE is referred to as "Filesystem" in some places
and "disk" in others.

Second, I don't like the repeating extra column headings printed three
times after the Incr. column.  The headings should be moved to the
top and the unnecessary blank line reclaimed.  I'm not against blank
lines to group things, but not just to squeeze in an extra column header.

Two ways to fix the extra headers: move them to the top, or put
them at the end of the first column, i.e. make "Parts Taped"
into "Parts Taped (lvl:#parts ...)".

Here is my suggested revision:



STATISTICS:
                          Total       Full      Incr.   (lvl:# ...)
                        --------   --------   --------  -----------
Estimate Time (hrs:min)    0:07
Run Time (hrs:min)         4:25
Dump Time (hrs:min)        8:58       5:08       3:50
Output Size (meg)       52665.6    22391.0    30274.6
Original Size (meg)     79151.3    32995.2    46156.0
Avg Compressed Size (%)    66.5       67.9       65.6
DLEs Dumped                  24          3         21   (1:20 2:1)
Avg Dump Rate (k/s)      1669.8     1240.2     2245.0

Tape Time (hrs:min)        1:36       0:47       0:49
Tape Size (meg)         52665.6    22391.0    30274.6
Tape Used (%)             205.7       87.5      118.3
DLEs Taped                   24          3         21   (1:20 2:1)
Parts Taped                  39          8         31   (1:30 2:1)
Avg Tp Write Rate (k/s)  9367.7     8156.7    10523.2


Might not even need the parens.

Hmmm, I wonder why incrementals always seem to dump at double
the rate of level 0s?

jl
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Nitpick - amreport statistics

2010-09-09 Thread Dustin J. Mitchell
On Thu, Sep 9, 2010 at 10:26 PM, Jon LaBadie j...@jgcomp.com wrote:
 First, what I call a DLE is referred to as "Filesystem" in some places
 and "disk" in others.

 Second, I don't like the repeating extra column headings printed three
 times after the Incr. column.  The headings should be moved to the
 top and the unnecessary blank line reclaimed.  I'm not against blank
 lines to group things, but not just to squeeze in an extra column header.

 Two ways to fix the extra headers: move them to the top, or put
 them at the end of the first column, i.e. make "Parts Taped"
 into "Parts Taped (lvl:#parts ...)".

All good suggestions, and I can take care of this if you'd like.  I do
wonder if anyone is parsing those lines and expects those particular
row labels, but I suppose they can update their regexes if that's the
case.
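As an illustration of the kind of report-scraping that the rename could
break (the sample line and the awk pattern are hypothetical, not from
any known script):

```shell
# A made-up one-liner that pulls the Total column out of a report row.
# A regex written against the old "Filesystems Dumped" label breaks on
# the new "DLEs Dumped" one unless it accepts both spellings:
line='DLEs Dumped                  24          3         21   (1:20 2:1)'
echo "$line" | awk '/^(Filesystems|DLEs) Dumped/ { print $3 }'   # prints 24
```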

For what it's worth, this code is in perl/Amanda/Report/human.pm; on an
installed system it will be in human.pm under either
/usr/lib/amanda/perl or your perl sitedir ("locate human.pm" should
find it).  Do you want to take a crack at making those changes
yourself, and send along either a patch or just the tweaked version of
human.pm (from which I can generate a patch)?
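The patch workflow described here can be sketched as follows; the file
contents and the sed edit are throwaway stand-ins for a real hand-edit
of human.pm:

```shell
# Demo of producing a unified diff to mail to the list; the files are
# stand-ins for human.pm and the edited copy.
printf 'Filesystems Dumped\n' > human.pm.orig
sed 's/Filesystems/DLEs/' human.pm.orig > human.pm        # the hand-edit
diff -u human.pm.orig human.pm > amreport-stats.patch || true  # diff exits 1 on differences
grep '^+DLEs' amreport-stats.patch                        # shows the added line: +DLEs Dumped
```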

I don't mean to put you on the spot - like I said, I can take care of
this if you'd prefer.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



A couple of questions about dump statistics for Amanda 2.5.1p2

2007-01-05 Thread Mark Hennessy
1. Where can I find detailed documentation on what each of the values
provided in the statistics means and if/how they should add up?

2. When Amanda does its estimates for individual file systems being backed up
with dump, how does it do it?
Does it do the same procedure for L0 dumps as for incremental dumps?  Is
there any way to get any time savings if there's a full L0 dump being done,
as opposed to an incremental dump?

STATISTICS:
                          Total       Full      Incr.
                        --------   --------   --------
Estimate Time (hrs:min)    2:01
Run Time (hrs:min)        10:22
Dump Time (hrs:min)        4:44       2:49       1:55
Output Size (meg)       58030.4    38691.5    19338.9
Original Size (meg)     48637.9    38691.5     9946.4
Avg Compressed Size (%)     --         --         --    (level:#disks ...)
Filesystems Dumped           59         12         47   (1:46 2:1)
Avg Dump Rate (k/s)      3483.8     3903.2     2867.4

Tape Time (hrs:min)        1:49       1:19       0:30
Tape Size (meg)         53214.7    38692.0    14522.7
Tape Used (%)              52.5       38.1       14.3   (level:#disks ...)
Filesystems Taped            60         12         48   (1:47 2:1)

Chunks Taped                  0          0          0
Avg Tp Write Rate (k/s)  8367.6     8392.1     8302.9

--
 Mark Hennessy



Amanda Statistics.

2005-06-16 Thread Erik P. Olsen
Hi,

Below the statistics from my last backup. I assume the times are all
elapsed times, but why is amanda so wrong in estimating the times? Or is
it a bug?

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:03
Run Time (hrs:min)         1:11
Dump Time (hrs:min)        0:51       0:51       0:01
Output Size (meg)        7692.4     7648.9       43.5
Original Size (meg)     11996.7    11886.5      110.2
Avg Compressed Size (%)    64.1       64.4       39.4   (level:#disks ...)
Filesystems Dumped            7          2          5   (1:5)
Avg Dump Rate (k/s)      2559.5     2584.4      950.4

Tape Time (hrs:min)        0:49       0:49       0:00

-- 
Regards,
Erik P. Olsen
GPG http://pgp.mit.edu 0x71375E63




Re: Amanda Statistics.

2005-06-16 Thread Christoph Scheeder

Erik P. Olsen schrieb:

Hi,

Below the statistics from my last backup. I assume the times are all
elapsed times, but why is amanda so wrong in estimating the times? Or is
it a bug?

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:03
Run Time (hrs:min)         1:11
Dump Time (hrs:min)        0:51       0:51       0:01
Output Size (meg)        7692.4     7648.9       43.5
Original Size (meg)     11996.7    11886.5      110.2
Avg Compressed Size (%)    64.1       64.4       39.4   (level:#disks ...)
Filesystems Dumped            7          2          5   (1:5)
Avg Dump Rate (k/s)      2559.5     2584.4      950.4

Tape Time (hrs:min)        0:49       0:49       0:00


Errm,
Estimate time is not the estimated time to do the backups, it is the time it 
took to get the estimates for the size of the dumps.

Christoph


Re: Amanda Statistics.

2005-06-16 Thread Mitch Collinsworth


On Thu, 16 Jun 2005, Erik P. Olsen wrote:


Below the statistics from my last backup. I assume the times are all
elapsed times, but why is amanda so wrong in estimating the times? Or is
it a bug?

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:03
Run Time (hrs:min)         1:11
Dump Time (hrs:min)        0:51       0:51       0:01
Output Size (meg)        7692.4     7648.9       43.5
Original Size (meg)     11996.7    11886.5      110.2
Avg Compressed Size (%)    64.1       64.4       39.4   (level:#disks ...)
Filesystems Dumped            7          2          5   (1:5)
Avg Dump Rate (k/s)      2559.5     2584.4      950.4

Tape Time (hrs:min)        0:49       0:49       0:00


0:03 is not the estimated time for the run, it's how long it took to
perform the estimate phase of the run.

-Mitch


Re: statistics

2002-09-05 Thread Gene Heskett

On Thursday 05 September 2002 00:52, Chris Herrmann wrote:
Thanks Gene! Hopefully you've answered my question without me
 needed to ask it :o)

I'm assuming I need to upgrade tar on the client machine, not the
 amanda server...?

Wherever it's actually being used, which I'd assume would be both 
client and server at various times in the nightly backup run.

Are there any source tarballs of tar floating around? The tar
 located:

ftp://ftp.gnu.org/gnu/tar/

is 1.13.19

which comes with RH7.1... there's an rpm for 1.13.25 but i'll need
 to upgrade some libs
error: failed dependencies:
libc.so.6(GLIBC_2.2.3)   is needed by tar-1.13.25-4

which I'm not keen to do unless I really have to.

The 1.13-25 version has been on alpha.gnu.org for about a year now, but 
the tarballs of either will compile just fine on your machines as 
long as the developer stuff is on them.

FWIW, 1.13-19 is also reported to work well with Amanda.  I was 
automatically recommending the latest just because it's the latest, I 
guess :-)

[...]

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.13% setiathome rank, not too shabby for a WV hillbilly



RE: statistics

2002-09-05 Thread Chris Herrmann

|The 1.13-25 version has been on alpha.gnu.org for about a year now, but
|the tarballs of either will compile just fine on your machines as
|long as the developer stuff is on them.
|
|FWIW, 1.13-19 is also reported to work well with Amanda.  I was
|automatically recommending the latest just because it's the latest, I
|guess :-)
|
Yeah, I'm running 1.13-19 atm, and it's hassle free apart from the reiser
partition

new(er) toys == increased happiness :o)





statistics

2002-09-04 Thread greg



Does this look right?


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:06
Run Time (hrs:min)        11:48
Dump Time (hrs:min)       11:42      11:42       0:01
Output Size (meg)       24171.4    24171.4        0.0
Original Size (meg)     54320.4    54318.3        2.1
Avg Compressed Size (%)    44.5       44.5        1.5   (level:#disks ...)
Filesystems Dumped            3          2          1   (1:1)
Avg Dump Rate (k/s)       587.4      588.0        0.8

Tape Time (hrs:min)       11:41      11:41       0:00
Tape Size (meg)         24171.5    24171.5        0.1
Tape Used (%)              64.5       64.5        0.0   (level:#disks ...)
Filesystems Taped             3          2          1   (1:1)
Avg Tp Write Rate (k/s)   588.5      588.5       29.3


It is saying it took 11 hrs to dump 54GB, or 24GB compressed.  I have a
Quantum DLT8000-40, which is rated at 6MB/s or 12MB/s compressed.  11 hrs
seems a long time even considering gzip as the software compression.
Is there something I am missing here?

-greg



Re: statistics

2002-09-04 Thread Joshua Baker-LePain

On Thu, 5 Sep 2002 at 3:29pm, greg wrote

 Run Time (hrs:min)        11:48
 Dump Time (hrs:min)       11:42      11:42       0:01
 Output Size (meg)       24171.4    24171.4        0.0
 Original Size (meg)     54320.4    54318.3        2.1
 Avg Compressed Size (%)    44.5       44.5        1.5   (level:#disks ...)
 Filesystems Dumped            3          2          1   (1:1)
 Avg Dump Rate (k/s)       587.4      588.0        0.8

 Tape Time (hrs:min)       11:41      11:41       0:00
 Avg Tp Write Rate (k/s)   588.5      588.5       29.3

Notice how Tape Rate ~= Dump Rate?  You're not using a holding disk (or 
not one that's big enough), and so all your dumps are going straight to 
tape.  This means:

a) You can only do one at a time, i.e. all dumps happen serially.

b) You're limited by speed of the client hardware/gzip/network.

A holding disk would help things quite a bit.  Other than that, hardware 
compression might help a bit, but probably not too drastically.
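The rule of thumb above can be checked mechanically; a sketch, with an
arbitrary 5% threshold and the k/s figures from greg's report:

```shell
# If the average dump rate and average tape write rate nearly match,
# dumps are probably going straight to tape (no holding disk in play).
awk -v dump=587.4 -v tape=588.5 'BEGIN {
    r = dump / tape
    msg = (r > 0.95 && r < 1.05) ? "likely direct-to-tape" : "holding disk probably in use"
    print msg
}'   # prints: likely direct-to-tape
```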

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




RE: statistics

2002-09-04 Thread Chris Herrmann

Yes, that's right: the slow part is your computer compressing the data.  The
"offline" message is a yet-to-be-crafted question for this list: Amanda
isn't backing up the reiserfs partition on one of the machines.  The drive is
an SDLT110, there are 3 servers, and a dedicated 40G holding disk.  After
setting it up this way, the performance of Amanda skyrocketed (by having a
dedicated holding disk, a fast backup drive, and one server getting multiple
clients to do client-side compression).

Cheers,

Chris
--

These dumps were to tape Loyalty003.
The next tape Amanda expects to use is: Loyalty004.
FAILURE AND STRANGE DUMP SUMMARY:
diablo /dev/hde1 lev 0 FAILED [disk /dev/hde1 offline on diablo?]

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:04
Run Time (hrs:min)         6:51
Dump Time (hrs:min)        9:09       8:47       0:22
Output Size (meg)       19111.8    16576.7     2535.1
Original Size (meg)     35566.2    31388.3     4177.9
Avg Compressed Size (%)    53.7       52.8       60.7   (level:#disks ...)
Filesystems Dumped           12         11          1   (1:1)
Avg Dump Rate (k/s)       594.5      536.8     1994.5

Tape Time (hrs:min)        0:34       0:30       0:04
Tape Size (meg)         19112.2    16577.1     2535.2
Tape Used (%)              18.6       16.1        2.5   (level:#disks ...)
Filesystems Taped            12         11          1   (1:1)
Avg Tp Write Rate (k/s)  9584.5     9556.1     9775.1


NOTES:
planner: Adding new disk diablo:/dev/hde1.
planner: Full dump of gemini:/dev/sda5 promoted from 1 day ahead.
planner: Full dump of bob:/dev/sdc1 promoted from 1 day ahead.
planner: Full dump of bob:/dev/sda4 promoted from 1 day ahead.
planner: Full dump of gemini:/dev/sda6 promoted from 1 day ahead.
planner: Full dump of bob:/dev/sdb1 promoted from 1 day ahead.
planner: Full dump of bob:/dev/sda3 promoted from 1 day ahead.
planner: Full dump of gemini:/dev/sda3 promoted from 1 day ahead.
planner: Full dump of gemini:/dev/sda1 promoted from 1 day ahead.
planner: Full dump of bob:/dev/sda1 promoted from 1 day ahead.
planner: Full dump of diablo:/dev/hda3 promoted from 1 day ahead.
planner: Full dump of diablo:/dev/hda1 promoted from 1 day ahead.
taper: tape Loyalty003 kb 19570912 fm 12 [OK]


DUMP SUMMARY:
                                       DUMPER STATS                TAPER STATS
HOSTNAME DISK        L  ORIG-KB   OUT-KB  COMP%  MMM:SS    KB/s  MMM:SS    KB/s
-------------------------------------------------------------------------------
bob    /dev/sda1     0  1636282   490848   30.0   32:19   253.1    0:55  8997.6
bob    /dev/sda3     0   132318    27328   20.7    2:36   175.3    0:02 16471.5
bob    /dev/sda4     0  1241536   786336   63.3    6:30  2016.4    1:21  9731.9
bob    /dev/sdb1     0  1157677   577760   49.9   22:47   422.6    1:01  9507.0
bob    /dev/sdc1     0  7976233  4850560   60.8   39:50  2029.8    8:13  9831.1
bob    /dev/sdd1     1  4278194  2595968   60.7   21:42  1994.5    4:26  9775.0
diablo /dev/hda1     0  2113691   919424   43.5   30:23   504.4    1:43  8910.0
diablo /dev/hda3     0    30202     4672   15.5    0:17   269.1    0:01  4178.1
diablo /dev/hde1     0  FAILED ------------------------------------------------
gemini /dev/sda1     0  1717516   557536   32.5  123:15    75.4    1:08  8226.4
gemini /dev/sda3     0   953407   280096   29.4   27:42   168.5    0:44  6435.9
gemini /dev/sda5     0 13918795  8458624   60.8  211:53   665.4   14:23  9803.3
gemini /dev/sda6     0  1263950    21376    1.7   29:29    12.1    0:07  3192.7
(brought to you by Amanda version 2.4.2p2)






-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
Behalf Of greg
Sent: Friday, 6 September 2002 08:30
To: [EMAIL PROTECTED]
Subject: statistics


Does this look right?


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:06
Run Time (hrs:min)        11:48
Dump Time (hrs:min)       11:42      11:42       0:01
Output Size (meg)       24171.4    24171.4        0.0
Original Size (meg)     54320.4    54318.3        2.1
Avg Compressed Size (%)    44.5       44.5        1.5   (level:#disks ...)
Filesystems Dumped            3          2          1   (1:1)
Avg Dump Rate (k/s)       587.4      588.0        0.8

Tape Time (hrs:min)       11:41      11:41       0:00
Tape Size (meg)         24171.5    24171.5        0.1
Tape Used (%)              64.5       64.5        0.0   (level:#disks ...)
Filesystems Taped             3          2          1   (1:1)
Avg Tp Write Rate (k/s)   588.5      588.5       29.3


It is saying it took 11 hrs to dump 54GB or 24GB compressed.  I have a
quantum DLT8000-40 which is rated
at 6MB/s or 12MB/s compressed.  11hrs seems a long time even considering
gzip as the software compression.
Is there something I am missing here?

-greg




Re: statistics

2002-09-04 Thread Jay Lessert

On Thu, Sep 05, 2002 at 03:29:39PM -0700, greg wrote:
 Does this look right?
 
 STATISTICS:
                           Total       Full      Daily
                         --------   --------   --------
 Estimate Time (hrs:min)    0:06
 Run Time (hrs:min)        11:48
 Dump Time (hrs:min)       11:42      11:42       0:01
 Output Size (meg)       24171.4    24171.4        0.0
 Original Size (meg)     54320.4    54318.3        2.1
 Avg Compressed Size (%)    44.5       44.5        1.5   (level:#disks ...)
 Filesystems Dumped            3          2          1   (1:1)
 Avg Dump Rate (k/s)       587.4      588.0        0.8

 Tape Time (hrs:min)       11:41      11:41       0:00
 Tape Size (meg)         24171.5    24171.5        0.1
 Tape Used (%)              64.5       64.5        0.0   (level:#disks ...)
 Filesystems Taped             3          2          1   (1:1)
 Avg Tp Write Rate (k/s)   588.5      588.5       29.3
 
 
 It is saying it took 11 hrs to dump 54GB or 24GB compressed.  I have
 a quantum DLT8000-40 which is rated at 6MB/s or 12MB/s compressed.
 11hrs seems a long time even considering gzip as the software
 compression.  Is there something I am missing here?

It would be useful to see the DUMP SUMMARY: section, but the average
dump and tape rates *almost* match, so it looks very much like your two
full dumps were straight to tape, did not use holding disk, and were
rate-limited by gzip.

There are other things that could have limited the dump rate, but
gzip is the first place to look.  You can force fulls on the
same two file systems and run a top in the background, something
like:

% top -b -d300 -n500 > top.out &

...on the client(s) and server in question, and see how much CPU time the
gzip processes take.  If the gzip times are short, then I would
try running the backup processes (dump or tar or whatever) by
hand to /dev/null and see how long that takes.
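Once top.out exists, the gzip share can be totalled with a short awk
pass.  Here two synthetic top lines stand in for a real capture, and
the %CPU field position ($9) matches procps top in batch mode but can
vary between top versions, so check your own columns first:

```shell
# Sum the %CPU of all gzip processes seen in a batch-mode top capture.
# (A real top.out would come from: top -b -d300 -n500 > top.out &)
printf '123 amanda 20 0 1m 1m 1m R 12.5 0.1 1:00 gzip\n456 amanda 20 0 1m 1m 1m R 7.5 0.1 0:30 gzip\n' > top.out
grep gzip top.out | awk '{ cpu += $9 } END { print "total gzip %CPU:", cpu+0 }'
```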

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: statistics

2002-09-04 Thread Frank Smith

It all depends.  Does your largest disklist entry fit on your holding
disk?  Look further down in your daily report and look at the times
for dumper and taper for each disklist entry.  My guess is that you
are doing one or more direct to tape dumps across the network, which
can be extremely slow since the tape may have to repeatedly stop and
reposition itself as the data trickles in.


Frank

--On Thursday, September 05, 2002 15:29:39 -0700 greg [EMAIL PROTECTED] wrote:

 Does this look right?


 STATISTICS:
                           Total       Full      Daily
                         --------   --------   --------
 Estimate Time (hrs:min)    0:06
 Run Time (hrs:min)        11:48
 Dump Time (hrs:min)       11:42      11:42       0:01
 Output Size (meg)       24171.4    24171.4        0.0
 Original Size (meg)     54320.4    54318.3        2.1
 Avg Compressed Size (%)    44.5       44.5        1.5   (level:#disks ...)
 Filesystems Dumped            3          2          1   (1:1)
 Avg Dump Rate (k/s)       587.4      588.0        0.8

 Tape Time (hrs:min)       11:41      11:41       0:00
 Tape Size (meg)         24171.5    24171.5        0.1
 Tape Used (%)              64.5       64.5        0.0   (level:#disks ...)
 Filesystems Taped             3          2          1   (1:1)
 Avg Tp Write Rate (k/s)   588.5      588.5       29.3


 It is saying it took 11 hrs to dump 54GB or 24GB compressed.  I have a
 quantum DLT8000-40 which is rated
 at 6MB/s or 12MB/s compressed.  11hrs seems a long time even considering
 gzip as the software compression.
 Is there something I am missing here?

 -greg





--
Frank Smith[EMAIL PROTECTED]
Systems Administrator Voice: 512-374-4673
Hoover's Online Fax: 512-374-4501



Re: statistics

2002-09-04 Thread greg

You guys are right.  It is that the holding disk is not big enough, so
Amanda is not using it.  All the network data is rsync'd to this machine
and then backed up locally.  This is to have a running mirror and a tape
backup.  But I do believe the problem is the holding disk, which I am
working on fixing now.
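For reference, holding-disk capacity is declared in amanda.conf; a
sketch of a block sized like the 40G disk mentioned earlier in the
thread (name, path, and size are examples, not greg's actual config):

```
holdingdisk hd1 {
    comment "main holding disk"
    directory "/dumps/amanda"    # must have room for the largest dump image
    use 40 Gb                    # or a negative value to reserve free space
}
```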

Thanks

-greg



 It all depends.  Does your largest disklist entry fit on your holding
 disk?  Look further down in your daily report and look at the times
 for dumper and taper for each disklist entry.  My guess is that you
 are doing one or more direct to tape dumps across the network, which
 can be extremely slow since the tape may have to repeatedly stop and
 reposition itself as the data trickles in.






Re: statistics

2002-09-04 Thread Gene Heskett

On Wednesday 04 September 2002 18:53, Chris Herrmann wrote:
yes - that's right. the slow part is your computer compressing the
 data. The offline message is a yet to be crafted question for
 this list - amanda isn't backing up the reiserfs partition on one
 of the machines. The drive is a SDLT110, there are 3 servers, and
 a dedicated 40G holding disk. After setting it up this way, the
 performance of amanda skyrocketed (by having a dedicated holding
 disk, fast backup drive, one server getting multiple clients to
 do client-side compression).

Re reiserfs.  If you are using dump, it knows nothing about 
reiserfs.  But late (1.13-25) versions of tar should be just fine.

[...]

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.13% setiathome rank, not too shabby for a WV hillbilly



RE: statistics

2002-09-04 Thread Chris Herrmann

Thanks Gene! Hopefully you've answered my question without me needed to ask
it :o)

I'm assuming I need to upgrade tar on the client machine, not the amanda
server...?

Are there any source tarballs of tar floating around? The tar located:

ftp://ftp.gnu.org/gnu/tar/

is 1.13.19

which comes with RH7.1... there's an rpm for 1.13.25 but i'll need to
upgrade some libs
error: failed dependencies:
libc.so.6(GLIBC_2.2.3)   is needed by tar-1.13.25-4

which I'm not keen to do unless I really have to.

Cheers,

Chris



|-Original Message-
|From: [EMAIL PROTECTED]
|[mailto:[EMAIL PROTECTED]]On Behalf Of Gene Heskett
|Sent: Thursday, 5 September 2002 12:26
|To: Chris Herrmann; 'greg'; [EMAIL PROTECTED]
|Subject: Re: statistics
|
|
|On Wednesday 04 September 2002 18:53, Chris Herrmann wrote:
|yes - that's right. the slow part is your computer compressing the
| data. The offline message is a yet to be crafted question for
| this list - amanda isn't backing up the reiserfs partition on one
| of the machines. The drive is a SDLT110, there are 3 servers, and
| a dedicated 40G holding disk. After setting it up this way, the
| performance of amanda skyrocketed (by having a dedicated holding
| disk, fast backup drive, one server getting multiple clients to
| do client-side compression).
|
|Re reiserfs.  If you are using dump, it knows nothing about
|reiserfs.  But late (1.13-25) versions of tar should be just fine.
|
|[...]
|
|--
|Cheers, Gene
|AMD K6-III@500mhz 320M
|Athlon1600XP@1400mhz  512M
|99.13% setiathome rank, not too shabby for a WV hillbilly
|




Re: Script for tape usage statistics

2001-08-03 Thread Johannes Niess

John R. Jackson [EMAIL PROTECTED] writes:

[...]

 Please feel free to modify the script. May it save you from worn out
 tapes at restore time.
 
 A couple of thoughts.  Once you know the config directory from above,
 you might just cd there to avoid the duplicated names throughout the
 script, and it also does some minimal permissions checking.  This is
 the technique most of the Amanda scripts already use.
 
   cd $config_dir/$1 || exit 1
 
 Using the following:
 
   sort +0rn -1 +1
 
 as the last sort shows the entries in decreasing reference count but
 increasing label name, which might be a bit more readable.
 
 Johannes Niess

John,

Thank you for the tips.  I know that my script is not the best example
of a shell script ever written.  Emphasis for this first implementation
was on the main algorithm.  I'll wait a couple of days for other comments
and post an improved version.

Johannes Niess



Script for tape usage statistics

2001-08-02 Thread Johannes Niess

Hi,

This little script can be used to keep track of the number of tape
usages.  I use it with recent GNU tools on a Linux machine.

Multiple script calls per configuration and day do no harm, thanks to
uniq and file time comparison.  The calling user needs write access
to the config dir in /etc/amanda/config.  I looked at amgetconf to
find the configuration directory but could not find the right entry in
amanda.conf.

What about adding the output of this skript to the daily mail? At the
moment I have to add it to the cron job and might miss amflush'es.

Please feel free to modify the script. May it save you from worn out
tapes at restore time.

Johannes Niess

P.S.: Keep an eye on line endings for the commands.


#! /bin/sh
# Made by Johannes Niess (mail: [EMAIL PROTECTED])
# Report how many times each tape list entry has been updated.
if test -z "$1" ; then
    echo "amanda-tapestats reports the number of times the tape list entry has been updated."
    echo "The script has to be run at least once after each update."
    echo "Usage: amanda-tapestats <amanda-configuration>"
    exit 1
fi

# Only rebuild when the current tapelist is newer than our accumulated copy.
if test /etc/amanda/$1/tapelist.long -ot /etc/amanda/$1/tapelist
then
    echo "Amanda tape usage count"
    # Collect every (datestamp, label) pair ever seen, without duplicates.
    cut -d " " -f 1,2 /etc/amanda/$1/tapelist* | sort | uniq > tapelist.$$ &&
        mv tapelist.$$ /etc/amanda/$1/tapelist.long
    # Count updates per label, most-used tapes first.
    cut -f 2 -d " " /etc/amanda/$1/tapelist.long | sort | uniq -c | sort -rn > /etc/amanda/$1/tapecount &&
        cat /etc/amanda/$1/tapecount
fi
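The counting pipeline at the heart of the script can be seen in
isolation on a synthetic tapelist.long (one datestamp and label per
line; the labels are made up):

```shell
# Count how many times each tape label appears, most-used first;
# this is the cut | sort | uniq -c | sort -rn stage of the script.
printf '20010801 DailySet05\n20010802 DailySet05\n20010803 DailySet06\n' |
    cut -f 2 -d " " | sort | uniq -c | sort -rn
```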