Re: can't kill a non-numeric process ID -- selecting specific run for amstatus

2019-11-05 Thread Chris Hoogendyk



On 9/5/19 5:25 PM, Nathan Stratton Treadway wrote:

On Thu, Sep 05, 2019 at 14:12:29 -0400, Chris Hoogendyk wrote:

Although my email said that the report was from the September 4th
run, amstatus said it was giving me information about the September
4th run. What about the others?

[...]

As a follow-up question, how do I get amstatus to tell me about
multiple parallel runs and then deal with those individually?


Short answer: use the --file option on "amstatus", after looking in
/var/log/amanda/ (or whatever your "logdir" is pointed to) to
find the specific amdump.* (or amflush.*) file you want to look
at.

I don't have a lot of experience with parallel runs, but off hand I
don't believe there is a way to get amstatus to do "show me the amdump
sessions that are currently running" -- you have to figure that out some
other way, or run amstatus on each file and look through the output to
see which ones are still underway (e.g. the "taped" DLE counts are less
than the total number of DLEs or whatever).

Nathan

Nathan Stratton Treadway - natha...@ontko.com - Mid-Atlantic region Ray
Ontko & Co.  - Software consulting services - http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: can't kill a non-numeric process ID -- selecting specific run for amstatus

2019-11-05 Thread Chris Hoogendyk
Just to follow up on this. I haven't had this happen too often, so it hasn't come up since Nathan's 
suggestion in this message – until this weekend. My run from Monday evening returned early with a 
status indicating that there was no tape drive available and no holding space left. This is on a 
system with two LTO7 tape drives. So, seems like I had some things hung up.


$ amstatus daily

only gives me the latest run, for which I already have the email report. `ps -ef | grep amanda` 
shows that I have two amanda runs active. The dumper0 through 9 processes show the log filename; so, 
I can sort the two runs based on that, and then the process IDs and parent process IDs point me back 
to the instance of the driver that spawned the other processes.


In my case, /usr/local/etc/amanda/daily/log/... is where I looked to find the appropriate amdump or 
amflush files based on the date matching what I see in the `ps -ef`. Then,


    $ amstatus --file=amdump.20191103233003 daily

gave me the report I wanted for that run. After I saw that it was indeed hung, a simple kill of the 
driver process for that instance of amanda terminated everything associated with that run and 
resulted in the appropriate email report being sent.
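The sort-the-processes-by-log-filename step can be sketched roughly as follows. This is an illustrative reimplementation, not part of Amanda: the `ps -ef` column layout and the process names in the sample are assumptions modelled on the description above.

```python
import re

def group_runs(ps_output):
    """Group amanda dumper processes by the amdump/amflush log file that
    appears on their command line, so parallel runs can be told apart.
    Returns {logfile: [(pid, ppid), ...]}."""
    runs = {}
    for line in ps_output.splitlines():
        # Match an amdump.<timestamp> or amflush.<timestamp> token, if any
        m = re.search(r"\b(amdump\.\d+|amflush\.\d+)\b", line)
        if not m:
            continue
        cols = line.split()
        pid, ppid = cols[1], cols[2]   # ps -ef: UID PID PPID ...
        runs.setdefault(m.group(1), []).append((pid, ppid))
    return runs

# Hypothetical ps -ef excerpt with two parallel runs:
sample = """amanda  1210  1201  0 23:30 ?  00:00:05 dumper0 daily amdump.20191103233003
amanda  1211  1201  0 23:30 ?  00:00:04 dumper1 daily amdump.20191103233003
amanda  2301  2290  0 01:10 ?  00:00:02 dumper0 daily amdump.20191104011000
"""
for logfile, procs in group_runs(sample).items():
    print(logfile, procs)
```

Within each group the shared PPID points back at the driver instance for that run, which is the process to kill if the run is hung.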


Back to a clean slate now.

Thank you Nathan!


(Oh, and sorry about the blank email earlier. I clicked on the window to bring it to the front and 
accidentally hit Send.)



On 9/5/19 5:25 PM, Nathan Stratton Treadway wrote:

On Thu, Sep 05, 2019 at 14:12:29 -0400, Chris Hoogendyk wrote:

Although my email said that the report was from the September 4th
run, amstatus said it was giving me information about the September
4th run. What about the others?

[...]

As a follow-up question, how do I get amstatus to tell me about
multiple parallel runs and then deal with those individually?


Short answer: use the --file option on "amstatus", after looking in
/var/log/amanda/ (or whatever your "logdir" is pointed to) to
find the specific amdump.* (or amflush.*) file you want to look
at.

I don't have a lot of experience with parallel runs, but off hand I
don't believe there is a way to get amstatus to do "show me the amdump
sessions that are currently running" -- you have to figure that out some
other way, or run amstatus on each file and look through the output to
see which ones are still underway (e.g. the "taped" DLE counts are less
than the total number of DLEs or whatever).

Nathan

Nathan Stratton Treadway - natha...@ontko.com - Mid-Atlantic region Ray
Ontko & Co.  - Software consulting services - http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~ - University of Massachusetts, Amherst



---

Erdös 4



Re: can't kill a non-numeric process ID -- selecting specific run for amstatus

2019-09-05 Thread Nathan Stratton Treadway
On Thu, Sep 05, 2019 at 14:12:29 -0400, Chris Hoogendyk wrote:
> Although my email said that the report was from the September 4th
> run, amstatus said it was giving me information about the September
> 4th run. What about the others?
[...]
> As a follow-up question, how do I get amstatus to tell me about
> multiple parallel runs and then deal with those individually?
> 

Short answer: use the --file option on "amstatus", after looking in
/var/log/amanda/ (or whatever your "logdir" is pointed to) to
find the specific amdump.* (or amflush.*) file you want to look
at.

I don't have a lot of experience with parallel runs, but off hand I
don't believe there is a way to get amstatus to do "show me the amdump
sessions that are currently running" -- you have to figure that out some
other way, or run amstatus on each file and look through the output to
see which ones are still underway (e.g. the "taped" DLE counts are less
than the total number of DLEs or whatever).
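A small sketch of the "run amstatus on each file" approach described above. The logdir path and config name are assumptions for illustration (check your amanda.conf); the script only prints the amstatus invocations, newest run first, rather than running them.

```python
import glob
import os

def status_commands(logdir, config):
    """Return an 'amstatus --file=...' command line for each amdump.*
    or amflush.* log file under logdir, newest first."""
    candidates = glob.glob(os.path.join(logdir, "amdump.*")) + \
                 glob.glob(os.path.join(logdir, "amflush.*"))
    candidates.sort(key=os.path.getmtime, reverse=True)
    return ["amstatus --file=%s %s" % (path, config) for path in candidates]

# Typical use (path and config name are assumptions, adjust for your site):
for cmd in status_commands("/var/log/amanda/daily", "daily"):
    print(cmd)
```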

Nathan

Nathan Stratton Treadway - natha...@ontko.com - Mid-Atlantic region Ray
Ontko & Co.  - Software consulting services - http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: amstatus error

2019-09-04 Thread Nuno Dias
 The problem I have is due to Windows backups made with Zmanda: if I delete
those backups from the holding disk, amstatus works as expected!

Cheers,
Nuno

On Sun, 2019-08-11 at 09:29 +0200, Stefan G. Weichinger wrote:
> On 02.08.19 at 10:56, Nuno Dias wrote:
> >  Hi,
> > 
> >  Since two days ago I have this error every time I try to run
> > amstatus
> > 
> > amstatus: bad status on taper SHM-WRITE (taper): 16 at
> > /usr/lib64/perl5/vendor_perl/Amanda/Status.pm line 935, <$fd> line
> > 9210.
> > 
> >  I can see that the amdump is still running, the taper process also
> > running, and I see no errors in logs about the tape.
> > 
> >  How do I solve this issue? Does anyone know?
> 
> I don't know but I also notice that the output of amstatus changed
> ... I
> don't get any (perl) errors but the whole information around the
> taper
> seems not to be updated.
> 
> A run over 5 tapes still shows "searching for a tape" while I can see
> that it wrote to several tapes already (by looking at the tape
> inventory
> and the fact it changed through tapes).
> 
> debian-9.9 here, amanda-3.5.1
> 
> Your error message also points to a line mentioning taper problems.
> 
> 
-- 
Nuno Dias 
LIP




Re: amstatus error

2019-08-11 Thread Stefan G. Weichinger
On 02.08.19 at 10:56, Nuno Dias wrote:
>  Hi,
> 
>  Since two days ago I have this error every time I try to run amstatus
> 
> amstatus: bad status on taper SHM-WRITE (taper): 16 at
> /usr/lib64/perl5/vendor_perl/Amanda/Status.pm line 935, <$fd> line
> 9210.
> 
>  I can see that the amdump is still running, the taper process also
> running, and I see no errors in logs about the tape.
> 
>  How do I solve this issue? Does anyone know?

I don't know but I also notice that the output of amstatus changed ... I
don't get any (perl) errors but the whole information around the taper
seems not to be updated.

A run over 5 tapes still shows "searching for a tape" while I can see
that it wrote to several tapes already (by looking at the tape inventory
and the fact it changed through tapes).

debian-9.9 here, amanda-3.5.1

Your error message also points to a line mentioning taper problems.




amstatus error

2019-08-02 Thread Nuno Dias
 Hi,

 Since two days ago I have this error every time I try to run amstatus

amstatus: bad status on taper SHM-WRITE (taper): 16 at
/usr/lib64/perl5/vendor_perl/Amanda/Status.pm line 935, <$fd> line
9210.

 I can see that the amdump is still running, the taper process is also
running, and I see no errors in the logs about the tape.

 How do I solve this issue? Does anyone know?

Thanks,
Nuno
-- 
Nuno Dias 
LIP




Re: amstatus lapsed time

2018-12-04 Thread Chris Nighswonger
On Tue, Dec 4, 2018 at 6:37 PM Uwe Menges  wrote:

> On 12/4/18 5:41 PM, Chris Nighswonger wrote:
> > So it appears that the
> > 11:29:10 part is nearly correct, but the 1+ part is clearly not.
>
> The last column is the current time for running dumps, or the last time
> it was dumping for DLEs that are already dumped (== finish time).
>
> There is special coding in place to put N+ in front of the current time
> if it was started on N day(s) before now, see
>
> https://github.com/zmanda/amanda/blob/b2fd140efb54e1ebca464f0a9ff407460d8c350b/perl/Amanda/Status.pm#L2301
>
>
The approach there is confusing at best for the unsuspecting. For example,
if a backup job starts at 23:59 one day and finishes at 0:59 the next, the
result shows 1+ 0:59, which seems to imply it ran for 24 hours plus 59
minutes. Reading the code (along with your explanation) clears this up: what
is actually shown is that the dump finished at 12:59 a.m. on the day after
it was started.

I notice that the man page for amstatus does not cover the output format. A
header line like that given in amreport might be helpful in avoiding such
confusion. Something like this:
https://github.com/zmanda/amanda/blob/b2fd140efb54e1ebca464f0a9ff407460d8c350b/perl/Amanda/Report/human.pm#L386

Kind regards,
Chris


Re: amstatus lapsed time

2018-12-04 Thread Uwe Menges
On 12/4/18 5:41 PM, Chris Nighswonger wrote:
> So when I run amstatus  I see stuff like this:
> 
> From Mon Dec  3 22:00:01 EST 2018
> 
> 
> 
> 1359952k dumping   533792k (148.30%) (1+11:29:10)
> 
> For reference:
> 
> backup@scriptor:~ date
> Tue Dec  4 11:37:07 EST 2018
> 
> I'm assuming that the last field is the time the dumper has been
> running. But that is impossible as this dump began at 2200 yesterday and
> amstatus was run at 1129 today (12:29 later). So it appears that the
> 11:29:10 part is nearly correct, but the 1+ part is clearly not.

The last column is the current time for running dumps, or the last time
it was dumping for DLEs that are already dumped (== finish time).

There is special coding in place to put N+ in front of the current time
if it was started on N day(s) before now, see
https://github.com/zmanda/amanda/blob/b2fd140efb54e1ebca464f0a9ff407460d8c350b/perl/Amanda/Status.pm#L2301
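In other words, the last column is a clock timestamp, not a duration. A sketch of that rule (a Python reimplementation for illustration, not the actual Perl code; the calendar-day interpretation of N is an assumption based on the explanation above):

```python
from datetime import datetime

def status_time(start, now):
    """Render the last amstatus column: the wall-clock time 'now',
    prefixed with 'N+' when 'now' falls N calendar days after the
    dump started."""
    days = (now.date() - start.date()).days
    clock = now.strftime("%H:%M:%S")
    return "%d+%s" % (days, clock) if days > 0 else clock

# The case from this thread: run started Mon Dec 3 22:00, amstatus run
# Tue Dec 4 11:29:10 -> shown as 1+11:29:10
print(status_time(datetime(2018, 12, 3, 22, 0, 1),
                  datetime(2018, 12, 4, 11, 29, 10)))  # 1+11:29:10
```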

Yours, Uwe


Re: amstatus lapsed time

2018-12-04 Thread Chris Nighswonger
On Tue, Dec 4, 2018 at 1:51 PM Debra S Baddorf  wrote:

> > On Dec 4, 2018, at 12:49 PM, Chris Nighswonger <
> cnighswon...@foundations.edu> wrote:
> >
> > On Tue, Dec 4, 2018 at 1:44 PM Debra S Baddorf  wrote:
> > Well, for starters,  2200 to 1129  is 13:29  so that doesn’t match up
> either.
> >
> > Talk about bad math
> >
> > Not too loud... I'll lose my job teaching math. ;-)
>
> LOL!   Well,  I was your A++ pupil,  so don’t feel too bad.  I had already
> read the book,
> and so could pay attention to the derivations the prof was doing,  and
> correct his errors.
> They appreciated it,  so the end of the derivation would work right.


I wonder if it is only coincidental that I was grading Physics tests
today

Way too many numbers in any case. Better go back and check over the grades.

Chris


Re: amstatus lapsed time

2018-12-04 Thread Debra S Baddorf



> On Dec 4, 2018, at 12:49 PM, Chris Nighswonger  
> wrote:
> 
> On Tue, Dec 4, 2018 at 1:44 PM Debra S Baddorf  wrote:
> Well, for starters,  2200 to 1129  is 13:29  so that doesn’t match up either. 
> 
> Talk about bad math
> 
> Not too loud... I'll lose my job teaching math. ;-)


LOL!   Well,  I was your A++ pupil,  so don’t feel too bad.  I had already read 
the book,
and so could pay attention to the derivations the prof was doing,  and correct 
his errors.
They appreciated it,  so the end of the derivation would work right.

Deb Baddorf



Re: amstatus lapsed time

2018-12-04 Thread Chris Nighswonger
On Tue, Dec 4, 2018 at 1:44 PM Debra S Baddorf  wrote:

> Well, for starters,  2200 to 1129  is 13:29  so that doesn’t match up
> either.
>

Talk about bad math

Not too loud... I'll lose my job teaching math. ;-)


Re: amstatus lapsed time

2018-12-04 Thread Debra S Baddorf
Well, for starters,  2200 to 1129  is 13:29  so that doesn’t match up either.   
Somebody who knows more?
Deb Baddorf

> On Dec 4, 2018, at 10:41 AM, Chris Nighswonger  
> wrote:
> 
> So when I run amstatus  I see stuff like this:
> 
> From Mon Dec  3 22:00:01 EST 2018
> 
> 
> 
> 1359952k dumping   533792k (148.30%) (1+11:29:10)
> 
> For reference:
> 
> backup@scriptor:~ date
> Tue Dec  4 11:37:07 EST 2018
> 
> I'm assuming that the last field is the time the dumper has been running. But 
> that is impossible as this dump began at 2200 yesterday and amstatus was run 
> at 1129 today (12:29 later). So it appears that the 11:29:10 part is nearly 
> correct, but the 1+ part is clearly not. I've noted in amreport  that 
> the lapsed time estimates appear grossly incorrect as well, i.e.:
> 
>                          Total      Full     Incr.  Level:#
>                         -------   -------   ------  -------
> Estimate Time (hrs:min)    0:07
> Run Time (hrs:min)         1:57
> Dump Time (hrs:min)       28:13     26:27     1:46
> 
> Perhaps "Dump Time" is cumulative across all dumpers? That still does not 
> seem to explain the amstatus discrepancy.
> 
> What am I missing?
> 
> Kind regards,
> Chris




amstatus lapsed time

2018-12-04 Thread Chris Nighswonger
So when I run amstatus  I see stuff like this:

From Mon Dec  3 22:00:01 EST 2018



1359952k dumping   533792k (148.30%) (1+11:29:10)

For reference:

backup@scriptor:~ date
Tue Dec  4 11:37:07 EST 2018

I'm assuming that the last field is the time the dumper has been running.
But that is impossible as this dump began at 2200 yesterday and amstatus
was run at 1129 today (12:29 later). So it appears that the 11:29:10 part
is nearly correct, but the 1+ part is clearly not. I've noted in amreport
that the lapsed time estimates appear grossly incorrect as well, i.e.:

                         Total      Full     Incr.  Level:#
                        -------   -------   ------  -------
Estimate Time (hrs:min)    0:07
Run Time (hrs:min)         1:57
Dump Time (hrs:min)       28:13     26:27     1:46

Perhaps "Dump Time" is cumulative across all dumpers? That still does not
seem to explain the amstatus discrepancy.

What am I missing?

Kind regards,
Chris
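The "cumulative across all dumpers" hypothesis in the message above is easy to sanity-check arithmetically. The average-dumper figure below is an inference from the quoted report, not a value the report states:

```python
# If "Dump Time" sums the time spent by every dumper, dividing it by the
# wall-clock "Run Time" estimates how many dumpers were busy on average.
dump_minutes = 28 * 60 + 13   # Dump Time (hrs:min) 28:13
run_minutes = 1 * 60 + 57     # Run Time  (hrs:min)  1:57
avg_dumpers = round(dump_minutes / run_minutes, 1)
print(avg_dumpers)            # roughly 14 concurrently busy dumpers
```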


Re: amstatus notation

2011-06-13 Thread Jean-Louis Martineau

Tim Dunphy wrote:

hello all..

 I was hoping that someone could please explain the following entry in an 
amstatus I just saw.

 fs.sever1.example.com:/ebs 1 0m finished 
(1+23:10:15)


 Specifically what is meant by:

 (1+23:10:15)
  


1 day, 23 hours, 10 minutes, 15 seconds

Jean-Louis


amstatus notation

2011-06-11 Thread Tim Dunphy
hello all..

 I was hoping that someone could please explain the following entry in an 
amstatus I just saw.

 fs.sever1.example.com:/ebs 1 0m finished 
(1+23:10:15)


 Specifically what is meant by:

 (1+23:10:15)

Thanks in advance!

tim


Re: Warning messages from amstatus

2010-12-13 Thread Jack O'Connell

Thanks.

On Dec 13, 2010, at 10:39 AM, Jean-Louis Martineau wrote:


The amdump log file is corrupted.
The attached patch fix it for newer log.

Jean-Louis

Jack O'Connell wrote:

Hello,
After upgrading to v3.2.0, amstatus intermittently generates the  
following warning messages followed by a status report that is not  
consistent with amreport or amoverview reports of successful  
completion.


** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 507,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 508,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 511,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 512,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 538,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 540,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 546,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 551,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 552,  line 3962.



Thanks,
Jack O'Connell
ALCF - Storage
(630)252-3610
joco...@alcf.anl.gov <mailto:joco...@alcf.anl.gov>







Index: ChangeLog
===
--- ChangeLog	(revision 3576)
+++ ChangeLog	(revision 3580)
@@ -1,3 +1,9 @@
+2010-10-29  Jean-Louis Martineau 
+	* server-src/amflush.c: Open 'amflush' log file in append mode.
+
+2010-10-29  Jean-Louis Martineau 
+	* server-src/amdump.pl: Open 'amdump' log file in append mode.
+
 2010-10-28  Jean-Louis Martineau 
 	* common-src/conffile.c: Fix quoting in recovery-limit output.
 	* server-src/amadmin.c (disklist_one): Print recovery-limit.
Index: server-src/amflush.c
===
--- server-src/amflush.c	(revision 3576)
+++ server-src/amflush.c	(revision 3580)
@@ -681,7 +681,7 @@
 
 fflush(stdout); fflush(stderr);
 errfile = vstralloc(conf_logdir, "/amflush", NULL);
-if((fderr = open(errfile, O_WRONLY| O_CREAT | O_TRUNC, 0600)) == -1) {
+if((fderr = open(errfile, O_WRONLY| O_APPEND | O_CREAT | O_TRUNC, 0600)) == -1) {
 	error(_("could not open %s: %s"), errfile, strerror(errno));
 	/*NOTREACHED*/
 }
Index: server-src/amdump.pl
===
--- server-src/amdump.pl	(revision 3576)
+++ server-src/amdump.pl	(revision 3580)
@@ -192,7 +192,8 @@
 # undef first.. stupid perl.
 debug("beginning amdump log");
 $amdump_log = undef;
-open($amdump_log, ">", $amdump_log_filename)
+# Must be opened in append so that all subprocess can write to it.
+open($amdump_log, ">>", $amdump_log_filename)
 	or die("could not open amdump log file '$amdump_log_filename': $!");
 }



Thanks,
Jack O'Connell
ALCF - Storage
(630)252-3610
joco...@alcf.anl.gov







Re: Warning messages from amstatus

2010-12-13 Thread Jean-Louis Martineau

The amdump log file is corrupted.
The attached patch fix it for newer log.

Jean-Louis

Jack O'Connell wrote:

Hello,
After upgrading to v3.2.0, amstatus intermittently generates the 
following warning messages followed by a status report that is not 
consistent with amreport or amoverview reports of successful completion.


** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 507,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 508,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 511,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 512,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 538,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 540,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 546,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 551,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 552,  line 3962.



Thanks,
Jack O'Connell
ALCF - Storage
(630)252-3610
joco...@alcf.anl.gov <mailto:joco...@alcf.anl.gov>







Index: ChangeLog
===
--- ChangeLog	(revision 3576)
+++ ChangeLog	(revision 3580)
@@ -1,3 +1,9 @@
+2010-10-29  Jean-Louis Martineau 
+	* server-src/amflush.c: Open 'amflush' log file in append mode.
+
+2010-10-29  Jean-Louis Martineau 
+	* server-src/amdump.pl: Open 'amdump' log file in append mode.
+
 2010-10-28  Jean-Louis Martineau 
 	* common-src/conffile.c: Fix quoting in recovery-limit output.
 	* server-src/amadmin.c (disklist_one): Print recovery-limit.
Index: server-src/amflush.c
===
--- server-src/amflush.c	(revision 3576)
+++ server-src/amflush.c	(revision 3580)
@@ -681,7 +681,7 @@
 
 fflush(stdout); fflush(stderr);
 errfile = vstralloc(conf_logdir, "/amflush", NULL);
-if((fderr = open(errfile, O_WRONLY| O_CREAT | O_TRUNC, 0600)) == -1) {
+if((fderr = open(errfile, O_WRONLY| O_APPEND | O_CREAT | O_TRUNC, 0600)) == -1) {
 	error(_("could not open %s: %s"), errfile, strerror(errno));
 	/*NOTREACHED*/
 }
Index: server-src/amdump.pl
===
--- server-src/amdump.pl	(revision 3576)
+++ server-src/amdump.pl	(revision 3580)
@@ -192,7 +192,8 @@
 # undef first.. stupid perl.
 debug("beginning amdump log");
 $amdump_log = undef;
-open($amdump_log, ">", $amdump_log_filename)
+# Must be opened in append so that all subprocess can write to it.
+open($amdump_log, ">>", $amdump_log_filename)
 	or die("could not open amdump log file '$amdump_log_filename': $!");
 }
 


Re: Warning messages from amstatus

2010-12-13 Thread Marc Muehlfeld

On 13.12.2010 at 16:34, Jack O'Connell wrote:

** (process:26492): WARNING **: Use of uninitialized value in hash element at
/usr/sbin/amstatus line 507,  line 3956.


I can confirm this.

I've read that a patch has already been committed. But it would be nice if a 
developer could provide the patch for download, because I don't know how to 
get it for the current version from the repository.




Regards,
Marc


Warning messages from amstatus

2010-12-13 Thread Jack O'Connell

Hello,
	After upgrading to v3.2.0, amstatus intermittently generates the  
following warning messages followed by a status report that is not  
consistent with amreport or amoverview reports of successful completion.


** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 507,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 508,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 509,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 511,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 512,  line 3956.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 538,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 540,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line 544,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 546,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 551,  line 3962.
** (process:26492): WARNING **: Use of uninitialized value in hash element at /usr/sbin/amstatus line 552,  line 3962.



Thanks,
Jack O'Connell
ALCF - Storage
(630)252-3610
joco...@alcf.anl.gov







Re: Use of uninitialized value in numeric ne (!=) at /usr/sbin/amstatus line 1122.

2010-11-26 Thread Jean-Louis Martineau

Send me the amdump or amdump.1 log file.

Jean-Louis

Marc Muehlfeld wrote:

Hi,

Today I saw amstatus showing a warning:

...
genome.mr.lfmg.de:/shares/AppleBackUp   1   14m waiting for dumping
genome.mr.lfmg.de:/shares/IMGM  4  395m waiting for dumping

** (process:15658): WARNING **: Use of uninitialized value in numeric 
ne (!=) at /usr/sbin/amstatus line 1122.


genome.mr.lfmg.de:/shares/IMGM/04_Aufträge  6  391m waiting for dumping
genome.mr.lfmg.de:/shares/IT1  263m waiting for dumping
...


I saw this warning for the first time today.

Amanda 3.2.0


Regards,
Marc






Use of uninitialized value in numeric ne (!=) at /usr/sbin/amstatus line 1122.

2010-11-26 Thread Marc Muehlfeld

Hi,

Today I saw amstatus showing a warning:

...
genome.mr.lfmg.de:/shares/AppleBackUp   1   14m waiting for dumping
genome.mr.lfmg.de:/shares/IMGM  4  395m waiting for dumping

** (process:15658): WARNING **: Use of uninitialized value in numeric ne (!=) 
at /usr/sbin/amstatus line 1122.


genome.mr.lfmg.de:/shares/IMGM/04_Aufträge  6  391m waiting for dumping
genome.mr.lfmg.de:/shares/IT1  263m waiting for dumping
...


I saw this warning for the first time today.

Amanda 3.2.0


Regards,
Marc


--
Marc Muehlfeld (IT-Leiter)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-780
http://www.medizinische-genetik.de


Re: amstatus throwing Warnings of uninitialized values

2010-10-30 Thread Jean-Louis Martineau

The previous patch does not work; the files are already in unbuffered mode.
The file needs to be opened in append mode so that each process can
append to it.

The newest patch fixes it.

Jean-Louis
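As a toy illustration of the open-mode difference the patch changes (">" versus ">>"): this is not Amanda code, and it reduces the real concurrent-writer problem to sequential re-opens, but it shows why a truncating open loses earlier writers' output while append mode preserves it.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "amdump.test")

# Truncate mode ("w", like Perl's ">"): each opener wipes the file.
with open(path, "w") as f:
    f.write("driver: start\n")
with open(path, "w") as f:          # second opener truncates again
    f.write("dumper0: start\n")
truncated = open(path).read()       # only the second writer's line is left

os.remove(path)

# Append mode ("a", like Perl's ">>"): writes accumulate instead.
with open(path, "a") as f:
    f.write("driver: start\n")
with open(path, "a") as f:
    f.write("dumper0: start\n")
appended = open(path).read()        # both lines survive

print(repr(truncated))
print(repr(appended))
```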

Jean-Louis Martineau wrote:
The bug is not in the amstatus program. The current log file is 
corrupted because many programs write to it at the same time and they 
use buffered output.


Try the attached patch; it makes the file descriptor unbuffered. You 
should not see that error on the following run.


Jean-Louis
diff --git a/server-src/amdump.pl b/server-src/amdump.pl
index 383fa85..2f51ce5 100644
--- a/server-src/amdump.pl
+++ b/server-src/amdump.pl
@@ -192,7 +192,7 @@ sub start_logfiles {
 # undef first.. stupid perl.
 debug("beginning amdump log");
 $amdump_log = undef;
-open($amdump_log, ">", $amdump_log_filename)
+open($amdump_log, ">>", $amdump_log_filename)
 	or die("could not open amdump log file '$amdump_log_filename': $!");
 }
 


Re: amstatus throwing Warnings of uninitialized values

2010-10-28 Thread Jean-Louis Martineau
The bug is not in the amstatus program. The current log file is 
corrupted because many programs write to it at the same time and they use 
buffered output.


Try the attached patch; it makes the file descriptor unbuffered. You 
should not see that error on the following run.


Jean-Louis


Dennis Benndorf wrote:

Hello,

after upgrading to 3.2.0 on the server, `amstatus config` is throwing the 
following messages:




** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 511.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 511.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 511.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 515.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 515.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 515.
** (process:20136): WARNING **: Use of uninitialized value within %getest in string eq at /usr/local/sbin/amstatus line 269,  line 515.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 516.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 516.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 516.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 659.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 659.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 659.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 659.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 660.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 660.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 660.
** (process:20136): WARNING **: Use of uninitialized value within %getest in string eq at /usr/local/sbin/amstatus line 269,  line 660.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 661.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 661.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 661.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 696.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 696.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 696.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 696.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 798.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 798.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 798.
** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 798.
** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 799.


*

Re: amstatus throwing Warnings of uninitialized values

2010-10-27 Thread Jean-Louis Martineau

Send me the amdump.? file for which you get the error.

Jean-Louis

Dennis Benndorf wrote:

Hello,

after upgrading to 3.2.0 on the server `amstatus config` is throwing the 
following messages:




** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 511.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 511.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 511.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 515.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 515.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 515.

** (process:20136): WARNING **: Use of uninitialized value within %getest in string eq at /usr/local/sbin/amstatus line 269,  line 515.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 516.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 516.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 516.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 659.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 659.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 659.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 659.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 660.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 660.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 660.

** (process:20136): WARNING **: Use of uninitialized value within %getest in string eq at /usr/local/sbin/amstatus line 269,  line 660.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 661.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 661.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 661.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 696.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 696.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 696.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 696.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 798.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 798.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in substitution (s///) at /usr/local/sbin/amstatus line 268,  line 798.

** (process:20136): WARNING **: Use of uninitialized value $getest{"***"...} in string eq at /usr/local/sbin/amstatus line 269,  line 798.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 266,  line 799.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr/local/sbin/amstatus line 267,  line 799.

** (process:20136): WARNING **: Use of uninitialized value in substitution (s///) at /usr
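
[Editor's note: these warnings are Perl complaining that amstatus runs s/// and string eq on %getest entries that were never initialized for some DLEs. The usual remedy is to test definedness before touching the value. A minimal sketch of that guard, written in Python for illustration; the function name, hash contents, and regex are hypothetical, not Amanda code:]

```python
import re

def clean_estimate(getest, key):
    """Guarded lookup: only substitute on the entry if it exists.

    Mirrors guarding Perl's s/// with 'if defined'; an undefined
    entry is skipped quietly instead of triggering a warning.
    """
    value = getest.get(key)
    if value is None:   # undefined entry: return a default, no warning
        return ""
    return re.sub(r"\s+$", "", value)   # trim trailing whitespace

print(clean_estimate({"host:/var": "1292 "}, "host:/var"))  # prints 1292
print(clean_estimate({}, "host:/etc"))                      # prints an empty line
```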

Re: amstatus tape usage enhancement

2010-07-26 Thread Jean-Francois Malouin
* Jean-Francois Malouin  [20100721 
17:56]:
> * Jean-Louis Martineau  [20100721 16:04]:
> > Was the dump done with the patched taper?
> 
> ah! I see what you mean now. I overlooked the fact that
> the patch was modifying the taper, not just amstatus,
> so I just recompiled but did not install anything.
> I just ran the new amstatus in the source dir, i.e.
> amanda-3.1.1/server-src/amstatus.
> 
> I can't do anything right now as amanda is busy on the
> servers running 3.1.1. I'll let you know later.


Back from a small break...
Thanks Jean-Louis, the patched amstatus and taper do the job!

jf

> 
> thanks,
> jf
> 
> 
> >
> > Jean-Louis
> >
> > Jean-Francois Malouin wrote:
> >> Hello Jean-Louis,
> >>
> >> I was away hence the delay.
> >>
> >> * Jean-Louis Martineau  [20100719 08:31]:
> >>   
> >>> Hi Jean-François,
> >>>
> >>> Try the attached patch, it will work with newer log files only.
> >>> Thanks for reporting the bug.
> >>> 
> >>
> >> I applied the patch but it doesn't seem to do the right thing.
> >> I get essentially the same output with the patched amstatus.
> >> From a different amanda run than below:
> >>
> >> [...]
> >>
> >> SUMMARY           part      real  estimated
> >>                             size       size
> >> partition       :  98
> >> estimated       :  50             1855244m
> >> flush           :  48   606853m
> >> failed          :   0        0m   (  0.00%)
> >> wait for dumping:   0        0m   (  0.00%)
> >> dumping to tape :   0        0m   (  0.00%)
> >> dumping         :   0        0m        0m (  0.00%) (  0.00%)
> >> dumped          :  50  1855210m  1855244m (100.00%) (100.00%)
> >> wait for writing:  49   968740m   968774m (100.00%) ( 52.22%)
> >> wait to flush   :   0        0m        0m (100.00%) (  0.00%)
> >> writing to tape :   1   886469m   886469m (100.00%) ( 47.78%)
> >> failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
> >> taped           :  48   606853m   606853m (100.00%) ( 24.65%)
> >> 12 dumpers idle : no-dumpers
> >> taper status: Writing bigbrain:BIG_BRAIN_pm3903_postmortem_mnc_original
> >> taper qlen: 49
> >>
> >> plus other stats.
> >>
> >> Thanks,
> >> jf
> >>
> >>   
> >>> Jean-Louis
> >>>
> >>> Jean-Francois Malouin wrote:
> >>> 
> >>>> Hi,
> >>>>
> >>>> With amanda-3.1 it seems we lost the tape usage in the summary report
> >>>> output by amstatus. Prior versions were showing which tape has been
> >>>> used along with its usage like (2.6.1p2):
> >>>>
> >>>>
> >>>> SUMMARY           part      real  estimated
> >>>>                             size       size
> >>>> partition       :  35
> >>>> estimated       :  21              871112m
> >>>> flush           :  14   341794m
> >>>> failed          :   0        0m   (  0.00%)
> >>>> wait for dumping:   0        0m   (  0.00%)
> >>>> dumping to tape :   0        0m   (  0.00%)
> >>>> dumping         :   0        0m        0m (  0.00%) (  0.00%)
> >>>> dumped          :  21   872030m   871112m (100.11%) (100.11%)
> >>>> wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
> >>>> wait to flush   :   0        0m        0m (100.00%) (  0.00%)
> >>>> writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
> >>>> failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
> >>>> taped           :  14   341794m   341794m (100.00%) ( 28.18%)
> >>>>   tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)
> >>>>
> >>>> I liked that feature. Possible to get it back?
> >>>>
> >>>> Thanks!
> >>>> jf
> >>>> 
> >>
> >>   
> >>> diff --git a/installcheck/amstatus.pl b/installcheck/amstatus.pl
> >>> index cf41b47..6985c6c 100644
> >>> --- a/installcheck/amstatus.pl
> >>> +++ b/installcheck/amstatus.pl
> >>> @@ -153,7 +153,7 @@ DUMP 

Re: amstatus tape usage enhancement

2010-07-21 Thread Jean-Francois Malouin
* Jean-Louis Martineau  [20100721 16:04]:
> Was the dump done with the patched taper?

ah! I see what you mean now. I overlooked the fact that
the patch was modifying the taper, not just amstatus,
so I just recompiled but did not install anything.
I just ran the new amstatus in the source dir, i.e.
amanda-3.1.1/server-src/amstatus.

I can't do anything right now as amanda is busy on the
servers running 3.1.1. I'll let you know later.

thanks,
jf


>
> Jean-Louis
>
> Jean-Francois Malouin wrote:
>> Hello Jean-Louis,
>>
>> I was away hence the delay.
>>
>> * Jean-Louis Martineau  [20100719 08:31]:
>>   
>>> Hi Jean-François,
>>>
>>> Try the attached patch, it will work with newer log files only.
>>> Thanks for reporting the bug.
>>> 
>>
>> I applied the patch but it doesn't seem to do the right thing.
>> I get essentially the same output with the patched amstatus.
>> From a different amanda run than below:
>>
>> [...]
>>
>> SUMMARY           part      real  estimated
>>                             size       size
>> partition       :  98
>> estimated       :  50             1855244m
>> flush           :  48   606853m
>> failed          :   0        0m   (  0.00%)
>> wait for dumping:   0        0m   (  0.00%)
>> dumping to tape :   0        0m   (  0.00%)
>> dumping         :   0        0m        0m (  0.00%) (  0.00%)
>> dumped          :  50  1855210m  1855244m (100.00%) (100.00%)
>> wait for writing:  49   968740m   968774m (100.00%) ( 52.22%)
>> wait to flush   :   0        0m        0m (100.00%) (  0.00%)
>> writing to tape :   1   886469m   886469m (100.00%) ( 47.78%)
>> failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
>> taped           :  48   606853m   606853m (100.00%) ( 24.65%)
>> 12 dumpers idle : no-dumpers
>> taper status: Writing bigbrain:BIG_BRAIN_pm3903_postmortem_mnc_original
>> taper qlen: 49
>>
>> plus other stats.
>>
>> Thanks,
>> jf
>>
>>   
>>> Jean-Louis
>>>
>>> Jean-Francois Malouin wrote:
>>> 
>>>> Hi,
>>>>
>>>> With amanda-3.1 it seems we lost the tape usage in the summary report
>>>> output by amstatus. Prior versions were showing which tape has been
>>>> used along with its usage like (2.6.1p2):
>>>>
>>>>
>>>> SUMMARY           part      real  estimated
>>>>                             size       size
>>>> partition       :  35
>>>> estimated       :  21              871112m
>>>> flush           :  14   341794m
>>>> failed          :   0        0m   (  0.00%)
>>>> wait for dumping:   0        0m   (  0.00%)
>>>> dumping to tape :   0        0m   (  0.00%)
>>>> dumping         :   0        0m        0m (  0.00%) (  0.00%)
>>>> dumped          :  21   872030m   871112m (100.11%) (100.11%)
>>>> wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
>>>> wait to flush   :   0        0m        0m (100.00%) (  0.00%)
>>>> writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
>>>> failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
>>>> taped           :  14   341794m   341794m (100.00%) ( 28.18%)
>>>>   tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)
>>>>
>>>> I liked that feature. Possible to get it back?
>>>>
>>>> Thanks!
>>>> jf
>>>> 
>>
>>   
>>> diff --git a/installcheck/amstatus.pl b/installcheck/amstatus.pl
>>> index cf41b47..6985c6c 100644
>>> --- a/installcheck/amstatus.pl
>>> +++ b/installcheck/amstatus.pl
>>> @@ -153,7 +153,7 @@ DUMP clienthost 9ffe1f /some/dir 
>>> 20080618130147 14050 0 1970:1:1
>>>  
>>>  dumper: pid 4086 executable dumper0 version 9.8.7
>>>  dumper: pid 4095 executable dumper3 version 9.8.7
>>> -taper: using label `Conf-001' date `20080618130147'
>>> +taper: wrote label 'Conf-001'
>>>  driver: result time 1.312 from taper: TAPER-OK
>>>  driver: state time 1.312 free kps: 600 space: 868352 taper: idle 
>>> idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: 
>>> not-idle
>>>  driver: interface-state time 1.

Re: amstatus tape usage enhancement

2010-07-21 Thread Jean-Louis Martineau

Was the dump done with the patched taper?

Jean-Louis

Jean-Francois Malouin wrote:

Hello Jean-Louis,

I was away hence the delay.

* Jean-Louis Martineau  [20100719 08:31]:
  

Hi Jean-François,

Try the attached patch, it will work with newer log files only.
Thanks for reporting the bug.



I applied the patch but it doesn't seem to do the right thing.
I get essentially the same output with the patched amstatus.
From a different amanda run than below:

[...]

SUMMARY           part      real  estimated
                            size       size
partition       :  98
estimated       :  50             1855244m
flush           :  48   606853m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :  50  1855210m  1855244m (100.00%) (100.00%)
wait for writing:  49   968740m   968774m (100.00%) ( 52.22%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1   886469m   886469m (100.00%) ( 47.78%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :  48   606853m   606853m (100.00%) ( 24.65%)
12 dumpers idle : no-dumpers
taper status: Writing bigbrain:BIG_BRAIN_pm3903_postmortem_mnc_original
taper qlen: 49

plus other stats.

Thanks,
jf

  

Jean-Louis

Jean-Francois Malouin wrote:


Hi,

With amanda-3.1 it seems we lost the tape usage in the summary report
output by amstatus. Prior versions were showing which tape has been
used along with its usage like (2.6.1p2):


SUMMARY           part      real  estimated
                            size       size
partition       :  35
estimated       :  21              871112m
flush           :  14   341794m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :  21   872030m   871112m (100.11%) (100.11%)
wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :  14   341794m   341794m (100.00%) ( 28.18%)
  tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)

I liked that feature. Possible to get it back?

Thanks!
jf
  
  


  

diff --git a/installcheck/amstatus.pl b/installcheck/amstatus.pl
index cf41b47..6985c6c 100644
--- a/installcheck/amstatus.pl
+++ b/installcheck/amstatus.pl
@@ -153,7 +153,7 @@ DUMP clienthost 9ffe1f /some/dir 
20080618130147 14050 0 1970:1:1
 
 dumper: pid 4086 executable dumper0 version 9.8.7
 dumper: pid 4095 executable dumper3 version 9.8.7
-taper: using label `Conf-001' date `20080618130147'
+taper: wrote label 'Conf-001'
 driver: result time 1.312 from taper: TAPER-OK
 driver: state time 1.312 free kps: 600 space: 868352 taper: idle idle-dumpers: 
4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
 driver: interface-state time 1.312 if default: free 600
@@ -270,7 +270,7 @@ DUMP clienthost 9ffe1f "C:\\Some Dir\\" 
20080618130147 14050 0 1
 
 dumper: pid 4086 executable dumper0 version 9.8.7
 dumper: pid 4095 executable dumper3 version 9.8.7
-taper: using label `Conf-001' date `20080618130147'
+taper: wrote label 'Conf-001'
 driver: result time 1.312 from taper: TAPER-OK
 driver: state time 1.312 free kps: 600 space: 868352 taper: idle idle-dumpers: 
4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
 driver: interface-state time 1.312 if default: free 600
@@ -414,7 +414,7 @@ DUMP localhost 9efeff01 /etc 20090410074759 
14339 0 1970:1:1:0:0
 dumper: pid 4119 executable dumper3 version 3.0.0
 dumper: pid 4118 executable dumper2 version 3.0.0
 dumper: pid 4117 executable dumper1 version 3.0.0
-taper: using label `maitreyee-010' date `20090410074759'
+taper: wrote label 'maitreyee-010'
 driver: result time 2.928 from taper: TAPER-OK 
 driver: state time 2.937 free kps: 8000 space: 1215488 taper: idle idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle

 driver: interface-state time 2.937 if default: free 8000
diff --git a/server-src/amstatus.pl b/server-src/amstatus.pl
index 79803c1..7d46425 100644
--- a/server-src/amstatus.pl
+++ b/server-src/amstatus.pl
@@ -740,31 +740,10 @@ while($lineX = ) {
}
}
elsif($line[0] eq "taper") {
-   if($line[1] eq "slot") {
-   #2:slot 3:"wrote" 4:"label&

Re: amstatus tape usage enhancement

2010-07-21 Thread Jean-Francois Malouin
Hello Jean-Louis,

I was away hence the delay.

* Jean-Louis Martineau  [20100719 08:31]:
> Hi Jean-François,
>
> Try the attached patch, it will work with newer log files only.
> Thanks for reporting the bug.

I applied the patch but it doesn't seem to do the right thing.
I get essentially the same output with the patched amstatus.
From a different amanda run than below:

[...]

SUMMARY           part      real  estimated
                            size       size
partition       :  98
estimated       :  50             1855244m
flush           :  48   606853m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :  50  1855210m  1855244m (100.00%) (100.00%)
wait for writing:  49   968740m   968774m (100.00%) ( 52.22%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1   886469m   886469m (100.00%) ( 47.78%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :  48   606853m   606853m (100.00%) ( 24.65%)
12 dumpers idle : no-dumpers
taper status: Writing bigbrain:BIG_BRAIN_pm3903_postmortem_mnc_original
taper qlen: 49

plus other stats.

Thanks,
jf

>
> Jean-Louis
>
> Jean-Francois Malouin wrote:
>> Hi,
>>
>> With amanda-3.1 it seems we lost the tape usage in the summary report
>> output by amstatus. Prior versions were showing which tape has been
>> used along with its usage like (2.6.1p2):
>>
>>
>> SUMMARY           part      real  estimated
>>                             size       size
>> partition       :  35
>> estimated       :  21              871112m
>> flush           :  14   341794m
>> failed          :   0        0m   (  0.00%)
>> wait for dumping:   0        0m   (  0.00%)
>> dumping to tape :   0        0m   (  0.00%)
>> dumping         :   0        0m        0m (  0.00%) (  0.00%)
>> dumped          :  21   872030m   871112m (100.11%) (100.11%)
>> wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
>> wait to flush   :   0        0m        0m (100.00%) (  0.00%)
>> writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
>> failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
>> taped           :  14   341794m   341794m (100.00%) ( 28.18%)
>>   tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)
>>
>> I liked that feature. Possible to get it back?
>>
>> Thanks!
>> jf
>>   
>

> diff --git a/installcheck/amstatus.pl b/installcheck/amstatus.pl
> index cf41b47..6985c6c 100644
> --- a/installcheck/amstatus.pl
> +++ b/installcheck/amstatus.pl
> @@ -153,7 +153,7 @@ DUMP clienthost 9ffe1f /some/dir 
> 20080618130147 14050 0 1970:1:1
>  
>  dumper: pid 4086 executable dumper0 version 9.8.7
>  dumper: pid 4095 executable dumper3 version 9.8.7
> -taper: using label `Conf-001' date `20080618130147'
> +taper: wrote label 'Conf-001'
>  driver: result time 1.312 from taper: TAPER-OK
>  driver: state time 1.312 free kps: 600 space: 868352 taper: idle 
> idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
>  driver: interface-state time 1.312 if default: free 600
> @@ -270,7 +270,7 @@ DUMP clienthost 9ffe1f "C:\\Some Dir\\" 
> 20080618130147 14050 0 1
>  
>  dumper: pid 4086 executable dumper0 version 9.8.7
>  dumper: pid 4095 executable dumper3 version 9.8.7
> -taper: using label `Conf-001' date `20080618130147'
> +taper: wrote label 'Conf-001'
>  driver: result time 1.312 from taper: TAPER-OK
>  driver: state time 1.312 free kps: 600 space: 868352 taper: idle 
> idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
>  driver: interface-state time 1.312 if default: free 600
> @@ -414,7 +414,7 @@ DUMP localhost 9efeff01 /etc 
> 20090410074759 14339 0 1970:1:1:0:0
>  dumper: pid 4119 executable dumper3 version 3.0.0
>  dumper: pid 4118 executable dumper2 version 3.0.0
>  dumper: pid 4117 executable dumper1 version 3.0.0
> -taper: using label `maitreyee-010' date `20090410074759'
> +taper: wrote label 'maitreyee-010'
>  driver: result time 2.928 from taper: TAPER-OK 
>  driver: state time 2.937 free kps: 8000 space: 1215488 taper: idle 
> idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
>  driver: interface-state time 2.937 if default: free 8000
> di

Re: amstatus tape usage enhancement

2010-07-19 Thread Jean-Louis Martineau

Hi Jean-François,

Try the attached patch, it will work with newer log files only.
Thanks for reporting the bug.

Jean-Louis

Jean-Francois Malouin wrote:

Hi,

With amanda-3.1 it seems we lost the tape usage in the summary report
output by amstatus. Prior versions were showing which tape has been
used along with its usage like (2.6.1p2):


SUMMARY           part      real  estimated
                            size       size
partition       :  35
estimated       :  21              871112m
flush           :  14   341794m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :  21   872030m   871112m (100.11%) (100.11%)
wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :  14   341794m   341794m (100.00%) ( 28.18%)
  tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)

I liked that feature. Possible to get it back?

Thanks!
jf
  


diff --git a/installcheck/amstatus.pl b/installcheck/amstatus.pl
index cf41b47..6985c6c 100644
--- a/installcheck/amstatus.pl
+++ b/installcheck/amstatus.pl
@@ -153,7 +153,7 @@ DUMP clienthost 9ffe1f /some/dir 20080618130147 14050 0 1970:1:1
 
 dumper: pid 4086 executable dumper0 version 9.8.7
 dumper: pid 4095 executable dumper3 version 9.8.7
-taper: using label `Conf-001' date `20080618130147'
+taper: wrote label 'Conf-001'
 driver: result time 1.312 from taper: TAPER-OK
 driver: state time 1.312 free kps: 600 space: 868352 taper: idle idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
 driver: interface-state time 1.312 if default: free 600
@@ -270,7 +270,7 @@ DUMP clienthost 9ffe1f "C:\\Some Dir\\" 20080618130147 14050 0 1
 
 dumper: pid 4086 executable dumper0 version 9.8.7
 dumper: pid 4095 executable dumper3 version 9.8.7
-taper: using label `Conf-001' date `20080618130147'
+taper: wrote label 'Conf-001'
 driver: result time 1.312 from taper: TAPER-OK
 driver: state time 1.312 free kps: 600 space: 868352 taper: idle idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
 driver: interface-state time 1.312 if default: free 600
@@ -414,7 +414,7 @@ DUMP localhost 9efeff01 /etc 20090410074759 14339 0 1970:1:1:0:0
 dumper: pid 4119 executable dumper3 version 3.0.0
 dumper: pid 4118 executable dumper2 version 3.0.0
 dumper: pid 4117 executable dumper1 version 3.0.0
-taper: using label `maitreyee-010' date `20090410074759'
+taper: wrote label 'maitreyee-010'
 driver: result time 2.928 from taper: TAPER-OK 
 driver: state time 2.937 free kps: 8000 space: 1215488 taper: idle idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 wakeup: 0 driver-idle: not-idle
 driver: interface-state time 2.937 if default: free 8000
diff --git a/server-src/amstatus.pl b/server-src/amstatus.pl
index 79803c1..7d46425 100644
--- a/server-src/amstatus.pl
+++ b/server-src/amstatus.pl
@@ -740,31 +740,10 @@ while($lineX = ) {
 		}
 	}
 	elsif($line[0] eq "taper") {
-		if($line[1] eq "slot") {
-			#2:slot 3:"wrote" 4:"label" 5:corrupted...
+		if($line[1] eq "wrote") {
+			#1:"wrote" 2:"label" 3:label
 			$nb_tape++;
-			$lineX =~ /wrote label `(\S*)'/;
-			$label = $1;
-			$ntlabel{$nb_tape} = $label;
-			$ntpartition{$nb_tape} = 0;
-			$ntsize{$nb_tape} = 0;
-			$ntesize{$nb_tape} = 0;
-		}
-		elsif($line[1] eq "wrote") {
-			#1:"wrote" 2:"label" 3:corrupted
-			$nb_tape++;
-			$lineX =~ /wrote label `(\S*)'/;
-			$label = $1;
-			$ntlabel{$nb_tape} = $label;
-			$ntpartition{$nb_tape} = 0;
-			$ntsize{$nb_tape} = 0;
-			$ntesize{$nb_tape} = 0;
-		}
-		elsif($line[1] eq "using") {
-			#1:"using" #2:"label" #3:`label' #4:date #5 `timestamp'
-			$nb_tape++;
-			$lineX =~ /using label `(\S*)'/;
-			$label = $1;
+			$label = $line[3];
 			$ntlabel{$nb_tape} = $label;
 			$ntpartition{$nb_tape} = 0;
 			$ntsize{$nb_tape} = 0;
diff --git a/server-src/taper.pl b/server-src/taper.pl
index 0b2883e..8c42493 100644
--- a/server-src/taper.pl
+++ b/server-src/taper.pl
@@ -344,7 +344,7 @@ sub notif_new_tape {
 		++$self->{'tape_num'}));
 
 	# and the amdump log
-	print STDERR "taper: wrote label `$self->{label}'\n";
+	print STDERR "taper: wrote label '$self->{label}'\n";
 
 	# and inform the driver
 	$self->{'proto'}->send(main::Protocol::NEW_TAPE,
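
[Editor's note: the amstatus.pl hunk above replaces a quote-sensitive regex with plain field splitting, so the label is simply the fourth whitespace-separated token of the new "taper: wrote label 'X'" line. A rough Python rendering of that parsing step, as an illustration of the approach rather than the shipped Perl:]

```python
def parse_taper_line(line):
    """Extract a tape label from the post-patch taper log line.

    Assumes the patched format "taper: wrote label 'LABEL'".
    Illustrative re-implementation, not Amanda's actual code.
    """
    fields = line.split()
    if len(fields) >= 4 and fields[0] == "taper:" and fields[1] == "wrote":
        return fields[3].strip("'")   # field 3 carries 'LABEL'
    return None   # any other driver/taper line is ignored

print(parse_taper_line("taper: wrote label 'Conf-001'"))  # prints Conf-001
```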


Re: amstatus tape usage enhancement

2010-07-16 Thread Dustin J. Mitchell
On Fri, Jul 16, 2010 at 1:55 PM, Jean-Francois Malouin wrote:
> taped           :  14    341794m    341794m (100.00%) ( 28.18%)
>  tape 1        :  14    520994m    520994m (134.96%) av24-2_right2_V00023L3 
> (128 chunks)
>
> I liked that feature. Possible to get it back?

In particular, the last line is now missing, right?

I'm guessing that amstatus is missing some log line it used to expect.
 Amstatus basically parses the free-form stderr of the driver and
taper, so changes to the taper have probably removed the log messages
that amstatus is keying on.

Amstatus desperately needs to be rewritten to use some sort of
well-defined API to get its status information - the current solution
is basically unmaintainable.  I've tried to summarize the lines that
amstatus looks for here:
  http://wiki.zmanda.com/index.php/Amanda_log_files/Amdump_Logs
based on a skimming of amstatus.  I've taken pains to ensure that the
new taper writes the corresponding lines.  Scanning amstatus is hard,
though, because it is full of deeply nested conditionals, where
different words of the same log line are matched several pages apart.
Perhaps I've missed something.   Can you take a look and see if you
can track it down?

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com
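
[Editor's note: the breakage Dustin describes is easy to reproduce: a parser keyed on the exact wording of an old stderr line silently matches nothing once the taper's message changes, which is exactly how the per-tape summary line disappeared. A small Python illustration, using the old and new taper lines from the patch in this thread:]

```python
import re

# amstatus keyed on the old taper message; the amanda-3.1 taper no longer
# emits it, so the match silently fails and no tape is ever counted.
OLD_PATTERN = re.compile(r"using label `(\S*)'")

old_line = "taper: using label `Conf-001' date `20080618130147'"
new_line = "taper: wrote label 'Conf-001'"

print(OLD_PATTERN.search(old_line).group(1))  # prints Conf-001
print(OLD_PATTERN.search(new_line))           # prints None: line silently ignored
```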



amstatus tape usage enhancement

2010-07-16 Thread Jean-Francois Malouin
Hi,

With amanda-3.1 it seems we lost the tape usage in the summary report
output by amstatus. Prior versions were showing which tape has been
used along with its usage like (2.6.1p2):


SUMMARY           part      real  estimated
                            size       size
partition       :  35
estimated       :  21              871112m
flush           :  14   341794m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :  21   872030m   871112m (100.11%) (100.11%)
wait for writing:  20   472157m   471239m (100.19%) ( 54.20%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1   399873m   399873m (100.00%) ( 45.90%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :  14   341794m   341794m (100.00%) ( 28.18%)
  tape 1        :  14   520994m   520994m (134.96%) av24-2_right2_V00023L3 (128 chunks)

I liked that feature. Possible to get it back?

Thanks!
jf
-- 
<° >< Jean-François Malouin  McConnell Brain Imaging Centre
Systems/Network Administrator   Montréal Neurological Institute
3801 Rue University, Suite WB219  Montréal, Québec, H3A 2B4
Phone: 514-398-8924   Fax: 514-398-8948


Re: amstatus error on 2.6.1

2009-02-26 Thread stan
On Thu, Feb 26, 2009 at 07:49:21AM -0500, Jean-Louis Martineau wrote:
> stan wrote:
> >On Thu, Feb 26, 2009 at 07:31:44AM -0500, Jean-Louis Martineau wrote:
> >  
> >>Stan,
> >>
> >>It's a known bug, remove line 1042.
> >>
> >>
> >So, that element of the structure does not need to get added to the
> >accumulation
> It's done a few lines below.

Cool, thanks.
-- 
One of the main causes of the fall of the roman empire was that, lacking
zero, they had no way to indicate successful termination of their C
programs.


Re: amstatus error on 2.6.1

2009-02-26 Thread Jean-Louis Martineau

stan wrote:

On Thu, Feb 26, 2009 at 07:31:44AM -0500, Jean-Louis Martineau wrote:
  

Stan,

It's a known bug, remove line 1042.



So, that element of the structure does not need to get added to the
accumulation

It's done a few lines below.

Jean-Louis


Re: amstatus error on 2.6.1

2009-02-26 Thread stan
On Thu, Feb 26, 2009 at 07:31:44AM -0500, Jean-Louis Martineau wrote:
> Stan,
> 
> It's a known bug, remove line 1042.
> 
So, that element of the structure does not need to get added to the
accumulation?

-- 
One of the main causes of the fall of the roman empire was that, lacking
zero, they had no way to indicate successful termination of their C
programs.


Re: amstatus error on 2.6.1

2009-02-26 Thread Jean-Louis Martineau

Stan,

It's a known bug, remove line 1042.

Jean-Louis

stan wrote:

When I ran amstatus to check on last night's run, I got a series of the
following error messages:

:Use of uninitialized value in addition (+) at /opt/amanda/sbin/amstatus
line 1042.

Looking at that line of the script makes me suspect that the accumulation
that's going on here needs an (if defined) guard protecting it. Does this
make sense?

  




amstatus error on 2.6.1

2009-02-26 Thread stan
When I ran amstatus to check on last night's run, I got a series of the
following error messages:

:Use of uninitialized value in addition (+) at /opt/amanda/sbin/amstatus
line 1042.

Looking at that line of the script makes me suspect that the accumulation
that's going on here needs an (if defined) guard protecting it. Does this
make sense?

-- 
One of the main causes of the fall of the roman empire was that, lacking
zero, they had no way to indicate successful termination of their C
programs.
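
[Editor's note: the line-1042 failure is the classic unguarded accumulation, adding a size that is still undef. A sketch of the "if defined" guard stan suggests, rendered in Python with made-up sizes; a None entry stands in for a DLE whose estimate has not arrived yet:]

```python
def total_size(sizes):
    """Accumulate per-DLE sizes, skipping entries that are still undefined.

    A Python rendering of guarding Perl's '$sum += $size' with
    'if defined $size'; the data is invented for illustration.
    """
    total = 0
    for size in sizes.values():
        if size is not None:   # the unguarded version warns on this entry
            total += size
    return total

# One DLE has no size yet (still "getting estimate").
print(total_size({"/": 3331, "/var": 889, "/griffyp/hiu": None}))  # prints 4220
```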


Re: amanda 2.6.1, Solaris 10/Sparc, amstatus error

2009-02-03 Thread Jean-Louis Martineau

This bug is already fixed.
Fix will be in 2.6.1p1.

Jean-Louis

Brian Cuttler wrote:

Successful run of the new version yesterday, performing a new
run this morning with autoflush (production capacity tape drive
is on the current production system).

Ran amstatus and noted an error, thought I'd pass it on. Please
let me know what additional detail I can provide.

I do not consider this a serious problem, I don't need a fix, just an FYI.

Note, this error did NOT occur while the flush of that first partition
was still in progress. I did NOT see any amstatus errors during the
original amdump yesterday.

thanks,

Brian

  

amstatus griffy


Using /usr/local/etc/amanda/griffy/DailySet1/amdump
From Tue Feb 3 10:24:01 EST 2009

griffy:/   0  3331m flushed (10:36:45)
Use of uninitialized value in addition (+) at /usr/local/sbin/amstatus line 
1042.
griffy:/   0  7208m estimate done
griffy:/griffyp/climsgl0   227m estimate done
griffy:/griffyp/csssoft0 0m estimate done
griffy:/griffyp/dew0  1292m estimate done
griffy:/griffyp/encphrev   0 1m estimate done
griffy:/griffyp/export 0   674m estimate done
griffy:/griffyp/grifadmin  0   957m estimate done
griffy:/griffyp/hiu0 37812m flushing to tape (10:36:45)
griffy:/griffyp/hiu getting estimate
griffy:/griffyp/hiu2getting estimate
griffy:/griffyp/ivcpgetting estimate
griffy:/griffyp/virologypt 015m estimate done
griffy:/var0   889m estimate done

SUMMARY           part      real  estimated
                            size       size
partition       :  14
estimated       :   9               11266m
flush           :   2    41143m
failed          :   0        0m   (  0.00%)
wait for dumping:   0        0m   (  0.00%)
dumping to tape :   0        0m   (  0.00%)
dumping         :   0        0m        0m (  0.00%) (  0.00%)
dumped          :   1     3331m     3331m (100.00%) ( 29.57%)
wait for writing:   0        0m        0m (  0.00%) (  0.00%)
wait to flush   :   0        0m        0m (100.00%) (  0.00%)
writing to tape :   1    37812m    37812m (100.00%) (335.61%)
failed to tape  :   0        0m        0m (  0.00%) (  0.00%)
taped           :   1     3331m     3331m (100.00%) (  6.36%)
  tape 1        :   1     3331m     3331m (  4.76%) Griffy02 (2 chunks)
4 dumpers idle  : runq
taper writing, tapeq: 0
network free kps:        80
holding space   : 36833m (109.94%)
   taper busy   :  0:12:35  ( 99.02%)
 0 dumpers busy :  0:12:35  ( 99.06%)          runq:  0:12:35  (100.00%)


---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



IMPORTANT NOTICE: This e-mail and any attachments may contain
confidential or sensitive information which is, or may be, legally
privileged or otherwise protected by law from further disclosure.  It
is intended only for the addressee.  If you received this in error or
from someone who was not authorized to send it to you, please do not
distribute, copy or use it or any attachments.  Please notify the
sender immediately by reply e-mail and delete this from your
system. Thank you for your cooperation.


  




Re: amanda 2.6.1, Solaris 10/Sparc, amstatus error

2009-02-03 Thread Jean-Louis Martineau

Brian Cuttler wrote:

On Tue, Feb 03, 2009 at 11:13:40AM -0500, Jean-Louis Martineau wrote:
  

This bug is already fixed.
Fix will be in 2.6.1p1.



Cool! thanks.
  

The fix is simple: you only need to delete line 1042.

Jean-Louis
  





Re: amanda 2.6.1, Solaris 10/Sparc, amstatus error

2009-02-03 Thread Brian Cuttler
On Tue, Feb 03, 2009 at 11:13:40AM -0500, Jean-Louis Martineau wrote:
> This bug is already fixed.
> Fix will be in 2.6.1p1.

Cool! thanks.

> Jean-Louis
> 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







amanda 2.6.1, Solaris 10/Sparc, amstatus error

2009-02-03 Thread Brian Cuttler

Successful run of the new version yesterday, performing a new
run this morning with autoflush (production capacity tape drive
is on the current production system).

Ran amstatus and noted an error, thought I'd pass it on. Please
let me know what additional detail I can provide.

I do not consider this a serious problem, I don't need a fix, just an FYI.

Note, this error did NOT occur while the flush of that first partition
was still in progress. I did NOT see any amstatus errors during the
original amdump yesterday.

thanks,

    Brian

> amstatus griffy
Using /usr/local/etc/amanda/griffy/DailySet1/amdump
From Tue Feb 3 10:24:01 EST 2009

griffy:/   0  3331m flushed (10:36:45)
Use of uninitialized value in addition (+) at /usr/local/sbin/amstatus line 
1042.
griffy:/   0  7208m estimate done
griffy:/griffyp/climsgl0   227m estimate done
griffy:/griffyp/csssoft0 0m estimate done
griffy:/griffyp/dew0  1292m estimate done
griffy:/griffyp/encphrev   0 1m estimate done
griffy:/griffyp/export 0   674m estimate done
griffy:/griffyp/grifadmin  0   957m estimate done
griffy:/griffyp/hiu0 37812m flushing to tape (10:36:45)
griffy:/griffyp/hiu getting estimate
griffy:/griffyp/hiu2getting estimate
griffy:/griffyp/ivcpgetting estimate
griffy:/griffyp/virologypt 015m estimate done
griffy:/var0   889m estimate done

SUMMARY  part  real  estimated
   size   size
partition   :  14
estimated   :   911266m
flush   :   2 41143m
failed  :   00m   (  0.00%)
wait for dumping:   00m   (  0.00%)
dumping to tape :   00m   (  0.00%)
dumping :   0 0m 0m (  0.00%) (  0.00%)
dumped  :   1  3331m  3331m (100.00%) ( 29.57%)
wait for writing:   0 0m 0m (  0.00%) (  0.00%)
wait to flush   :   0 0m 0m (100.00%) (  0.00%)
writing to tape :   1 37812m 37812m (100.00%) (335.61%)
failed to tape  :   0 0m 0m (  0.00%) (  0.00%)
taped   :   1  3331m  3331m (100.00%) (  6.36%)
  tape 1:   1  3331m  3331m (  4.76%) Griffy02 (2 chunks)
4 dumpers idle  : runq
taper writing, tapeq: 0
network free kps:80
holding space   : 36833m (109.94%)
   taper busy   :  0:12:35  ( 99.02%)
 0 dumpers busy :  0:12:35  ( 99.06%)runq:  0:12:35  (100.00%)


---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







Re: Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148

2008-06-09 Thread Dominik Schips
Hello Jean-Louis,


The backup ran on Saturday like nothing had happened. I didn't change
anything in the configuration or the system setup.

Looks like my amanda had some kind of strange hiccup.
Anyway, thanks for the help.

Best regards

Dominik



Re: Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148

2008-06-05 Thread Dominik Schips
Hello,

I am still busy.
The backup ran correctly last night. I will try to send the bad dump file
next week, or as soon as possible.

Thanks,

Dominik




Re: Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148

2008-06-04 Thread Dominik Schips
Hello Jean-Louis,

On Wednesday, 2008-06-04 at 09:34 -0400, Jean-Louis Martineau wrote:
> A bug in amstatus should not prevent your backup to run correctly.
> Changing amdump.1 can't help to start a backup run (amdump).
> Your backup should run correctly even if amstatus crash.
> 
> Can you send me a copy of the bogus amdump.1 file? I would like to look 
> at it.

I have to check if I can do this. Amanda is running on a customer server
I administer so I have to check the file for any critical customer
information before I can send it to you.

> Which locale are you using?

At the moment I don't have access to the system. Tomorrow I will send the
status of the amanda run I started today after the little change to the
amdump.1 file.

So, more information tomorrow.

Thanks,

Dominik




Re: Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148

2008-06-04 Thread Jean-Louis Martineau

A bug in amstatus should not prevent your backup from running correctly.
Changing amdump.1 can't help to start a backup run (amdump).
Your backup should run correctly even if amstatus crashes.

Can you send me a copy of the bogus amdump.1 file? I would like to look 
at it.

Which locale are you using?

Jean-Louis





Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148

2008-06-04 Thread Dominik Schips
Hello,

since Saturday (2008-05-31) my amanda backup hasn't run.
A closer look, running the command /usr/sbin/amstatus DailyFull, gives me
this output:

Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148
using /var/lib/amanda/DailyFull/log/amdump.1 from Sa Mai 21 15:30:08
CEST

I couldn't find any errors or anything else in the logs.

Because I didn't know what went wrong, I had to fake (change) the date
in /var/lib/amanda/DailyFull/log/amdump.1 from 31 to 30...

Now the backup starts, and I hope that it finishes correctly without
problems.

The question is: what can cause the error
"Day '31' out of range 1..30 at /usr/sbin/amstatus line 1148"?

I use Amanda 2.4.4p3 from Debian sarge. I know it's old, but there
wasn't time to do an update to Etch. This is on my todo list within the
next 4 weeks.

Best regards,

Dominik



Re: amstatus question

2007-11-09 Thread Paul Lussier
"Krahn, Anderson" <[EMAIL PROTECTED]> writes:

> While a DLE is dumping to tape does the amstatus page dynamically
> updates the amount dumped to tape during the dump. Or does it wait until
> its done?
>
> Its been sitting pretty at 15071m for some time.

I'm fairly certain it updates dynamically, though slowly.
You can verify this by using the watch command on amstatus:

 watch -n 2 'amstatus  | grep dumping'

Also, I wrote the following script to keep an eye on amstatus.  It's
just a wrapper around amstatus, but pulls out only the most
interesting information from it.  You can see the status of the DLEs
changing in something approximating "real time".

-- 
Thanks,
Paul

#!/bin/bash

DEFAULT='weekly'
CONF=${1:-$DEFAULT}
AMSTAT_CMD="amstatus $CONF"
AMSTAT_FLAGS='--dumping --dumpingtape --waitdumping  --waittaper --writingtape'
TMPFILE="/tmp/stat.$$"
SLEEPTIME=60

function cleanup {
rm -f $TMPFILE
exit 1;
}

trap cleanup SIGHUP SIGINT SIGTERM

clear

while true
do
  estimate=`$AMSTAT_CMD --gestimate | grep -v Using`
  if [ "$estimate" != "" ]; then
  $AMSTAT_CMD --gestimate | grep -v Using
  else
  $AMSTAT_CMD $AMSTAT_FLAGS > $TMPFILE
  dumping=`egrep '(k|m|g) (dump|flush)ing' $TMPFILE`
  writing=`egrep '(k|m|g) writing to' $TMPFILE`
  action=`echo $dumping | perl -pe 's/.* (\w+ing).*/\u$1/'`
#  count=`awk '!/^Using/ && /wait for (dump|flush)|(writing to|dumping)/ {print $1}' $TMPFILE | wc -l`
  count=`awk '!/^Using/ && /wait|dump|flush|writing to/ {print $1}' $TMPFILE | wc -l`
  date
  echo "Waiting on: $count file systems"
  echo ""
  if [ ! -z "$dumping" ]; then
  echo "$action:"
  echo $dumping | perl -pe 's/\) (\w)/\)\n$1/g;s/dumping//g' |\
awk '{print "   ",sprintf("%-31s",$1)," ",$2,
  sprintf("%3s",$3),sprintf("%3s",$4),$5,$6}'
  echo ""
  fi

  if [ ! -z "$writing" ]; then
  echo -n "Writing:"
  echo $writing | perl -pe 's/\) (\w)/\)\n$1/g;s/writing to tape//g' | 
awk '{print ""$1""$2,$3,$4,$5,$6}'
  echo ""
  fi


  TAPES_CMD=$($AMSTAT_CMD --summary | awk '/^ +tape/ {print}')
  if [ ! -z "$TAPES_CMD" ]; then
  echo "Tapes written to so far:"
  echo "$TAPES_CMD"
  echo ""
  fi

  if [ $count == 0 ]; then
  cleanup
  fi

  # Print out the file systems waiting waiting to be dealt with
  awk '!/^Using|(k|m|g) (writ|dump|flush)ing|^ *$/' $TMPFILE | colrm 30 40
  fi
  sleep $SLEEPTIME
  clear
done


Re: amstatus question

2007-11-06 Thread Marc Muehlfeld

Hi,

Krahn, Anderson schrieb:

While a DLE is dumping to tape does the amstatus page dynamically
updates the amount dumped to tape during the dump. Or does it wait until
its done?


There's a percentage value that changes while Amanda is dumping. I'm not 
sure when this value was introduced; maybe around version 2.5, if I remember 
right. I currently use amanda-2.5.2p1.


I backed up a small DLE for you:

nucleus.mr.lfmg.de:/ 097m dumping   21m ( 22.54%) (7:18:24)
...
nucleus.mr.lfmg.de:/ 097m dumping   50m ( 52.43%) (7:18:24)
...
nucleus.mr.lfmg.de:/ 097m dumping   79m ( 81.67%) (7:18:24)
...
nucleus.mr.lfmg.de:/ 0   117m finished (7:19:12)


Marc



--
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de


amstatus question

2007-11-06 Thread Krahn, Anderson
While a DLE is dumping to tape, does the amstatus page dynamically
update the amount dumped to tape during the dump, or does it wait until
it's done?

It's been sitting at 15071m for some time.

amstatus --config FullQADB

Using /var/log/debug/FullDBQA/amdump from Tue Nov  6 16:14:01 CST 2007

 

prddb11-bkup.1sync.org:/0  3023m finished (16:38:35)

prddb11-bkup.1sync.org:/PRDDB11 0 0m finished (16:17:59)

prddb11-bkup.1sync.org:/backup  0281098m dumping to tape (16:42:24)

prddb11-bkup.1sync.org:/home0   456m finished (16:22:54)

prddb11-bkup.1sync.org:/opt 0  5914m finished (16:42:23)

prddb11-bkup.1sync.org:/u01 0  3496m finished (16:40:36)

prddb11-bkup.1sync.org:/u04 0  1233m finished (16:26:07)

prddb11-bkup.1sync.org:/var 0   948m finished (16:26:46)

 

SUMMARY  part  real  estimated

   size   size

partition   :   8

estimated   :   8   296057m

flush   :   0 0m

failed  :   00m   (  0.00%)

wait for dumping:   00m   (  0.00%)

dumping to tape :   1   281098m   ( 94.95%)

dumping :   0 0m 0m (  0.00%) (  0.00%)

dumped  :   8 15071m296057m (  5.09%) (  5.09%)

wait for writing:   0 0m 0m (  0.00%) (  0.00%)

wait to flush   :   0 0m 0m (100.00%) (  0.00%)

writing to tape :   0 0m 0m (  0.00%) (  0.00%)

failed to tape  :   0 0m 0m (  0.00%) (  0.00%)

taped   :   7 15071m 14959m (100.75%) (  5.09%)

  tape 1:   7 15071m 14959m (  2.58%) EGV022



Re: amstatus: no estimate and disk was stranded on waitq

2007-07-26 Thread Marc Muehlfeld
fedora schrieb:

> Actually I am using tapeless. I made HDD as virtual tape (tapetype
> HARD-DISK). The tape was writable (drwxrwx---), I think it could be broken
>  tape because I found "slot 13: not an amanda tape (Read 0 bytes)" when
> amcheck. How do I recover the broken tape or have to recreate a new tape?

Label the tape again.




> I cant run amflush and it sent me email as error like this "*** A TAPE
> ERROR
> OCCURRED: [No writable valid tape found]"

Then you need one more writeable tape.




> here is my amanda.conf settings: runtapes 1 use 6 Mb (holdingdisk) of
> 1.1T HDD. I don't think the compression size is bigger than tape. Can u
> guys suggest me what should I do? Run multiple tapes with splitting? I've
> no idea how to do splitting. Pls advice.
>
> here is my amanda.conf settings:
> dumpcycle 14 days
> #runspercycle 20 (commented)
> tapecycle 14 tapes
> bumpsize 20 Mb
> maxdumpsize -1

First of all you should have more (v)tapes. If you haven't set
runspercycle, then it defaults to the same as dumpcycle, and you have only
14 tapes in rotation. That's not a good idea: maybe one dump doesn't fit on
one tape, or you run an unplanned backup, and then Amanda doesn't have a
writable tape, because she would have to overwrite one that is still inside
the dumpcycle. Always have more tapes than you require for your regular
backup plan. For your current configuration I suggest at least tapecycle=17
tapes, just to have some tapes spare.

To configure splitting, you first have to set runtapes > 1 and then
configure tape_splitsize for your (global) dumptype. Here I use 60GB vtapes
and a splitsize of 3072M.
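As a sketch, the two settings together might look like this in amanda.conf (a hypothetical fragment; the dumptype name is illustrative, and the sizes are modelled on the 60GB-vtape / 3072M example above):

```
runtapes 2

define dumptype global-split {
    global
    tape_splitsize 3072 mbytes
}
```

With runtapes > 1, a dump larger than one vtape can then span tapes in 3072M chunks.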



Marc


-- 
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de




Re: amstatus: no estimate and disk was stranded on waitq

2007-07-26 Thread Jon LaBadie
On Thu, Jul 26, 2007 at 03:10:40AM -0700, fedora wrote:
> 
> Hello,
> 
> >There was no tape in your drive/changer or it was not writeable (write 
> >protected, broken tape, no tape with valid label,...)
> 
> Actually I am using tapeless. I made HDD as virtual tape (tapetype
> HARD-DISK). The tape was writable (drwxrwx---), I think it could be broken
> tape. How do we know the tape is broken or not? Do I need to delete the
> broken tape and do amlabel again if the tape broken?

Vtape or Ptape, big deal.  I don't think anything that has been said has
pertained to physical or virtual tapes only.

IIRC (I didn't look back) your run was looking for tape number 11.
Tape number 11 could not be located.  Why was not clear to anyone
who read your posting.  But amanda could not find a tape 11.

As you don't have more tapes available than exactly the number
required, not locating any single tape means amanda can't write
to any tape, because it must write to (tapecycle - 1) tapes
before it can overwrite any of the other tapes. This
would be true for amflush or amdump.

So the thing you need to do is:
A) find out why tape 11 is not accessible
B) add many more vtapes to your cycle


-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: amstatus: no estimate and disk was stranded on waitq

2007-07-26 Thread fedora

Hello,

>There was no tape in your drive/changer or it was not writeable (write 
>protected, broken tape, no tape with valid label,...)

Actually I am using tapeless: I made an HDD into virtual tapes (tapetype
HARD-DISK). The tape was writable (drwxrwx---); I think it could be a broken
tape, because I found "slot 13: not an amanda tape (Read 0 bytes)" when
running amcheck. How do I recover the broken tape, or do I have to recreate
a new one?

>If the folder isn't empty, you have to run amflush, if you want to have
>your data on tape.

I can't run amflush; it sent me an email with an error like this: "*** A TAPE
ERROR OCCURRED: [No writable valid tape found]"

>Amanda can't do a full dump on your tape, because of too little space. And 
>without having a full dump, Amanda can't do an incremental dump. This could 
>happen if you only use one tape and the estimated size, compressed (if you 
>use compression), is bigger than your tape. Or you use more tapes and you 
>don't use splitting, ...

here is my amanda.conf settings:
runtapes 1
use 6 Mb (holdingdisk) of 1.1T HDD
I don't think the compressed size is bigger than the tape. Can you suggest
what I should do? Run multiple tapes with splitting? I've no idea how to do
splitting. Please advise.

>Seems you have too few tapes configured for one dumpcycle. Dumpcycle is the 
>number of days in a backup cycle (e.g. one week). Amanda tries to do a full 
>backup at least this often (e.g. once per week). When you have e.g. 
>runspercycle=7 (how often you let amanda do amdump in a dumpcycle) and a 
>tapecycle of 6, then amanda has to overwrite the first tape again to do the 
>backup on the last day. And this is what the message means. You can't 
>overwrite the tape, because you could never do a restore then, because of 
>the overwritten first tape. Amanda prevents you from doing that.

here is my amanda.conf settings:
dumpcycle 14 days
#runspercycle 20 (commented)
tapecycle 14 tapes
bumpsize 20 Mb 
maxdumpsize -1

>Something's wrong with your changer. :-) Any more information?

here is my amanda.conf settings:
tapedev "/dev/null"
tpchanger "chg-multi"
#changerfile "/usr/local/etc/amanda/DailySet1/changer" (commented)
#changerfile "/usr/local/etc/amanda/DailySet1/changer-status" (commented)
changerfile "/usr/local/etc/amanda/DailySet1/changer.conf" 
#changerdev "/dev/null" (commented)

>Amanda clients running? Firewalls? Any messages inside the logs?

The Amanda client is running (checked with netstat -auv and /etc/init.d/xinetd
status). I didn't change the firewall; I think if the firewall were blocking,
the dump summary would show failed or missing. I got the error "cannot
overwrite active tape" in the logs. I already referred to
http://www.amanda.org/docs/faq.html#id345570 but it asked me to use amrmtape,
which will delete information about the backups stored on that tape from the
Amanda databases. Any good solutions instead of removing the tape?

Sorry and Thanks.



-- 
View this message in context: 
http://www.nabble.com/amstatus%3A-no-estimate-and-disk-was-stranded-on-waitq-tf4108528.html#a11808140
Sent from the Amanda - Users mailing list archive at Nabble.com.



Re: amstatus: no estimate and disk was stranded on waitq

2007-07-26 Thread fedora

Hello,

>There was no tape in your drive/changer or it was not writeable (write 
>protected, broken tape, no tape with valid label,...)

Actually I am using tapeless: I made an HDD into virtual tapes (tapetype
HARD-DISK). The tape was writable (drwxrwx---); I think it could be a broken
tape. How do we know whether the tape is broken or not? Do I need to delete
the broken tape and run amlabel again if it is broken?

> If the folder isn't empty, you have to run amflush, if you want to have
> your 
> data on tape.

If I run amflush, would the data have the same integrity as the actual files
on the client? Why does it remain on the holding disk? Is it because it
couldn't be written to tape?
  
>Amanda can't do a full dump on your tape, because of too little space. And 
>without having a full dump, Amanda can't do an incremental dump. This could 
>happen if you only use one tape and the estimated size, compressed (if you 
>use compression), is bigger than your tape. Or you use more tapes and you 
>don't use splitting, ...

here is my amanda.conf settings:
runtapes 1
use 6 Mb (holdingdisk) of 1.1T HDD
I don't think the compressed size is bigger than the tape. Can you suggest
what I should do? Run multiple tapes with splitting? I've no idea how to do
splitting. Please advise.

>Seems you have too few tapes configured for one dumpcycle. Dumpcycle is the 
>number of days in a backup cycle (e.g. one week). Amanda tries to do a full 
>backup at least this often (e.g. once per week). When you have e.g. 
>runspercycle=7 (how often you let amanda do amdump in a dumpcycle) and a 
>tapecycle of 6, then amanda has to overwrite the first tape again to do the 
>backup on the last day. And this is what the message means. You can't 
>overwrite the tape, because you could never do a restore then, because of 
>the overwritten first tape. Amanda prevents you from doing that.

># man amanda.conf
>tapecycle
>This is calculated by multiplying the number of amdump runs per dump cycle 
>(runspercycle parameter) times the number of tapes used per run (runtapes 
>parameter). Typically two to four times this calculated number of tapes are 
>in rotation.
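Applying that rule of thumb to the settings quoted in this thread (illustrative arithmetic only; the 2x multiplier is just one point in the man page's suggested 2-4x range):

```shell
#!/bin/sh
# tapecycle rule of thumb from amanda.conf(5):
# minimum = runspercycle * runtapes; keep 2-4x that number in rotation.
runspercycle=14   # illustrative: dumpcycle of 14 days, one run per day
runtapes=1
minimum=$(( runspercycle * runtapes ))
suggested=$(( minimum * 2 ))
echo "minimum tapecycle: $minimum, suggested in rotation: $suggested"
```

By this rule a tapecycle of 14 is the bare minimum, which is why a tapecycle equal to the number of runs leaves no slack when any one tape is unreadable.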

here is my amanda.conf settings:
dumpcycle 14 days
#runspercycle 20 (commented)
tapecycle 14 tapes
bumpsize 20 Mb 
maxdumpsize -1

>Somethings wrong with your changer. :-) Any more information?

here is my amanda.conf settings:
tapedev "/dev/null"
tpchanger "chg-multi"
#changerfile "/usr/local/etc/amanda/DailySet1/changer" (commented)
#changerfile "/usr/local/etc/amanda/DailySet1/changer-status" (commented)
changerfile "/usr/local/etc/amanda/DailySet1/changer.conf" 
#changerdev "/dev/null" (commented)

>Amanda clients running? Firewalls? Any messages inside the logs?

No errors in the logfiles. The Amanda client is running (checked with netstat
-auv and /etc/init.d/xinetd status). I didn't change the firewall; I think if
the firewall were blocking, the dump summary would show failed or missing.
However, the dumper stats have values. May I know whether, in this situation,
the backup was successful?

DUMP SUMMARY:
                                 DUMPER STATS              TAPER STATS
HOSTNAME DISK         L  ORIG-MB  OUT-MB  COMP%   MMM:SS   KB/s  MMM:SS  KB/s
-------------------- -- -------- ------- ------ -------- ------ ------- -----
domain1 -/lib/mysql   1     5782    1307   22.6   651:09   34.3     N/A   N/A
domain2 -/lib/mysql   1        0       0   10.0     0:01    1.1     N/A   N/A


Any help would be appreciated. 



-- 
View this message in context: 
http://www.nabble.com/amstatus%3A-no-estimate-and-disk-was-stranded-on-waitq-tf4108528.html#a11807711
Sent from the Amanda - Users mailing list archive at Nabble.com.



Re: amstatus: no estimate and disk was stranded on waitq

2007-07-25 Thread Marc Muehlfeld

fedora schrieb:

Hi guys. Hopefully I will get the answer as soon as possible :confused:


I'm sure you'll get answers once you learn how to quote
(http://www.netmeister.org/news/learn2quote.html).

Your last two mails I just skipped, because it seems you just wrote your
text somewhere inside the mail. It's no fun to search a 100-line mail for
the new lines.


Make it easy for the supporting people who provide free help.


Marc


--
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de


Re: amstatus: no estimate and disk was stranded on waitq

2007-07-25 Thread fedora

Hi guys. Hopefully I will get the answer as soon as possible :confused:


fedora wrote:
> 
> Hello,
> 
> fedora schrieb:
>> Questions:
>> 1) "No writable valid tape found". What does it mean?
> 
> There was no tape in your drive/changer or it was not writeable (write 
> protected, broken tape, no tape with valid label,...)
> 
> 
>> Actually I am using tapeless. I made HDD as virtual tape (tapetype
>> HARD-DISK). The tape was writable (drwxrwx---), I think it could be
>> broken tape. How do we know the tape is broken or not? Do I need to
>> delete the broken tape and do amlabel again if the tape broken?
>> 
> 
>> 2) Do I need to run amflush? I found out a folder in holding disk dated
>> 23
>> July 2007.
> 
> If the folder isn't empty, you have to run amflush, if you want to have
> your 
> data on tape.
> 
> 
>> If I run amflush, would it same integrity with actual files in client?
>> Why does it remains in holding disk? Is it because it can't  writing to
>> tape?
>>   
> 
>> 3) "can't switch to incremental dump". What does it mean? (causing failed
>> for the dump summary "domin11 -/lib/mysql 0 FAILED..." )
> 
> Amanda can't do a full dump on your tape, because of to less space. And 
> without having a full dump, Amanda can't do an incremental dump. This
> could 
> happen if you only use one tape and the estimated size is compressed (if
> you 
> use compression) bigger than your tape. Or you use more tapes and you
> don't 
> use splitting, ...
> 
> 
>> here is my amanda.conf settings:
>> runtapes 1
>> use 6 Mb (holdingdisk) of 1.1T HDD
>> I don't think the compression size is bigger than tape. Can u guys
>> suggest me what should I do? Run multiple tapes with splitting? I've no
>> idea how to do splitting. Pls advice.
>> 
> 
>> 4) "cannot overwrite active tape DailySet1*".  What does it mean?
> 
> Seams you have too less tapes configured for one dumpcycle. Dumpcycle is
> the 
> number of of days in a backup cycle (e.g. one week). Amanda tries to do a
> full 
> backup at least this often (e.g. once per week). When you have e.g. 
> runspercycle=7 (how often you let amanda do amdump in dumpcycle) and a 
> tapecycle of 6, then amanda had to overwrite the first tape again to do
> the 
> backup on the last day. And this is what the message means. You can't 
> overwrite the tape, because you could never do a restore then, because of
> the 
> overwritten first tape. Amanda prevent you doing that.
> 
> # man amanda.conf
> - > tapecycle
> This is calculated by multiplying the number of amdump runs per dump cycle 
> (runspercycle parameter) times the number of tapes used per run (runtapes 
> parameter). Typically two to four times this calculated number of tapes
> are in 
> rotation.
> 
> 
>> here is my amanda.conf settings:
>> dumpcycle 14 days
>> #runspercycle 20 (commented)
>> tapecycle 14 tapes
>> bumpsize 20 Mb 
>> maxdumpsize -1
>> 
> 
> 
>> 5) "taper: changer problem: 11 file:/backup/amanda/dumps/tape11". What
>> does
>> it mean?
> 
> Somethings wrong with your changer. :-) Any more information?
> 
> 
>> here is my amanda.conf settings:
>> tapedev "/dev/null"
>> tpchanger "chg-multi"
>> #changerfile "/usr/local/etc/amanda/DailySet1/changer" (commented)
>> #changerfile "/usr/local/etc/amanda/DailySet1/changer-status" (commented)
>> changerfile "/usr/local/etc/amanda/DailySet1/changer.conf" 
>> #changerdev "/dev/null" (commented)
>> 
> 
>> 7) Lastly, all my client servers got N/A result for the taper stats
>> accept
>> domain11. Why?
> Amanda clients running? Firewalls? Any messages inside the logs?
> 
> 
>> No error in logfiles. Amanda client is running checking by netstat -auv
>> and /etc/initd/xinetd status. I didn't change the firewall. I think if
>> firewall is blocked I will be return failed or missing in dump summary.
>> However the dumper stats has value.  May I know in this situation is the
>> backup successfull?
>> 
>> DUMP SUMMARY:
>>DUMPER STATS   TAPER
>> STATS 
>> HOSTNAME DISKL ORIG-MB  OUT-MB  COMP%  MMM:SS   KB/s MMM:SS  
>> KB/s
>> -- -
>> -
>> doamin1 -/lib/mysql 157821307   22.6  651:09   34.3   N/AN/A 
>> domain2 -/lib/mysql 1   0   0   10.00:011.1   N/AN/A 
>> 
> 
> Any helps would be appreciated. 
> 
> 
> 




Re: amstatus: no estimate and disk was stranded on waitq

2007-07-24 Thread fedora

Hello,

fedora schrieb:
> Questions:
> 1) "No writable valid tape found". What does it mean?

There was no tape in your drive/changer or it was not writeable (write 
protected, broken tape, no tape with valid label,...)


> Actually I am using tapeless. I made HDD as virtual tape (tapetype
> HARD-DISK). The tape was writable (drwxrwx---), I think it could be broken
> tape. How do we know the tape is broken or not? Do I need to delete the
> broken tape and do amlabel again if the tape broken?
> 

> 2) Do I need to run amflush? I found out a folder in holding disk dated 23
> July 2007.

If the folder isn't empty, you have to run amflush if you want to get your
data onto tape.


> If I run amflush, would it same integrity with actual files in client? Why
> does it remains in holding disk? Is it because it can't  writing to tape?
>   

> 3) "can't switch to incremental dump". What does it mean? (causing failed
> for the dump summary "domin11 -/lib/mysql 0 FAILED..." )

Amanda can't do a full dump to your tape because there is too little space.
And without a full dump, Amanda can't do an incremental dump. This can
happen if you use only one tape and the estimated size (compressed, if you
use compression) is bigger than your tape. Or you use more tapes but don't
use splitting, ...


> here is my amanda.conf settings:
> runtapes 1
> use 6 Mb (holdingdisk) of 1.1T HDD
> I don't think the compression size is bigger than tape. Can u guys suggest
> me what should I do? Run multiple tapes with splitting? I've no idea how
> to do splitting. Pls advice.
> 

> 4) "cannot overwrite active tape DailySet1*".  What does it mean?

Seems you have too few tapes configured for one dumpcycle. The dumpcycle is
the number of days in a backup cycle (e.g. one week). Amanda tries to do a
full backup at least that often (e.g. once per week). If you have, e.g.,
runspercycle=7 (how often you run amdump within a dumpcycle) and a
tapecycle of 6, Amanda would have to overwrite the first tape again to do
the backup on the last day. That is what the message means: you can't
overwrite the tape, because then you could never do a restore, since the
first tape would already have been overwritten. Amanda prevents you from
doing that.

# man amanda.conf
-> tapecycle
This is calculated by multiplying the number of amdump runs per dump cycle
(runspercycle parameter) times the number of tapes used per run (runtapes
parameter). Typically two to four times this calculated number of tapes
are in rotation.


> here is my amanda.conf settings:
> dumpcycle 14 days
> #runspercycle 20 (commented)
> tapecycle 14 tapes
> bumpsize 20 Mb 
> maxdumpsize -1
> 


> 5) "taper: changer problem: 11 file:/backup/amanda/dumps/tape11". What
> does
> it mean?

Something's wrong with your changer. :-) Any more information?


> here is my amanda.conf settings:
> tapedev "/dev/null"
> tpchanger "chg-multi"
> #changerfile "/usr/local/etc/amanda/DailySet1/changer" (commented)
> #changerfile "/usr/local/etc/amanda/DailySet1/changer-status" (commented)
> changerfile "/usr/local/etc/amanda/DailySet1/changer.conf" (commented)
> #changerdev "/dev/null" (commented)
> 
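For reference, chg-multi reads its slot list from the file named by changerfile. The sketch below is an assumed layout only: the keys and slot paths are guesses extrapolated from the `file:/backup/amanda/dumps/tape11` path in the error, not taken from the poster's real changer.conf.

```
# changer.conf for chg-multi (illustrative sketch, not the poster's file)
multieject 0
needeject 0
ejectdelay 0
statefile /usr/local/etc/amanda/DailySet1/changer-status
firstslot 1
lastslot 11
slot 1 file:/backup/amanda/dumps/tape1
slot 2 file:/backup/amanda/dumps/tape2
# ... one "slot N device" line per virtual tape ...
slot 11 file:/backup/amanda/dumps/tape11
```

A missing or mistyped slot line, or a statefile the Amanda user cannot write, can produce a "changer problem" message for that slot.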

> 7) Lastly, all my client servers got N/A results for the taper stats except
> domain11. Why?
Amanda clients running? Firewalls? Any messages inside the logs?


> No error in logfiles. Amanda client is running checking by netstat -auv
> and /etc/initd/xinetd status. I didn't change the firewall. I think if
> firewall is blocked I will be return failed or missing in dump summary.
> However the dumper stats has value.  May I know in this situation is the
> backup successfull?
> 
> DUMP SUMMARY:
>DUMPER STATS   TAPER
> STATS 
> HOSTNAME DISKL ORIG-MB  OUT-MB  COMP%  MMM:SS   KB/s MMM:SS  
> KB/s
> -- -
> -
> cancer.lilos -/lib/mysql 157821307   22.6  651:09   34.3   N/A   
> N/A 
> cn1.emospy.c -/lib/mysql 1   0   0   10.00:011.1   N/A   
> N/A 
> 

Any help would be appreciated. 





Re: amstatus: no estimate and disk was stranded on waitq

2007-07-24 Thread fedora


Marc Muehlfeld wrote:
> 
> Hello,
> 
> fedora schrieb:
>> Questions:
>> 1) "No writable valid tape found". What does it mean?
> 
> 
>> There was no tape in your drive/changer or it was not writeable (write 
>> protected, broken tape, no tape with valid label,...)
>> 
> 
>> 2) Do I need to run amflush? I found out a folder in holding disk dated
>> 23
>> July 2007.
> 
> If the folder isn't empty, you have to run amflush, if you want to have
> your 
> data on tape.
> 
> 
> 
>> 3) "can't switch to incremental dump". What does it mean? (causing failed
>> for the dump summary "domin11 -/lib/mysql 0 FAILED..." )
> 
> Amanda can't do a full dump on your tape, because of to less space. And 
> without having a full dump, Amanda can't do an incremental dump. This
> could 
> happen if you only use one tape and the estimated size is compressed (if
> you 
> use compression) bigger than your tape. Or you use more tapes and you
> don't 
> use splitting, ...
> 
> 
> 
>> 4) "cannot overwrite active tape DailySet1*".  What does it mean?
> 
> Seams you have too less tapes configured for one dumpcycle. Dumpcycle is
> the 
> number of of days in a backup cycle (e.g. one week). Amanda tries to do a
> full 
> backup at least this often (e.g. once per week). When you have e.g. 
> runspercycle=7 (how often you let amanda do amdump in dumpcycle) and a 
> tapecycle of 6, then amanda had to overwrite the first tape again to do
> the 
> backup on the last day. And this is what the message means. You can't 
> overwrite the tape, because you could never do a restore then, because of
> the 
> overwritten first tape. Amanda prevent you doing that.
> 
> # man amanda.conf
> - > tapecycle
> This is calculated by multiplying the number of amdump runs per dump cycle 
> (runspercycle parameter) times the number of tapes used per run (runtapes 
> parameter). Typically two to four times this calculated number of tapes
> are in 
> rotation.
> 
> 
> 
>> 5) "taper: changer problem: 11 file:/backup/amanda/dumps/tape11". What
>> does
>> it mean?
> 
> Somethings wrong with your changer. :-) Any more information?
> 
> 
> 
>> 6) For  big/small estimate, What is the different between them? As I know
>> level 0 is full backup, level 1 is incremental and what about level 2?
> 
> I'm not sure about the meaning of big/small estimate.
> Level 0 is full
> Level 1 is incremental incremental since last level 0
> Level 2 is incremental incremental since last level 1
> ...
> 
> 
> 
>> 7) Lastly, all my client servers got N/A result for the taper stats
>> accept
>> domain11. Why?
> Amanda clients running? Firewalls? Any messages inside the logs?
> 
> 
> 
> Marc
> 
> 
> -- 
> Marc Muehlfeld (Leitung Systemadministration)
> Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
> Lochhamer Str. 29 - D-82152 Martinsried
> Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
> http://www.medizinische-genetik.de
> 
> 




Re: amstatus: no estimate and disk was stranded on waitq

2007-07-24 Thread Jon LaBadie
On Tue, Jul 24, 2007 at 08:46:56AM +0200, Marc Muehlfeld wrote:
> Hello,
> 
> fedora schrieb:
> >Questions:
> >1) "No writable valid tape found". What does it mean?
> 
> There was no tape in your drive/changer or it was not writeable (write 
> protected, broken tape, no tape with valid label,...)
> 

Just an addition: your tapecycle IIRC was 14. Another meaning of "no valid
tape" is: none that has NOT been used in the last 14 "successful tapings"
(a tapecycle) of amdump. So suppose your tapecycle is 14 and you really
have exactly 14 tapes in rotation. Then one of them goes bad for any
reason whatsoever. Now you cannot tape anything, since all 13 remaining
tapes were used in the last 14 tapings.
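Jon's rule can be sketched as a toy check (hypothetical helper names, not Amanda's actual code):

```python
def reusable(label, recent_labels, tapecycle):
    """A tape may be overwritten only if it was NOT used in the last
    `tapecycle` successful tapings."""
    return label not in recent_labels[-tapecycle:]

# 14 tapes in rotation and tapecycle 14: every tape was used in the
# last 14 tapings, so nothing may be overwritten.
recent = ["DailySet1-%02d" % n for n in range(1, 15)]
print(reusable("DailySet1-01", recent, 14))  # False: still "active"
# With a 15th tape in rotation, the oldest one falls outside the window:
print(reusable("DailySet1-01", recent + ["DailySet1-15"], 14))  # True
```

This is why adding even one spare tape beyond tapecycle gets a stuck rotation moving again.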


> 
> 
> >3) "can't switch to incremental dump". What does it mean? (causing failed
> >for the dump summary "domin11 -/lib/mysql 0 FAILED..." )
> 
> Amanda can't do a full dump on your tape, because of to less space. And 
> without having a full dump, Amanda can't do an incremental dump. This could 
> happen if you only use one tape and the estimated size is compressed (if 
> you use compression) bigger than your tape. Or you use more tapes and you 
> don't use splitting, ...
> 

Without being able to tape, the entire full dump of that DLE must go to the
holding disk.  But if there is insufficient space on the holding disk, or
if the reserve parameter is set high to ensure incrementals get saved on
the holding disk, then there is no place to put a level 0 of that DLE,
so it fails.  It can't switch to incremental because there is no other
level 0 on which to base the incremental.
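The interaction with `reserve` can be sketched numerically (an illustrative function, not the planner's real logic):

```python
def level0_fits(dump_mb, holding_free_mb, reserve_pct):
    """With no writable tape, a level 0 must fit in the part of the
    holding disk NOT reserved for incrementals (`reserve` percent)."""
    usable = holding_free_mb * (100 - reserve_pct) / 100.0
    return dump_mb <= usable

# The poster's "use 6 Mb" holding disk can never hold a ~5.7 GB level 0:
print(level0_fits(5782, 6, 0))        # False
# Even a large holding disk refuses fulls when reserve is 100:
print(level0_fits(5782, 20000, 100))  # False
print(level0_fits(5782, 20000, 0))    # True
```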


> 
> >4) "cannot overwrite active tape DailySet1*".  What does it mean?
> 
> Seams you have too less tapes configured for one dumpcycle. Dumpcycle is 
> the number of of days in a backup cycle (e.g. one week). Amanda tries to do 
> a full backup at least this often (e.g. once per week). When you have e.g. 
> runspercycle=7 (how often you let amanda do amdump in dumpcycle) and a 
> tapecycle of 6, then amanda had to overwrite the first tape again to do the 
> backup on the last day. And this is what the message means. You can't 
> overwrite the tape, because you could never do a restore then, because of 
> the overwritten first tape. Amanda prevent you doing that.
> 
> # man amanda.conf
> - > tapecycle
> This is calculated by multiplying the number of amdump runs per dump cycle 
> (runspercycle parameter) times the number of tapes used per run (runtapes 
> parameter). Typically two to four times this calculated number of tapes are 
  ^^
> in rotation.

Your parameters give you exactly one times this calculated number, with no
spare tapes at all.
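Concretely, with the settings quoted in this thread (assuming one amdump run per day over the 14-day dumpcycle, since runspercycle is commented out):

```python
runspercycle = 14   # assumed: one amdump per day over the 14-day dumpcycle
runtapes = 1        # from the poster's amanda.conf
tapecycle = 14      # from the poster's amanda.conf

per_cycle = runspercycle * runtapes
print(per_cycle)                      # 14 tapes consumed per cycle
print(2 * per_cycle, 4 * per_cycle)   # 28 56: recommended rotation size
print(tapecycle > per_cycle)          # False: no spare tapes
```

With tapecycle equal to per_cycle, a single bad tape leaves nothing valid to overwrite.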



-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: amstatus: no estimate and disk was stranded on waitq

2007-07-24 Thread Marc Muehlfeld

Hello,

fedora schrieb:

Questions:
1) "No writable valid tape found". What does it mean?


There was no tape in your drive/changer or it was not writeable (write 
protected, broken tape, no tape with valid label,...)





2) Do I need to run amflush? I found out a folder in holding disk dated 23
July 2007.


If the folder isn't empty, you have to run amflush if you want to get your
data onto tape.





3) "can't switch to incremental dump". What does it mean? (causing failed
for the dump summary "domin11 -/lib/mysql 0 FAILED..." )


Amanda can't do a full dump to your tape because there is too little space.
And without a full dump, Amanda can't do an incremental dump. This can
happen if you use only one tape and the estimated size (compressed, if you
use compression) is bigger than your tape. Or you use more tapes but don't
use splitting, ...





4) "cannot overwrite active tape DailySet1*".  What does it mean?


Seems you have too few tapes configured for one dumpcycle. The dumpcycle is
the number of days in a backup cycle (e.g. one week). Amanda tries to do a
full backup at least that often (e.g. once per week). If you have, e.g.,
runspercycle=7 (how often you run amdump within a dumpcycle) and a
tapecycle of 6, Amanda would have to overwrite the first tape again to do
the backup on the last day. That is what the message means: you can't
overwrite the tape, because then you could never do a restore, since the
first tape would already have been overwritten. Amanda prevents you from
doing that.


# man amanda.conf
-> tapecycle
This is calculated by multiplying the number of amdump runs per dump cycle
(runspercycle parameter) times the number of tapes used per run (runtapes
parameter). Typically two to four times this calculated number of tapes
are in rotation.





5) "taper: changer problem: 11 file:/backup/amanda/dumps/tape11". What does
it mean?


Something's wrong with your changer. :-) Any more information?




6) For  big/small estimate, What is the different between them? As I know
level 0 is full backup, level 1 is incremental and what about level 2?


I'm not sure about the meaning of big/small estimate.
Level 0 is a full dump.
Level 1 is incremental: changes since the last level 0.
Level 2 is incremental: changes since the last level 1.
...
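The level rule can be illustrated with a toy example (the dates and helper are made up for illustration):

```python
from datetime import date

# Hypothetical history for one DLE: when each dump level last ran.
last_run = {0: date(2007, 7, 14), 1: date(2007, 7, 20)}

def included(mtime, level, last_run):
    """Level 0 backs up everything; a level N >= 1 backs up files
    modified since the most recent dump at level N - 1."""
    return level == 0 or mtime > last_run[level - 1]

# A level 2 dump only picks up changes since the July 20 level 1:
print(included(date(2007, 7, 15), 2, last_run))  # False
print(included(date(2007, 7, 22), 2, last_run))  # True
```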




7) Lastly, all my client servers got N/A results for the taper stats except
domain11. Why?

Amanda clients running? Firewalls? Any messages inside the logs?



Marc


--
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de


Re: amstatus: no estimate and disk was stranded on waitq

2007-07-23 Thread fedora
20070717213001'
  taper: cannot overwrite active tape DailySet1-04
  taper: slot 5: read label `DailySet1-05', date `20070718050529'
  taper: cannot overwrite active tape DailySet1-05
  taper: slot 6: read label `DailySet1-06', date `20070718213001'
  taper: cannot overwrite active tape DailySet1-06
  taper: slot 7: read label `DailySet1-07', date `20070719031513'
  taper: cannot overwrite active tape DailySet1-07
  taper: slot 8: read label `DailySet1-08', date `20070719213002'
  taper: cannot overwrite active tape DailySet1-08
  taper: slot 9: read label `DailySet1-09', date `20070720213001'
  taper: cannot overwrite active tape DailySet1-09
  taper: slot 10: read label `DailySet1-10', date `20070721213001'
  taper: cannot overwrite active tape DailySet1-10
  taper: slot 11: read label `DailySet1-11', date `20070722213001'
  taper: cannot overwrite active tape DailySet1-11
  taper: changer problem: 11 file:/backup/amanda/dumps/tape11
  big estimate: domain1.com /var/lib/mysql 2    est: 569M   out: 459M
  big estimate: domain2.com /var/lib/mysql 1    est: 0M     out: 0M
  small estimate: domain5.com /var/lib/mysql 1  est: 466M   out: 596M


DUMP SUMMARY:
                                 DUMPER STATS              TAPER STATS
HOSTNAME DISK         L  ORIG-MB  OUT-MB  COMP%   MMM:SS   KB/s  MMM:SS  KB/s
-------------------- -- -------- ------- ------ -------- ------ ------- -----
domain4 -/lib/mysql   1     5782    1307   22.6   651:09   34.3     N/A   N/A

.. output truncated 

domain11 -/lib/mysql 0 FAILED 

(brought to you by Amanda version 2.5.1p3)

Questions:
1) "No writable valid tape found". What does it mean?
2) Do I need to run amflush? I found out a folder in holding disk dated 23
July 2007.
3) "can't switch to incremental dump". What does it mean? (causing failed
for the dump summary "domin11 -/lib/mysql 0 FAILED..." )
4) "cannot overwrite active tape DailySet1*".  What does it mean?
5) "taper: changer problem: 11 file:/backup/amanda/dumps/tape11". What does
it mean?
6) For the big/small estimates, what is the difference between them? As I
know, level 0 is a full backup and level 1 is incremental, but what about
level 2?
7) Lastly, all my client servers got N/A results for the taper stats except
domain11. Why?

Can you explain all of these questions? I had been backing up for a month,
but a few days ago I started encountering these problems. I would very much
appreciate your help.

fedora wrote:
> 
> Dear Marc Muehlfeld,
> 
> After I increased etimeout (300s to 1800s), I found no "planner: [hmm,
> disk was stranded on waitq]" error on amstatus. But now I am having bigger
> problem. All my servers return "no estimate" with level 0. What should I
> do guys??
> 
> 
> fedora wrote:
>> 
>> hi guys,
>> I am having problem with my amanda status. Here are the details:
>> 
>> [EMAIL PROTECTED] ~]$ amstatus DailySet1
>> Using /usr/local/etc/amanda/DailySet1/amdump.1 from Thu Jul 19 03:15:13
>> MYT 2007
>> 
>> domain1.com:/var/lib/mysql   1  549m flushed (3:15:38)
>> domain1.com:/var/lib/mysql   1 1301m finished (8:26:06)
>> domain2.com:/var/lib/mysql00m finished (6:43:04)
>> domain3.com:/var/lib/mysql 0   23m finished (6:55:13)
>> domain4.com:/var/lib/mysql   0 planner: [hmm, disk was stranded on waitq]
>> domain5.com:/var/lib/mysql  0   no estimate
>> domain6.com:/var/lib/mysql  0   no estimate
>> 
>> This is the first time I received this error. Amanda has finished backup
>> but I did not receive the report via email. Anyone can help me to explain
>> on this (no estimate and disk was stranded on waitq)?? 
>> 
>> 
>> 
> 
> 




Re: amstatus: no estimate and disk was stranded on waitq

2007-07-20 Thread fedora

Dear Marc Muehlfeld,

After I increased etimeout (300s to 1800s), the "planner: [hmm, disk was
stranded on waitq]" error no longer appears in amstatus. But now I have a
bigger problem: all my servers return "no estimate" at level 0. What should
I do?


fedora wrote:
> 
> hi guys,
> I am having problem with my amanda status. Here are the details:
> 
> [EMAIL PROTECTED] ~]$ amstatus DailySet1
> Using /usr/local/etc/amanda/DailySet1/amdump.1 from Thu Jul 19 03:15:13
> MYT 2007
> 
> domain1.com:/var/lib/mysql   1  549m flushed (3:15:38)
> domain1.com:/var/lib/mysql   1 1301m finished (8:26:06)
> domain2.com:/var/lib/mysql00m finished (6:43:04)
> domain3.com:/var/lib/mysql 0   23m finished (6:55:13)
> domain4.com:/var/lib/mysql   0 planner: [hmm, disk was stranded on waitq]
> domain5.com:/var/lib/mysql  0   no estimate
> domain6.com:/var/lib/mysql  0   no estimate
> 
> This is the first time I received this error. Amanda has finished backup
> but I did not receive the report via email. Anyone can help me to explain
> on this (no estimate and disk was stranded on waitq)?? 
> 
> 
> 




Re: amstatus: no estimate and disk was stranded on waitq

2007-07-19 Thread Marc Muehlfeld

Hi,

fedora schrieb:

> domain4.com:/var/lib/mysql   0 planner: [hmm, disk was stranded on waitq]

Try increasing etimeout in amanda.conf if this appears again on the next
backup. If not, maybe there was just some load on that machine and sendsize
could not finish within the time allowed by etimeout.
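In amanda.conf terms, the change the original poster later reported making looks like this (the 1800s value comes from this thread; tune it to your slowest client's estimate time):

```
# amanda.conf: give sendsize more time to compute estimates
etimeout 1800    # seconds per disk for the estimate; the default is 300
```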



Marc


--
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de


amstatus: no estimate and disk was stranded on waitq

2007-07-19 Thread fedora

Hi guys,
I am having a problem with my Amanda status. Here are the details:

[EMAIL PROTECTED] ~]$ amstatus DailySet1
Using /usr/local/etc/amanda/DailySet1/amdump.1 from Thu Jul 19 03:15:13 MYT
2007

domain1.com:/var/lib/mysql   1  549m flushed (3:15:38)
domain1.com:/var/lib/mysql   1 1301m finished (8:26:06)
domain2.com:/var/lib/mysql00m finished (6:43:04)
domain3.com:/var/lib/mysql 0   23m finished (6:55:13)
domain4.com:/var/lib/mysql   0 planner: [hmm, disk was stranded on waitq]
domain5.com:/var/lib/mysql  0   no estimate
domain6.com:/var/lib/mysql  0   no estimate

This is the first time I have received this error. Amanda finished the
backup, but I did not receive the report via email. Can anyone explain
these two messages (no estimate, and disk was stranded on waitq)?





Re: autoflush and amstatus

2007-05-31 Thread Marc Muehlfeld
Hi,

James Brown schrieb:
> I have 500GB of data from a single dump to flush that
> barely overflows a single tape.  I'd prefer not to flush and waste all
> that space on the second tape. What I have been doing in these cases is to
> fill the rest of our terabyte holdingdisk, and then flush to two tapes.

You really prefer to save some dollars/euros/... and live with the risk of
losing 500 GB of *already backed up* data? When the disk is gone, your
"backup" on the holding disk is gone too. Or if the second tape is later
unreadable, you lose part of the old and the new backup. And you may then
lose two days of data if both were on that one tape.

Our decision is: better to spend some money on extra tapes than to lose
days of data we could have had on tape. The stored data is our business.

Regards
Marc


-- 
Marc Muehlfeld (Leitung Systemadministration)
Zentrum fuer Humangenetik und Laboratoriumsmedizin Dr. Klein und Dr. Rost
Lochhamer Str. 29 - D-82152 Martinsried
Telefon: +49(0)89/895578-0 - Fax: +49(0)89/895578-78
http://www.medizinische-genetik.de





Re: autoflush and amstatus

2007-05-30 Thread Jean-Louis Martineau

I still don't understand what you want. You should be a lot more
descriptive about your setup and how you expect Amanda to behave.

How many DLEs? What total size?

You said you ask only for full dumps? How did you configure Amanda? At
what interval do you want a full dump? Every day? Every week? ...

autoflush will not change the schedule Amanda generates, unless you run
amdump more often.

Attach your config file and all useful information.

Remember, amdump will always ask for an estimate of a level 0 dump, but it
will not necessarily do it.

autoflush does exactly the right thing: it flushes what is already on the
holding disk and it also dumps all DLEs.

What is strange is why you wouldn't want a backup of all DLEs.

Jean-Louis

James Brown wrote:

I have 500GB of data from a single dump to flush that barely overflows a
single tape. I'd prefer not to flush and waste all that space on the second
tape. What I have been doing in these cases is to fill the rest of our
terabyte holding disk, and then flush to two tapes. It would be nice if
autoflush could take away a step for me.

BTW, we have reserve set to 0.


JB

--- Jean-Louis Martineau <[EMAIL PROTECTED]> wrote:

> Why run amdump if you don't want new dump? Why not use amflush?
>
> If the answer is: Because amdump only dump a few dle.
> Then you should set the "reserve" to a value below 100.
>
> With reserve==100, amdump record full dump only once they are put on
> tape, that's why it retry it.
> With reserve < 100, amdump record full dump once they are on holding disk.
>
> Jean-Louis
>
> James Brown wrote:
> > --- Jean-Louis Martineau <[EMAIL PROTECTED]> wrote:
> >
> > > James Brown wrote:
> > > > Hi,
> > > >
> > > > After enabling 'autoflush' in amanda.conf, and starting a new
> > > > amdump, I saw two entries for a backup job that was already on
> > > > holding disk. One was waiting to be flushed, the other was getting
> > > > estimates. (I had to kill the job since I didn't want to run the
> > > > 500GB backup again!)
> > > >
> > > > With autoflush enabled, will Amanda attempt to run another backup
> > > > of the DLE that needs to be flushed?
> > >
> > > Yes, amdump always do a dump of all dle.
> > > autoflush allow amdump to also flush to tape the dump that are
> > > already on holding disk.
> > > autoflush doesn't change what will be dumped.
> > >
> > > Jean-Louis
> >
> > This is a problem for me. The particular configuration I am using does
> > FULL backups only. In this case, I don't need the extra backup and I
> > can't remove the job from the disklist since Amanda won't flush
> > otherwise.
> >
> > -JB





Re: autoflush and amstatus

2007-05-30 Thread Jon LaBadie
On Wed, May 30, 2007 at 11:28:22AM -0700, James Brown wrote:
> 
> I have 500GB of data from a single dump to flush that
> barely overflows a single tape.  I'd prefer not to
> flush and waste all that space on the second tape. 

Set runtapes to 1 and do an amflush.

It will only fill one tape and the rest will be left
on the holding disk for a later amflush or on your
autoflush/amdump.
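In amanda.conf terms, that is simply (DailySet1 being the config name used elsewhere in this thread):

```
# amanda.conf: write at most one tape per run/flush
runtapes 1
```

Then run `amflush DailySet1`; anything that does not fit on that one tape stays on the holding disk for a later amflush or the next autoflush/amdump.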

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: autoflush and amstatus

2007-05-30 Thread James Brown

I have 500GB of data from a single dump to flush that
barely overflows a single tape.  I'd prefer not to
flush and waste all that space on the second tape. 
What I have been doing in these cases is to fill the
rest of our terabyte holdingdisk, and then flush to
two tapes.  It would be nice if the autoflush could
take away a step for me.

BTW, we have reserve set to 0.  

JB

--- Jean-Louis Martineau <[EMAIL PROTECTED]> wrote:

> Why run amdump if you don't want new dump? Why not
> use amflush?
> 
> If the answer is: Because amdump only dump a few
> dle.
> Then you should set the "reserve" to a value below
> 100.
> 
> With reserve==100, amdump record full dump only once
> they are put on 
> tape, that's why it retry it.
> With reserve < 100, amdump record full dump once
> they are on holding disk.
> 
> Jean-Louis
> 
> James Brown wrote:
> > --- Jean-Louis Martineau <[EMAIL PROTECTED]>
> wrote:
> >
> >   
> >> James Brown wrote:
> >> 
> >>> Hi,
> >>>
> >>> After enabling 'autoflush' in amanda.conf, and
> >>> starting new amdump, I saw two entries for a
> >>>   
> >> backup
> >> 
> >>> job that was already on holding disk.  One was
> >>>   
> >> waiting
> >> 
> >>> to be flushed, the other was getting estimates. 
> >>>   
> >> (I
> >> 
> >>> had to kill the job
> >>> since I didn't want to run the 500GB backup
> >>>   
> >> again!).
> >> 
> >>> With autoflush enabled, will Amanda attempt to
> run
> >>> another backup of the DLE that needs to be
> >>>   
> >> flushed?
> >> 
> >>>   
> >>>   
> >> Yes, amdump always do a dump of all dle.
> >> autoflush allow amdump to also flush to tape the
> >> dump that are already 
> >> on holding disk.
> >> autoflush doesn't change what will be dumped.
> >>
> >>
> >> Jean-Louis
> >> 
> >
> >
> > This is a problem for me.  The particular
> > configuration I am using does FULL backups only.
> > In this case, I don't need the extra backup and I
> > can't remove the job from the disklist since
> Amanda
> > won't flush otherwise.
> >
> > -JB
> >
> >
> >  
>

> > Park yourself in front of a world of choices in
> alternative vehicles. Visit the Yahoo! Auto Green
> Center.
> > http://autos.yahoo.com/green_center/ 
> >   
> 
> 



  
Shape
 Yahoo! in your own image.  Join our Network Research Panel today!   
http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 




Re: autoflush and amstatus

2007-05-29 Thread Jon LaBadie
On Tue, May 29, 2007 at 07:36:20AM -0700, James Brown wrote:
> 
> This is a problem for me.  The particular
> configuration I am using does FULL backups only.
> In this case, I don't need the extra backup and I
> can't remove the job from the disklist since Amanda
> won't flush otherwise.
> 

What is the problem with running amflush instead of amdump ???


-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: autoflush and amstatus

2007-05-29 Thread James Brown

--- Jean-Louis Martineau <[EMAIL PROTECTED]> wrote:

> James Brown wrote:
> > Hi,
> >
> > After enabling 'autoflush' in amanda.conf, and
> > starting new amdump, I saw two entries for a
> backup
> > job that was already on holding disk.  One was
> waiting
> > to be flushed, the other was getting estimates. 
> (I
> > had to kill the job
> > since I didn't want to run the 500GB backup
> again!).
> >
> > With autoflush enabled, will Amanda attempt to run
> > another backup of the DLE that needs to be
> flushed?
> >   
> 
> Yes, amdump always does a dump of all DLEs.
> autoflush allows amdump to also flush to tape the
> dumps that are already 
> on the holding disk.
> autoflush doesn't change what will be dumped.
> 
> 
> Jean-Louis


This is a problem for me.  The particular
configuration I am using does FULL backups only.
In this case, I don't need the extra backup and I
can't remove the job from the disklist since Amanda
won't flush otherwise.

-JB


  



Re: autoflush and amstatus

2007-05-29 Thread Jean-Louis Martineau

Why run amdump if you don't want a new dump? Why not use amflush?

If the answer is: Because amdump only dumps a few DLEs,
then you should set "reserve" to a value below 100.

With reserve == 100, amdump records full dumps only once they are put on 
tape; that's why it retries them.

With reserve < 100, amdump records full dumps once they are on the holding disk.

Jean-Louis
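
For reference, the two settings discussed above can be combined; a minimal
amanda.conf fragment might look like this (the values are illustrative
assumptions, not recommendations; check the amanda.conf man page for your
version):

```
# amanda.conf -- illustrative fragment (values are examples only)
reserve 50        # below 100: full dumps are recorded once they reach
                  # the holding disk, so amdump will not redo them
autoflush yes     # let amdump also flush holding-disk dumps to tape
```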

James Brown wrote:

--- Jean-Louis Martineau <[EMAIL PROTECTED]> wrote:

  

James Brown wrote:


Hi,

After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a
  

backup


job that was already on holding disk.  One was
  

waiting

to be flushed, the other was getting estimates. 
  

(I


had to kill the job
since I didn't want to run the 500GB backup
  

again!).


With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be
  

flushed?

  
  

Yes, amdump always does a dump of all DLEs.
autoflush allows amdump to also flush to tape the
dumps that are already 
on the holding disk.

autoflush doesn't change what will be dumped.


Jean-Louis




This is a problem for me.  The particular
configuration I am using does FULL backups only.
In this case, I don't need the extra backup and I
can't remove the job from the disklist since Amanda
won't flush otherwise.

-JB


  

  




Re: autoflush and amstatus

2007-05-28 Thread Bruce Thompson


On May 28, 2007, at 11:36 AM, James Brown wrote:



Hi,

After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a backup
job that was already on holding disk.  One was waiting
to be flushed, the other was getting estimates.  (I
had to kill the job
since I didn't want to run the 500GB backup again!).

With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be flushed?

I am currently running 2.5.1p2.

Thanks,
JB


The other piece you probably want to know is that even though it's  
dumping the DLE that is being flushed, the dump will be an  
incremental, not another full dump!


Cheers,
Bruce.




Re: autoflush and amstatus

2007-05-28 Thread Toomas Aas

E, 28 mai   2007 kirjutas James Brown <[EMAIL PROTECTED]>:


After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a backup
job that was already on holding disk.  One was waiting
to be flushed, the other was getting estimates.  (I
had to kill the job
since I didn't want to run the 500GB backup again!).

With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be flushed?


In my experience, yes. 'Autoflush' just means that, as part of the  
amdump run, Amanda also tries to flush the contents of the holding disk to  
tape. If an earlier dump of a particular DLE is on the holding disk  
and a new amdump is run, that doesn't mean that this particular DLE  
doesn't get dumped. Why should it? Do you not want to have an up-to-date  
backup of this DLE? At least on the holding disk?



I am currently running 2.5.1p2.


Me too.

--
Toomas Aas


Re: autoflush and amstatus

2007-05-28 Thread Jean-Louis Martineau

James Brown wrote:

Hi,

After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a backup
job that was already on holding disk.  One was waiting
to be flushed, the other was getting estimates.  (I
had to kill the job
since I didn't want to run the 500GB backup again!).

With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be flushed?
  


Yes, amdump always does a dump of all DLEs.
autoflush allows amdump to also flush to tape the dumps that are already 
on the holding disk.

autoflush doesn't change what will be dumped.


Jean-Louis


autoflush and amstatus

2007-05-28 Thread James Brown

Hi,

After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a backup
job that was already on holding disk.  One was waiting
to be flushed, the other was getting estimates.  (I
had to kill the job
since I didn't want to run the 500GB backup again!).

With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be flushed?

I am currently running 2.5.1p2.

Thanks,
JB



  




Re: Amanda-2.5.2-20070523 amstatus

2007-05-28 Thread Jean-Louis Martineau

Try the attached patch.

Jean-Louis

McGraw, Robert P. wrote:


 

 


When I run "amstatus --config daily --date":

 


zorn->[13] > amstatus --config daily --date

Using /var/amanda/daily/amdump from Fri May 25 07:51:23 EDT 2007

 

20070525075123 bers:/   0  4649m finished (9:20:41)

20070525075123 bessel:/ 0  4621m finished (9:28:22)

20070525075123 bohr:/   0  4967m finished (9:13:19)


  :

taper writing, tapeq: 0

network free kps:   2090751

holding space   :  8733m (  2.13%)

chunker1 busy   :  0:00:00  (  0.00%)

chunker2 busy   :  0:00:00  (  0.00%)

chunker3 busy   :  0:00:00  (  0.00%)

chunker4 busy   :  0:00:00  (  0.00%)

chunker5 busy   :  0:00:00  (  0.00%)

chunker6 busy   :  0:00:00  (  0.00%)

chunker7 busy   :  0:00:00  (  0.00%)

chunker8 busy   :  0:00:00  (  0.00%)

chunker9 busy   :  0:00:00  (  0.00%)

 dumper0 busy   :  7:05:09  ( 87.98%)

 dumper1 busy   :  6:16:22  ( 77.88%)

 dumper2 busy   :  6:47:01  ( 84.22%)

 

 


My chunkers show zero, but I have chunkers running.

 


My build is

 

 


zorn->[15] > amadmin daily version

build: VERSION="Amanda-2.5.2-20070523"

   BUILT_DATE="Thu May 24 10:07:47 EDT 2007"

   BUILT_MACH="SunOS zorn.math.purdue.edu 5.10 Generic_118833-03 
sun4u sparc SUNW,Sun-Fire-280R"


 

 


Robert

 

  

 



 


_

Robert P. McGraw, Jr.

Manager, Computer System EMAIL: [EMAIL PROTECTED]

Purdue University ROOM: MATH-807

Department of MathematicsPHONE: (765) 494-6055

150 N. University Street   FAX: (419) 821-0540

West Lafayette, IN 47907-2067   

 

 



diff -u -r --show-c-function --new-file --exclude-from=/home/martinea/src.orig/amanda.diff --ignore-matching-lines='$Id:' amanda-2.5.2/server-src/amstatus.pl.in amanda-2.5.2.amstatus.chunker/server-src/amstatus.pl.in
--- amanda-2.5.2/server-src/amstatus.pl.in	2007-05-23 08:04:53.0 -0400
+++ amanda-2.5.2.amstatus.chunker/server-src/amstatus.pl.in	2007-05-28 07:59:31.0 -0400
@@ -407,7 +407,7 @@ while() {
 		$serial=$4;
 		$serial{$serial}=$hostpart;
 		#$chunk_started{$hostpart}=1;
-		#$chunk_time{$hostpart}=$1;
+		$chunk_time{$hostpart}=$1;
 		#$chunk_finished{$hostpart}=0;
 		$holding_file{$hostpart}=$5;
 	}
@@ -421,7 +421,7 @@ while() {
 		$serial=$4;
 		$serial{$serial}=$hostpart;
 		#$chunk_started{$hostpart}=1;
-		#$chunk_time{$hostpart}=$1;
+		$chunk_time{$hostpart}=$1;
 		#$chunk_finished{$hostpart}=0;
 		$holding_file{$hostpart}=$5;
 	}
@@ -552,9 +552,9 @@ while() {
 		$hostpart=$serial{$serial};
 		$size{$hostpart}=$outputsize;
 		$dump_finished{$hostpart}=1;
-		$busy_time{$2}+=($1-$dump_time{$hostpart});
+		$busy_time{$2}+=($1-$chunk_time{$hostpart});
 		$running_dumper{$2} = "0";
-		$dump_time{$hostpart}=$1;
+		$chunk_time{$hostpart}=$1;
 		$error{$hostpart}="";
 		if ($3 eq "PARTIAL") {
 			$partial{$hostpart} = 1;
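
The bug this patch addresses is a bookkeeping one: chunker busy time was
accumulated against the dumper's start timestamp instead of the chunker's
own, so the chunker columns read 0.00%. A toy Python sketch (our own
function and names, not amstatus code) of the corrected accounting:

```python
def busy_percent(events, total):
    """Sum per-worker busy intervals and express them as a percentage
    of the whole run.  `events` is a list of (worker, start, finish).
    Each interval must use the worker's *own* start time; mixing in
    another process's start time is exactly the bug patched above."""
    busy = {}
    for worker, start, finish in events:
        busy[worker] = busy.get(worker, 0) + (finish - start)
    return {w: 100.0 * t / total for w, t in busy.items()}

# Two chunker intervals, 60s and 20s, in a 200s run -> 40% busy.
print(busy_percent([("chunker1", 10, 70), ("chunker1", 80, 100)], total=200))
```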


Amanda-2.5.2-20070523 amstatus

2007-05-25 Thread McGraw, Robert P.
 

 

When I run "amstatus --config daily --date":

 

zorn->[13] > amstatus --config daily --date

Using /var/amanda/daily/amdump from Fri May 25 07:51:23 EDT 2007

 

20070525075123 bers:/   0  4649m finished (9:20:41)

20070525075123 bessel:/ 0  4621m finished (9:28:22)

20070525075123 bohr:/   0  4967m finished (9:13:19)

  :

taper writing, tapeq: 0

network free kps:   2090751

holding space   :  8733m (  2.13%)

chunker1 busy   :  0:00:00  (  0.00%)

chunker2 busy   :  0:00:00  (  0.00%)

chunker3 busy   :  0:00:00  (  0.00%)

chunker4 busy   :  0:00:00  (  0.00%)

chunker5 busy   :  0:00:00  (  0.00%)

chunker6 busy   :  0:00:00  (  0.00%)

chunker7 busy   :  0:00:00  (  0.00%)

chunker8 busy   :  0:00:00  (  0.00%)

chunker9 busy   :  0:00:00  (  0.00%)

 dumper0 busy   :  7:05:09  ( 87.98%)

 dumper1 busy   :  6:16:22  ( 77.88%)

 dumper2 busy   :  6:47:01  ( 84.22%)

 

 

My chunkers show zero, but I have chunkers running.

 

My build is 

 

 

zorn->[15] > amadmin daily version

build: VERSION="Amanda-2.5.2-20070523"

   BUILT_DATE="Thu May 24 10:07:47 EDT 2007"

   BUILT_MACH="SunOS zorn.math.purdue.edu 5.10 Generic_118833-03 sun4u
sparc SUNW,Sun-Fire-280R"

 

 

Robert

 

   

 


 

_

Robert P. McGraw, Jr.

Manager, Computer System EMAIL: [EMAIL PROTECTED]

Purdue University ROOM: MATH-807

Department of MathematicsPHONE: (765) 494-6055

150 N. University Street   FAX: (419) 821-0540

West Lafayette, IN 47907-2067

 

 



smime.p7s
Description: S/MIME cryptographic signature


Re: amstatus returns uninitialized value!

2007-05-07 Thread FL

On 5/7/07, Jean-Louis Martineau <[EMAIL PROTECTED] > wrote:


Can you post the amdump. log file that shows this error in amstatus?

Jean-Louis




I don't see the error in amdump.1; only the command "amstatus Daily"
shows this error. But here are the last 40 lines of
/var/log/amanda/Daily/amdump.1:


sh-3.1# tail -n 40 amdump.1
driver: FINISHED time 5506.906
amdump: end at Sun May  6 17:35:15 EDT 2007
line 111 of log is bogus: 
 Scan failed at: 
line 115 of log is bogus: 
 Scan failed at: 
line 123 of log is bogus: 
 Scan failed at: 
line 139 of log is bogus: 
 Scan failed at: 
line 171 of log is bogus: 
 Scan failed at: 
line 183 of log is bogus: 
 Scan failed at: 
line 187 of log is bogus: 
 Scan failed at: 
line 207 of log is bogus: 
 Scan failed at: 
line 219 of log is bogus: 
 Scan failed at: 
line 223 of log is bogus: 
 Scan failed at: 
line 231 of log is bogus: 
 Scan failed at: 
line 235 of log is bogus: 
 Scan failed at: 
line 243 of log is bogus: 
 Scan failed at: <20070506160328 0 [sec 169 nkb 1399552 ckb 1002784 kps
5913]>
line 274 of log is bogus: 
 Scan failed at: <20070506160328 1 [sec 26244 nkb 63713912 ckb 26498144 kps
1010]>
line 279 of log is bogus: 
 Scan failed at: <20070506160328 1 [sec 95733 nkb 50800142 ckb 37340832 kps
390]>
Scanning /home/amanda...
0
0
0
0
0
0
0
sh-3.1#


FL wrote:

> I'm using amanda version 2.5.1p3
>
> amstatus Daily works for a while, then returns the following:
>
> MP> line 3238.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 681,  line 3238.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 685,  line 3238.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 691,  line 3238.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 534,  line 3391.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 535,  line 3391.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 536,  line 3391.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 538,  line 3391.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 539,  line 3391.
> Use of uninitialized value in hash element at /usr/sbin/amstatus line
> 545,  line 3391.
> Modification of non-creatable array value attempted, subscript -3 at
> /usr/sbin/amstatus line 751,  line 3394.
> [EMAIL PROTECTED]:/home/amanda$ 
>
> Could this be due to an error in a configuration file?




Re: amstatus returns uninitialized value!

2007-05-07 Thread Jean-Louis Martineau

Can you post the amdump. log file that shows this error in amstatus?

Jean-Louis

FL wrote:

I'm using amanda version 2.5.1p3
 
amstatus Daily works for a while, then returns the following:
 
MP> line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
681,  line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
685,  line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
691,  line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
534,  line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
535,  line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
536,  line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
538,  line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
539,  line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 
545,  line 3391.
Modification of non-creatable array value attempted, subscript -3 at 
/usr/sbin/amstatus line 751,  line 3394.

[EMAIL PROTECTED]:/home/amanda$ <mailto:[EMAIL PROTECTED]:/home/amanda$>
 
Could this be due to an error in a configuration file?




amstatus returns uninitialized value!

2007-05-05 Thread FL

I'm using amanda version 2.5.1p3

amstatus Daily works for a while, then returns the following:

MP> line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 681,
 line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 685,
 line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 691,
 line 3238.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 534,
 line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 535,
 line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 536,
 line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 538,
 line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 539,
 line 3391.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 545,
 line 3391.
Modification of non-creatable array value attempted, subscript -3 at
/usr/sbin/amstatus line 751,  line 3394.
[EMAIL PROTECTED]:/home/amanda$

Could this be due to an error in a configuration file?


Use of unintialized value in hash element in amstatus repoty

2007-03-07 Thread FL

Has anyone seen this kind of result from an amstatus? (amanda.conf and
disklist to follow)

[EMAIL PROTECTED]:~$ amstatus Daily
Using /var/log/amanda/Daily/amdump from Wed Mar  7 23:18:01 EST 2007
Use of uninitialized value in hash element at /usr/sbin/amstatus line 520,
 line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 521,
 line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 522,
 line 2777.
Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line
522,  line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 524,
 line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 525,
 line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 531,
 line 2777.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 539,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 540,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 541,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 543,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 544,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 549,
 line 2783.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 655,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 656,
 line 2794.
Use of uninitialized value in subtraction (-) at /usr/sbin/amstatus line
656,  line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 657,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 658,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 662,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 663,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 667,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 673,
 line 2794.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 520,
 line 2950.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 521,
 line 2950.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 522,
 line 2950.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 524,
 line 2950.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 525,
 line 2950.
Use of uninitialized value in hash element at /usr/sbin/amstatus line 531,
 line 2950.
Modification of non-creatable array value attempted, subscript -2 at
/usr/sbin/amstatus line 730,  line 2953.
[EMAIL PROTECTED]:~$


Re: bug from amstatus (amanda 2.5.1p2)

2007-02-01 Thread Jean-Louis Martineau

Amanda is not localized; it doesn't understand the French date.

The attached patch fixes it by using a non-localized time.
The patch only works for runs done with the patch applied.

Jean-Louis

Thomas Ginestet wrote:

Hi list,

When using amstatus this morning (February the first), I got the
following error:


Using /var/log/amanda/week/amdump.1 from mercredi 31 janvier 2007, 
23:45:01 (UTC+0100)

Day '31' out of range 1..28 at /usr/local/sbin/amstatus line 1350

The dump was started January 31st, but is it possible that amstatus
looks for a February 31st instead?



Cheers,


Thomas Ginestet





diff -u -r --show-c-function --new-file --exclude-from=/home/martinea/src.orig/amanda.diff --ignore-matching-lines='$Id:' amanda-2.5.1p2.new/server-src/amdump.sh.in amanda-2.5.1p2.new.date/server-src/amdump.sh.in
--- amanda-2.5.1p2.new/server-src/amdump.sh.in	2007-01-26 09:43:38.0 -0500
+++ amanda-2.5.1p2.new.date/server-src/amdump.sh.in	2007-02-01 07:57:15.0 -0500
@@ -112,6 +112,7 @@ exit_code=$?
 [ $exit_code -ne 0 ] && exit_status=$exit_code
 echo "amdump: start at `date`"
 echo "amdump: datestamp `date +%Y%m%d`"
+echo "amdump: starttime `date +%Y%m%d%H%M%S`"
 $libexecdir/planner$SUF $conf "$@" | $libexecdir/driver$SUF $conf "$@"
 exit_code=$?
 [ $exit_code -ne 0 ] && exit_status=$exit_code
diff -u -r --show-c-function --new-file --exclude-from=/home/martinea/src.orig/amanda.diff --ignore-matching-lines='$Id:' amanda-2.5.1p2.new/server-src/amflush.c amanda-2.5.1p2.new.date/server-src/amflush.c
--- amanda-2.5.1p2.new/server-src/amflush.c	2006-11-29 07:36:16.0 -0500
+++ amanda-2.5.1p2.new.date/server-src/amflush.c	2007-02-01 07:57:08.0 -0500
@@ -299,6 +299,7 @@ main(
 	error("BAD DATE"); /* should never happen */
 fprintf(stderr, "amflush: start at %s\n", date_string);
 fprintf(stderr, "amflush: datestamp %s\n", amflush_timestamp);
+fprintf(stderr, "amflush: starttime %s\n", construct_timestamp(NULL));
 log_add(L_START, "date %s", amflush_timestamp);
 
 /* START DRIVER */
diff -u -r --show-c-function --new-file --exclude-from=/home/martinea/src.orig/amanda.diff --ignore-matching-lines='$Id:' amanda-2.5.1p2.new/server-src/amstatus.pl.in amanda-2.5.1p2.new.date/server-src/amstatus.pl.in
--- amanda-2.5.1p2.new/server-src/amstatus.pl.in	2007-01-26 08:02:33.0 -0500
+++ amanda-2.5.1p2.new.date/server-src/amstatus.pl.in	2007-02-01 07:58:14.0 -0500
@@ -181,14 +181,17 @@ while() {
 	chomp;
 	if(/(amdump|amflush): start at (.*)/) {
 		print " from $2\n";
-		$starttime=&unctime(split(/[ 	]+/,$2));
 	}
-	elsif(/amdump: datestamp (\S+)/) {
-		$gdatestamp = $1;
+	elsif(/(amdump|amflush): datestamp (\S+)/) {
+		$gdatestamp = $2;
 		if(!defined $datestamp{$gdatestamp}) {
 			$datestamp{$gdatestamp} = 1;
 			push @datestamp, $gdatestamp;
 		}
+		$starttime=&set_starttime($2);
+	}
+	elsif(/(amdump|amflush): starttime (\S+)/) {
+		$starttime=&set_starttime($2);
 	}
 	elsif(/planner: timestamp (\S+)/) {
 		$gdatestamp = $1;
@@ -1410,6 +1413,30 @@ sub unctime() {
 	return $time;
 }
 
+sub set_starttime() {
+	my (@tl);
+	my ($time);
+	my ($date);
+
+	# Preset an array of values in case some parts are not passed as
+	# arguments.  This lets the date, etc, be omitted and default to
+	# today.
+
+	($date)=@_;
+	@tl = localtime;
+
+	$tl[5] = substr($date,  0, 4)   if(length($date) >= 4);
+	$tl[4] = substr($date,  4, 2)-1 if(length($date) >= 6);
+	$tl[3] = substr($date,  6, 2)   if(length($date) >= 8);
+	$tl[2] = substr($date,  8, 2)   if(length($date) >= 10);
+	$tl[1] = substr($date, 10, 2)   if(length($date) >= 12);
+	$tl[0] = substr($date, 12, 2)   if(length($date) >= 14);
+
+	$time = &timelocal (@tl);
+
+	return $time;
+}
+
 sub showtime() {
 	my($delta)=shift;
 	my($oneday)=24*60*60;
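
The fix works because the new `starttime` line is a fixed-width numeric
timestamp (YYYYMMDDHHMMSS), which needs no locale-aware parsing. A small
Python sketch (not Amanda code; the function name is ours) of parsing
that format:

```python
from datetime import datetime

def parse_starttime(stamp: str) -> datetime:
    """Parse an Amanda-style numeric timestamp, YYYYMMDDHHMMSS.

    Unlike the localized output of `date` ("mercredi 31 janvier 2007, ..."),
    this format parses the same way in every locale, which is what the
    patch above relies on.
    """
    return datetime.strptime(stamp, "%Y%m%d%H%M%S")

# The run from the bug report started 2007-01-31 23:45:01:
print(parse_starttime("20070131234501"))
```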


bug from amstatus (amanda 2.5.1p2)

2007-02-01 Thread Thomas Ginestet

Hi list,

When using amstatus this morning (February the first), I got the
following error:


Using /var/log/amanda/week/amdump.1 from mercredi 31 janvier 2007, 
23:45:01 (UTC+0100)

Day '31' out of range 1..28 at /usr/local/sbin/amstatus line 1350

The dump was started January 31st, but is it possible that amstatus looks
for a February 31st instead?



Cheers,


Thomas Ginestet





Amstatus report question.

2006-10-27 Thread McGraw, Robert P.
I get this when I run amstatus --config daily --date.

 8 dumpers busy :  0:00:39  (  0.19%)  no-dumpers:  0:00:35  ( 89.89%)
                                       start-wait:  0:00:03  ( 10.11%)

 9 dumpers busy :  0:21:16  (  6.13%)  no-dumpers:  0:19:34  ( 92.00%)
                                       start-wait:  0:01:42  (  8.00%)

10 dumpers busy :  1:02:29  ( 18.01%)  no-dumpers:  1:02:25  ( 99.89%)
                                       start-wait:  0:00:04  (

What does the "no-dumpers" and "start-wait" mean.

Thanks

Robert

_
Robert P. McGraw, Jr.
Manager, Computer System EMAIL: [EMAIL PROTECTED]
Purdue University ROOM: MATH-807
Department of MathematicsPHONE: (765) 494-6055
150 N. University Street   FAX: (419) 821-0540
West Lafayette, IN 47907-2067






Re: RE Unravel amstatus output

2006-07-19 Thread Joe Donner (sent by Nabble.com)

Well, for what it's worth:

I ran a backup job with just the DLEs that failed to backup/flush, and it
all went well.

I then ran the exact same job I did on Friday, and it succeeded with no
errors this time.  I'm beginning to think that maybe the tape used on Friday
may be damaged in some way.  I'm now using 5 tapes for testing, and will run
that job until all tapes have been used, and see whether the job fails
consistently on any particular tape.

Thanks very much for your input.  It is much appreciated!

Regards,

Joe
-- 
View this message in context: 
http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5393206
Sent from the Amanda - Users forum at Nabble.com.



Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

Sorry, I've already gone and deleted that file...


Alexander Jolk wrote:
> 
> Joe Donner (sent by Nabble.com) wrote:
>> FAILURE AND STRANGE DUMP SUMMARY:
>>   minerva/usr/local/clients lev 0 FAILED [input: Can't read data: :
>> Input/output error]
>> 
>> And the holding disk still contains a folder with Friday's date and a
>> 30GB
>> file for the DLE mentioned above.
> 
> Can you try cat'ting the file to /dev/null?  My first guess would be 
> that some blocks of the holding disk file are unreadable due to a disk 
> failure.
> 
> Alex
> 
> 
> -- 
> Alexander Jolk / BUF Compagnie
> tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5361151
Sent from the Amanda - Users forum at Nabble.com.



Re: RE Unravel amstatus output

2006-07-17 Thread Alexander Jolk

Joe Donner (sent by Nabble.com) wrote:

FAILURE AND STRANGE DUMP SUMMARY:
  minerva/usr/local/clients lev 0 FAILED [input: Can't read data: :
Input/output error]

And the holding disk still contains a folder with Friday's date and a 30GB
file for the DLE mentioned above.


Can you try cat'ting the file to /dev/null?  My first guess would be 
that some blocks of the holding disk file are unreadable due to a disk 
failure.


Alex


--
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

Ok, so I ran amflush again.  It flushed 2 of the 3 outstanding DLE's data to
daily-1, but the email I received includes:

The dumps were flushed to tape daily-1.
The next tape Amanda expects to use is: daily-2.

FAILURE AND STRANGE DUMP SUMMARY:
  minerva/usr/local/clients lev 0 FAILED [input: Can't read data: :
Input/output error]

And the holding disk still contains a folder with Friday's date and a 30GB
file for the DLE mentioned above.

What on earth is going on??


Joe Donner wrote:
> 
> Red Hat Enterprise 3 doesn't seem to have strace as a command.
> 
> I thought rather than killing the processes manually, I'd reboot the
> server and see if amcleanup runs as included in /etc/rc.d/rc.local
> (thought I may as well test that).
> 
> Now the server came back up, and none of the amanda services are active
> anymore (unsurprisingly).  Nothing seemed to happen, so I did a manual
> amcleanup, with these results:
> 
> amcleanup: no unprocessed logfile to clean up.
> Scanning /mnt/hdb1...
>   20060714: found Amanda directory.
> 
> So I'm thinking that this backup run is now finally broken.
> 
> Next I thought I'll run amflush and see what happens.  It outputs this:
> 
> Scanning /mnt/hdb1...
>   20060714: found Amanda directory.
> 
> Today is: 20060717
> Flushing dumps in 20060714 to tape drive "/dev/nst0".
> Expecting tape daily-1 or a new tape.  (The last dumps were to tape
> daily-3)
> Are you sure you want to do this [yN]? y
> Running in background, you can log off now.
> You'll get mail when amflush is finished.
> 
> Now what I notice is that it asks for the tape called daily-1, whereas the
> tape I used for Friday's backup was daily-3.  Does this mean that daily-3
> was filled up and caused this whole issue?
> 
> Which brings me to another question.  I've used these tapes before for
> testing.  Will Amanda have appended Friday's backup to what was already on
> the tape daily-3, or does it overwrite data previously written to that
> tape each time a new backup runs?  The reason I ask this is that the tape
> drive capacity is 160GB, and I believe that I'm trying to back up a lot
> less data than that.
> 
> After I rebooted, I got this email from Amanda.  As you can see, it only
> used 4.7% of the tape:
> 
> *** THE DUMPS DID NOT FINISH PROPERLY!
> 
> These dumps were to tape daily-3.
> The next tape Amanda expects to use is: daily-1.
> 
> FAILURE AND STRANGE DUMP SUMMARY:
>   cerberus   /.fonts.cache-1 lev 0 FAILED [disk /.fonts.cache-1 offline on
> cerberus?]
>   cerberus   /.autofsck lev 0 FAILED [disk /.autofsck offline on
> cerberus?]
> 
> 
> STATISTICS:
>   Total   Full  Daily
>       
> Estimate Time (hrs:min)0:04
> Run Time (hrs:min) 0:16
> Dump Time (hrs:min)3:07   3:07   0:00
> Output Size (meg)   56785.856785.80.0
> Original Size (meg)136236.1   136236.10.0
> Avg Compressed Size (%)41.7   41.7-- 
> Filesystems Dumped  107107  0
> Avg Dump Rate (k/s)  5169.8 5169.8-- 
> 
> Tape Time (hrs:min)0:13   0:13   0:00
> Tape Size (meg)  7259.3 7259.30.0
> Tape Used (%)   4.74.70.0
> Filesystems Taped   104104  0
> Avg Tp Write Rate (k/s)  9801.6 9801.6-- 
> 
> USAGE BY TAPE:
>   Label Time  Size  %Nb
>   daily-3   0:137259.34.7   104
> 
> And then, after I ran amflush, I got an email saying this (I didn't
> actually put daily-1 into the drive):
> 
> *** A TAPE ERROR OCCURRED: [cannot overwrite active tape daily-3].
> Some dumps may have been left in the holding disk.
> Run amflush again to flush them to tape.
> The next tape Amanda expects to use is: daily-1.
> 
> And when I now do amstatus daily, I get:
> 
> Using /var/lib/amanda/daily/amflush.1 from Mon Jul 17 12:58:42 BST 2006
>  
> minerva:/home  0  8774296k waiting to flush
> minerva:/usr/local/clients 0 32253287k waiting to flush
> minerva:/usr/local/development 0  9687648k waiting to flush
> 
> I feel a headache coming on again...
> 
> Any suggestions as how to best proceed?
> 
> 
> 
> Paul Bijnens wrote:
>> 
>> On 2006-07-17 13:32, Joe Donner (sent by Nabble.com) wrote:
>>> and ps -fu amanda outputs:
>>> 
>>> UID        PID  PPID  C STIME TTY          TIME CMD
>>> amanda    2136  2135  0 Jul14 ?        00:00:00 /bin/sh /usr/sbin/amdump
>>> daily
>>> amanda    2145

Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

Red Hat Enterprise 3 doesn't seem to have strace as a command.

I thought rather than killing the processes manually, I'd reboot the server
and see if amcleanup runs as included in /etc/rc.d/rc.local (thought I may
as well test that).

Now the server came back up, and none of the amanda services are active
anymore (unsurprisingly).  Nothing seemed to happen, so I did a manual
amcleanup, with these results:

amcleanup: no unprocessed logfile to clean up.
Scanning /mnt/hdb1...
  20060714: found Amanda directory.

So I'm thinking that this backup run is now finally broken.

Next I thought I'll run amflush and see what happens.  It outputs this:

Scanning /mnt/hdb1...
  20060714: found Amanda directory.

Today is: 20060717
Flushing dumps in 20060714 to tape drive "/dev/nst0".
Expecting tape daily-1 or a new tape.  (The last dumps were to tape daily-3)
Are you sure you want to do this [yN]? y
Running in background, you can log off now.
You'll get mail when amflush is finished.

Now what I notice is that it asks for the tape called daily-1, whereas the
tape I used for Friday's backup was daily-3.  Does this mean that daily-3
was filled up and caused this whole issue?

Which brings me to another question.  I've used these tapes before for
testing.  Will Amanda have appended Friday's backup to what was already on
the tape daily-3, or does it overwrite data previously written to that tape
each time a new backup runs?  The reason I ask this is that the tape drive
capacity is 160GB, and I believe that I'm trying to back up a lot less data
than that.

After I rebooted, I got this email from Amanda.  As you can see, it only
used 4.7% of the tape:

*** THE DUMPS DID NOT FINISH PROPERLY!

These dumps were to tape daily-3.
The next tape Amanda expects to use is: daily-1.

FAILURE AND STRANGE DUMP SUMMARY:
  cerberus   /.fonts.cache-1 lev 0 FAILED [disk /.fonts.cache-1 offline on
cerberus?]
  cerberus   /.autofsck lev 0 FAILED [disk /.autofsck offline on cerberus?]


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:04
Run Time (hrs:min)         0:16
Dump Time (hrs:min)        3:07       3:07       0:00
Output Size (meg)       56785.8    56785.8        0.0
Original Size (meg)    136236.1   136236.1        0.0
Avg Compressed Size (%)    41.7       41.7        --
Filesystems Dumped          107        107          0
Avg Dump Rate (k/s)      5169.8     5169.8        --

Tape Time (hrs:min)        0:13       0:13       0:00
Tape Size (meg)          7259.3     7259.3        0.0
Tape Used (%)               4.7        4.7        0.0
Filesystems Taped           104        104          0
Avg Tp Write Rate (k/s)  9801.6     9801.6        --

USAGE BY TAPE:
  Label          Time      Size  %    Nb
  daily-3        0:13    7259.3  4.7  104
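
As a quick sanity check on the 4.7% figure, the implied tape length can be back-computed from two numbers in the report; a minimal awk sketch (no Amanda needed, values pasted from the STATISTICS section):

```shell
# Back-compute the tape length implied by "Tape Size" and "Tape Used (%)".
awk 'BEGIN {
    taped_mb = 7259.3        # Tape Size (meg) from the report
    used_pct = 4.7           # Tape Used (%) from the report
    printf "implied tape length: %.0f MB\n", taped_mb / (used_pct / 100)
}'
```

That works out to roughly 150 GB, consistent with the 160 GB drive capacity mentioned above; note that Amanda goes by the configured tapetype length, not the raw drive capacity.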

And then, after I ran amflush, I got an email saying this (I didn't actually
put daily-1 into the drive):

*** A TAPE ERROR OCCURRED: [cannot overwrite active tape daily-3].
Some dumps may have been left in the holding disk.
Run amflush again to flush them to tape.
The next tape Amanda expects to use is: daily-1.

And when I now do amstatus daily, I get:

Using /var/lib/amanda/daily/amflush.1 from Mon Jul 17 12:58:42 BST 2006
 
minerva:/home  0  8774296k waiting to flush
minerva:/usr/local/clients 0 32253287k waiting to flush
minerva:/usr/local/development 0  9687648k waiting to flush
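
The three "waiting to flush" sizes above can be totalled straight from the amstatus output; a minimal awk sketch (sample lines pasted from the report):

```shell
# Total the holding-disk backlog shown by amstatus.
awk '/waiting to flush/ { sub(/k$/, "", $3); kb += $3 }
     END { printf "%d kB (~%.1f GB) waiting to flush\n", kb, kb / 1048576 }' <<'EOF'
minerva:/home  0  8774296k waiting to flush
minerva:/usr/local/clients 0 32253287k waiting to flush
minerva:/usr/local/development 0  9687648k waiting to flush
EOF
```

That sums to about 48 GB, matching the "about 48GB" on the holding disk quoted elsewhere in this thread.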

I feel a headache coming on again...

Any suggestions as how to best proceed?



Paul Bijnens wrote:
> 
> On 2006-07-17 13:32, Joe Donner (sent by Nabble.com) wrote:
>> and ps -fu amanda outputs:
>> 
>> UID        PID  PPID  C STIME TTY          TIME CMD
>> amanda    2136  2135  0 Jul14 ?        00:00:00 /bin/sh /usr/sbin/amdump
>> daily
>> amanda    2145  2136  0 Jul14 ?        00:00:02 /usr/lib/amanda/driver
>> daily
>> amanda    2146  2145  0 Jul14 ?        00:00:52 taper daily
>> amanda    2147  2146  0 Jul14 ?        00:00:34 taper daily
>> amanda    2148  2145  0 Jul14 ?        00:12:55 dumper0 daily
>> amanda    2153  2145  0 Jul14 ?        00:00:19 dumper1 daily
>> amanda    2154  2145  0 Jul14 ?        00:00:00 dumper2 daily
>> amanda    2155  2145  0 Jul14 ?        00:00:00 dumper3 daily
>> Does this tell anyone anything?
> 
> It means the processes are still alive.
> 
> Just a wild guess... Maybe you have specified a manual changer, and
> Amanda is just waiting for you to manually insert the next tape?
> 
> Now find out what they are doing, and why it takes days to proceed.
> 
> As root or amanda you can trace a process and see if it does something
> else, or is just sleeping on some event that will not happen:
> 
>strace -p pid-of-the-process
> 
> There are two taper proces

Re: RE Unravel amstatus output

2006-07-17 Thread Paul Bijnens

On 2006-07-17 13:32, Joe Donner (sent by Nabble.com) wrote:

and ps -fu amanda outputs:

UID        PID  PPID  C STIME TTY          TIME CMD
amanda    2136  2135  0 Jul14 ?        00:00:00 /bin/sh /usr/sbin/amdump
daily
amanda    2145  2136  0 Jul14 ?        00:00:02 /usr/lib/amanda/driver daily
amanda    2146  2145  0 Jul14 ?        00:00:52 taper daily
amanda    2147  2146  0 Jul14 ?        00:00:34 taper daily
amanda    2148  2145  0 Jul14 ?        00:12:55 dumper0 daily
amanda    2153  2145  0 Jul14 ?        00:00:19 dumper1 daily
amanda    2154  2145  0 Jul14 ?        00:00:00 dumper2 daily
amanda    2155  2145  0 Jul14 ?        00:00:00 dumper3 daily

Does this tell anyone anything?


It means the processes are still alive.

Just a wild guess... Maybe you have specified a manual changer, and
Amanda is just waiting for you to manually insert the next tape?

Now find out what they are doing, and why it takes days to proceed.

As root or amanda you can trace a process and see if it does something
else, or is just sleeping on some event that will not happen:

  strace -p pid-of-the-process

There are two taper processes, one reads from the holdingdisk file
into a shared memory region, while the other one writes the bytes
from shared memory to tape.  When there is no holdingdisk file, then
maybe the reader-taper is reading from a network socket?
And maybe you specified a long dtimeout?
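
Paul's strace suggestion can be applied to every Amanda process at once; a hedged sketch (the ps sample is from this thread; in practice replace it with the live output of `ps -fu amanda`, and run the printed commands as root or amanda):

```shell
# Generate one "strace -p PID" command per amanda process.
ps_sample='amanda    2146  2145  0 Jul14 ?        00:00:52 taper daily
amanda    2147  2146  0 Jul14 ?        00:00:34 taper daily'

printf '%s\n' "$ps_sample" |
    awk '{ printf "strace -p %s    # %s\n", $2, $8 }'
```

Attaching briefly to each PID shows whether it is looping, blocked on a read/write, or sleeping on an event that will never arrive.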


--
Paul Bijnens, xplanation Technology ServicesTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* F6, quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* init 0, kill -9 1, Alt-F4, Ctrl-Alt-Del, AltGr-NumLock, Stop-A, ... *
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out  *
***



Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

When I execute the top command (Red Hat Enterprise 3) for user Amanda, I get:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 2136 amanda    15   0   948  948   836 S     0.0  0.0   0:00   1 amdump
 2145 amanda    15   0  1072 1072   844 S     0.0  0.1   0:02   1 driver
 2146 amanda    16   0  1536 1536  1388 S     0.0  0.1   0:52   0 taper
 2147 amanda    16   0  1560 1560  1396 D     0.0  0.1   0:34   0 taper
 2148 amanda    22   0  1120 1120   876 S     0.0  0.1  12:55   0 dumper
 2153 amanda    15   0  1120 1120   876 S     0.0  0.1   0:19   0 dumper
 2154 amanda    15   0  1044 1044   816 S     0.0  0.1   0:00   1 dumper
 2155 amanda    25   0   852  852   708 S     0.0  0.0   0:00   0 dumper
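
One detail in the top output is worth flagging: the second taper (PID 2147) is in state "D", uninterruptible sleep, which typically means a process blocked in device I/O and fits a taper stuck waiting on the tape drive. A small sketch pulling PID, command, and state out of sample rows (field positions assume the top layout above):

```shell
top_sample=' 2146 amanda    16   0  1536 1536  1388 S    0.0  0.1   0:52   0 taper
 2147 amanda    16   0  1560 1560  1396 D    0.0  0.1   0:34   0 taper'

# Fields: 1=PID, 8=STAT, last=COMMAND
printf '%s\n' "$top_sample" | awk '{ print $1, $NF, $8 }'
```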

and ps -fu amanda outputs:

UID        PID  PPID  C STIME TTY          TIME CMD
amanda    2136  2135  0 Jul14 ?        00:00:00 /bin/sh /usr/sbin/amdump
daily
amanda    2145  2136  0 Jul14 ?        00:00:02 /usr/lib/amanda/driver daily
amanda    2146  2145  0 Jul14 ?        00:00:52 taper daily
amanda    2147  2146  0 Jul14 ?        00:00:34 taper daily
amanda    2148  2145  0 Jul14 ?        00:12:55 dumper0 daily
amanda    2153  2145  0 Jul14 ?        00:00:19 dumper1 daily
amanda    2154  2145  0 Jul14 ?        00:00:00 dumper2 daily
amanda    2155  2145  0 Jul14 ?        00:00:00 dumper3 daily

Does this tell anyone anything?


Paul Bijnens wrote:
> 
> On 2006-07-17 11:36, Joe Donner (sent by Nabble.com) wrote:
>> Good point - and that is why I need help unravelling what it all means. 
>> My
>> question now would be:  0.41% of what?  What would 100% of that something
>> represent?  Constant streaming of data to tape from holding disk?
> 
> of the total elapsed time since the program started.
> 
> But there is a caveat.  The amstatus command works by parsing the log
> file.  And the logfile is written to only when there is a change in
> state in the backup process.  So the 0.41% probably means that the last
> status message taper wrote to the logfile was already long ago.
> It could well be that taper is taping one very large file, but has not
> yet written that into the log file which amstatus parses.
> 
> So, to find out if really anything is still running, do
>ps -fu amanda
> on the tape server, and verify if there is still a taper process (and
> other processes like driver).
> If they are, then what are they doing ("strace -p" helps here).
> 
> You may kill them all, and then clean up the broken pieces by running 
> "amcleanup".
> 
> 
> 
>> 
>> I've just left it alone to see if I get different results when
>> subsequently
>> running amstatus, but it seems stuck at wherever it is at the moment. 
>> The
>> tape drive itself is doing nothing...
>> 
>> It really seems as if all went reasonably well and then froze up for some
>> reason.
>> 
>> Please help if at all possible.
>> 
>> 
>> Cyrille Bollu wrote:
>>> Looking with my newbie's eyes it seems that Amanda is running well. Just 
>>> very slowly.
>>>
>>> And Amanda's log seems to indicate that the problem is on the tape drive 
>>> side.
>>>
>>> The only thing strange that I see is the following line which says that
>>> your drive is busy only 0.41% of the time:
>>>
>>>>taper busy   :  0:12:38  (  0.41%)
>>> What does it do the rest of the time???
>>>
>>> [EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:
>>>
>>>> I set up Amanda on Friday to do an almost real backup job.  I thought 
>>> this
>>>> would be the final test before putting it into operation.
>>>>
>>>> When I arrived at work this morning, I was somewhat surprised to see 
>>> that
>>>> the Amanda run doesn't seem to have finished.  amstatus daily gives me 
>>> some
>>>> information, but I'm not sure how to interpret it.
>>>>
>>>> There are still 3 files on the holding disk, adding up to about 48GB. 
>>> The
>>>> tape drive doesn't seem to be doing anything - just sitting there 
>>> quietly at
>>>> the moment with no sign of activity.
>>>>
>>>> I won't include the entire output of amstatus daily, but here are 
>>> extracts,
>>>> if someone can please tell me if they see something wrong.
>>>>
>>>> I have many entries like these - seems to be one for each DLE:
>>>> cerberus:/home                   0  1003801k finished (22:18:15)
>>>> Then these entries, which I think are the 2 that failed, as shown later

Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

Well, one thing I've noticed is that the DLEs in question are the ones with
largest overall size:

+/-  8GB
+/-  9GB
+/-  32GB

All the other DLEs (except for the two I mentioned, which are in fact hidden
files) have successfully been written to tape and are less than
approximately 2GB in size...



Cyrille Bollu wrote:
> 
> [EMAIL PROTECTED] wrote on 17/07/2006 11:36:21:
> 
>> 
>> Good point - and that is why I need help unravelling what it all means.  My
>> question now would be:  0.41% of what?  What would 100% of that something
>> represent?  Constant streaming of data to tape from holding disk?
> 
> AFAIK, 100% would mean constant streaming.
> 
>> 
>> I've just left it alone to see if I get different results when subsequently
>> running amstatus, but it seems stuck at wherever it is at the moment.  The
>> tape drive itself is doing nothing...
>> 
>> It really seems as if all went reasonably well and then froze up for some
>> reason.
> 
> Yep, it looks like...
> 
>> 
>> Please help if at all possible.
>> 
>> 
>> Cyrille Bollu wrote:
>> > 
>> > Looking with my newbie's eyes it seems that Amanda is running well. Just
>> > very slowly.
>> > 
>> > And Amanda's log seems to indicate that the problem is on the tape drive
>> > side.
>> > 
>> > The only thing strange that I see is the following line which says that
>> > your drive is busy only 0.41% of the time:
>> > 
>> >>    taper busy   :  0:12:38  (  0.41%)
>> > 
>> > What does it do the rest of the time???
>> > 
>> > [EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:
>> > 
>> >> 
>> >> I set up Amanda on Friday to do an almost real backup job.  I thought this
>> >> would be the final test before putting it into operation.
>> >> 
>> >> When I arrived at work this morning, I was somewhat surprised to see that
>> >> the Amanda run doesn't seem to have finished.  amstatus daily gives me some
>> >> information, but I'm not sure how to interpret it.
>> >> 
>> >> There are still 3 files on the holding disk, adding up to about 48GB.  The
>> >> tape drive doesn't seem to be doing anything - just sitting there quietly at
>> >> the moment with no sign of activity.
>> >> 
>> >> I won't include the entire output of amstatus daily, but here are extracts,
>> >> if someone can please tell me if they see something wrong.
>> >> 
>> >> I have many entries like these - seems to be one for each DLE:
>> >> cerberus:/home                   0  1003801k finished
>> >> (22:18:15)
>> >> 
>> >> Then these entries, which I think are the 2 that failed, as shown later in
>> >> the summary:
>> >> cerberus:/.autofsck              0 planner: [disk /.autofsck
>> >> offline on cerberus?]
>> >> cerberus:/.fonts.cache-1         0 planner: [disk
>> >> /.fonts.cache-1 offline on cerberus?]
>> >> 
>> >> Then these 3 that are the ones still on the holding disk:
>> >> minerva:/home                    0  8774296k writing to tape
>> >> (23:09:07)
>> >> minerva:/usr/local/clients       0 32253287k dump done
>> >> (1:08:27), wait for writing to tape
>> >> minerva:/usr/local/development   0  9687648k dump done
>> >> (23:48:17), wait for writing to tape
>> >> 
>> >> And then this summary, which I'm not sure how to interpret:
>> >> SUMMARY          part      real   estimated
>> >>                            size        size
>> >> partition       : 109
>> >> estimated       : 107             69631760k
>> >> flush           :   0         0k
>> >> failed          :   2         0k             (  0.00%)
>> >> wait for dumping:   0         0k             (  0.00%)
>> >> dumping to tape :   0         0k             (  0.00%)
>> >> dumping         :   0         0k         0k (  0.00%) (  0.00%)
>> >> dumped          : 107  58148656k  69631760k ( 83.51%) ( 83.51%)
>> >> wait for writing:   2  41940935k  48107940k ( 87.18%) ( 60.2

Re: RE Unravel amstatus output

2006-07-17 Thread Paul Bijnens

On 2006-07-17 11:36, Joe Donner (sent by Nabble.com) wrote:

Good point - and that is why I need help unravelling what it all means.  My
question now would be:  0.41% of what?  What would 100% of that something
represent?  Constant streaming of data to tape from holding disk?


of the total elapsed time since the program started.

But there is a caveat.  The amstatus command works by parsing the log
file.  And the logfile is written to only when there is a change in
state in the backup process.  So the 0.41% probably means that the last
status message taper wrote to the logfile was already long ago.
It could well be that taper is taping one very large file, but has not
yet written that into the log file which amstatus parses.

So, to find out if really anything is still running, do
  ps -fu amanda
on the tape server, and verify if there is still a taper process (and
other processes like driver).
If they are, then what are they doing ("strace -p" helps here).

You may kill them all, and then clean up the broken pieces by running 
"amcleanup".
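
Since amstatus output is plain text, busy percentages can also be pulled out mechanically when eyeballing a long report; a minimal sed sketch using the "taper busy" line from this thread:

```shell
line='   taper busy   :  0:12:38  (  0.41%)'

# Extract the percentage inside the parentheses.
pct=$(printf '%s\n' "$line" | sed -n 's/.*( *\([0-9.]*\)%).*/\1/p')
echo "taper busy: ${pct}%"
```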






I've just left it alone to see if I get different results when subsequently
running amstatus, but it seems stuck at wherever it is at the moment.  The
tape drive itself is doing nothing...

It really seems as if all went reasonably well and then froze up for some
reason.

Please help if at all possible.


Cyrille Bollu wrote:
Looking with my newbie's eyes it seems that Amanda is running well. Just 
very slowly.


And Amanda's log seems to indicate that the problem is on the tape drive 
side.


The only thing strange that I see is the following line which says that
your drive is busy only 0.41% of the time:



   taper busy   :  0:12:38  (  0.41%)

What does it do the rest of the time???

[EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:

I set up Amanda on Friday to do an almost real backup job.  I thought this
would be the final test before putting it into operation.

When I arrived at work this morning, I was somewhat surprised to see that
the Amanda run doesn't seem to have finished.  amstatus daily gives me some
information, but I'm not sure how to interpret it.

There are still 3 files on the holding disk, adding up to about 48GB.  The
tape drive doesn't seem to be doing anything - just sitting there quietly at
the moment with no sign of activity.

I won't include the entire output of amstatus daily, but here are extracts,
if someone can please tell me if they see something wrong.

I have many entries like these - seems to be one for each DLE:
cerberus:/home                   0  1003801k finished (22:18:15)

Then these entries, which I think are the 2 that failed, as shown later in
the summary:
cerberus:/.autofsck              0 planner: [disk /.autofsck
offline on cerberus?]
cerberus:/.fonts.cache-1         0 planner: [disk
/.fonts.cache-1 offline on cerberus?]

Then these 3 that are the ones still on the holding disk:
minerva:/home                    0  8774296k writing to tape
(23:09:07)
minerva:/usr/local/clients       0 32253287k dump done
(1:08:27), wait for writing to tape
minerva:/usr/local/development   0  9687648k dump done
(23:48:17), wait for writing to tape

And then this summary, which I'm not sure how to interpret:
SUMMARY          part      real   estimated
                           size        size
partition       : 109
estimated       : 107             69631760k
flush           :   0         0k
failed          :   2         0k             (  0.00%)
wait for dumping:   0         0k             (  0.00%)
dumping to tape :   0         0k             (  0.00%)
dumping         :   0         0k         0k (  0.00%) (  0.00%)
dumped          : 107  58148656k  69631760k ( 83.51%) ( 83.51%)
wait for writing:   2  41940935k  48107940k ( 87.18%) ( 60.23%)
wait to flush   :   0         0k         0k (100.00%) (  0.00%)
writing to tape :   1   8774296k  12515695k ( 70.11%) ( 12.60%)
failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
taped           : 104   7433425k   9008125k ( 82.52%) ( 10.68%)
4 dumpers idle  : not-idle
taper writing, tapeq: 2
network free kps:      2000
holding space   :  50295358k ( 49.79%)
 dumper0 busy   :  2:53:47  (  5.67%)
 dumper1 busy   :  0:13:48  (  0.45%)
 dumper2 busy   :  0:00:00  (  0.00%)
   taper busy   :  0:12:38  (  0.41%)
 0 dumpers busy : 2+0:07:56  ( 94.22%)          not-idle: 2+0:00:04  ( 99.73%)
                                              start-wait:  0:07:51  (  0.27%)
 1 dumper busy  :  2:46:29  (  5.43%)           not-idle:  1:20:10  ( 48.15%)
                                      client-constrained:  1:18:08  ( 46.93%)
                                            no-bandwidth:  0:04:16  (  2.57%)
                                              start-wait:  0:03:54  (  2.35%)

 2 dumpers busy 

Re: RE Unravel amstatus output

2006-07-17 Thread Cyrille Bollu


[EMAIL PROTECTED] wrote on 17/07/2006 11:36:21:

> 
> Good point - and that is why I need help unravelling what it all means.  My
> question now would be:  0.41% of what?  What would 100% of that something
> represent?  Constant streaming of data to tape from holding disk?

AFAIK, 100% would mean constant streaming.

> 
> I've just left it alone to see if I get different results when subsequently
> running amstatus, but it seems stuck at wherever it is at the moment.  The
> tape drive itself is doing nothing...
> 
> It really seems as if all went reasonably well and then froze up for some
> reason.

Yep, it looks like...

> 
> Please help if at all possible.
> 
> 
> Cyrille Bollu wrote:
> > 
> > Looking with my newbie's eyes it seems that Amanda is running well. Just
> > very slowly.
> > 
> > And Amanda's log seems to indicate that the problem is on the tape drive
> > side.
> > 
> > The only thing strange that I see is the following line which says that
> > your drive is busy only 0.41% of the time:
> > 
> >>    taper busy   :  0:12:38  (  0.41%)
> > 
> > What does it do the rest of the time???
> > 
> > [EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:
> > 
> >> 
> >> I set up Amanda on Friday to do an almost real backup job.  I thought this
> >> would be the final test before putting it into operation.
> >> 
> >> When I arrived at work this morning, I was somewhat surprised to see that
> >> the Amanda run doesn't seem to have finished.  amstatus daily gives me some
> >> information, but I'm not sure how to interpret it.
> >> 
> >> There are still 3 files on the holding disk, adding up to about 48GB.  The
> >> tape drive doesn't seem to be doing anything - just sitting there quietly at
> >> the moment with no sign of activity.
> >> 
> >> I won't include the entire output of amstatus daily, but here are extracts,
> >> if someone can please tell me if they see something wrong.
> >> 
> >> I have many entries like these - seems to be one for each DLE:
> >> cerberus:/home                   0  1003801k finished
> >> (22:18:15)
> >> 
> >> Then these entries, which I think are the 2 that failed, as shown later in
> >> the summary:
> >> cerberus:/.autofsck              0 planner: [disk /.autofsck
> >> offline on cerberus?]
> >> cerberus:/.fonts.cache-1         0 planner: [disk
> >> /.fonts.cache-1 offline on cerberus?]
> >> 
> >> Then these 3 that are the ones still on the holding disk:
> >> minerva:/home                    0  8774296k writing to tape
> >> (23:09:07)
> >> minerva:/usr/local/clients       0 32253287k dump done
> >> (1:08:27), wait for writing to tape
> >> minerva:/usr/local/development   0  9687648k dump done
> >> (23:48:17), wait for writing to tape
> >> 
> >> And then this summary, which I'm not sure how to interpret:
> >> SUMMARY          part      real   estimated
> >>                            size        size
> >> partition       : 109
> >> estimated       : 107             69631760k
> >> flush           :   0         0k
> >> failed          :   2         0k             (  0.00%)
> >> wait for dumping:   0         0k             (  0.00%)
> >> dumping to tape :   0         0k             (  0.00%)
> >> dumping         :   0         0k         0k (  0.00%) (  0.00%)
> >> dumped          : 107  58148656k  69631760k ( 83.51%) ( 83.51%)
> >> wait for writing:   2  41940935k  48107940k ( 87.18%) ( 60.23%)
> >> wait to flush   :   0         0k         0k (100.00%) (  0.00%)
> >> writing to tape :   1   8774296k  12515695k ( 70.11%) ( 12.60%)
> >> failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
> >> taped           : 104   7433425k   9008125k ( 82.52%) ( 10.68%)
> >> 4 dumpers idle  : not-idle
> >> taper writing, tapeq: 2
> >> network free kps:      2000
> >> holding space   :  50295358k ( 49.79%)
> >>  dumper0 busy   :  2:53:47  (  5.67%)
> >>  dumper1 busy   :  0:13:48  (  0.45%)
> >>  dumper2 busy   :  0:00:00  (  0.00%)
> >>    taper busy   :  0:12:38  (  0.41%)
> >>  0 dumpers busy : 2+0:07:56  ( 94.22%)        not-idle: 2+0:00:04  ( 99.73%)

Re: RE Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

Good point - and that is why I need help unravelling what it all means.  My
question now would be:  0.41% of what?  What would 100% of that something
represent?  Constant streaming of data to tape from holding disk?

I've just left it alone to see if I get different results when subsequently
running amstatus, but it seems stuck at wherever it is at the moment.  The
tape drive itself is doing nothing...

It really seems as if all went reasonably well and then froze up for some
reason.

Please help if at all possible.


Cyrille Bollu wrote:
> 
> Looking with my newbie's eyes it seems that Amanda is running well. Just
> very slowly.
> 
> And Amanda's log seems to indicate that the problem is on the tape drive
> side.
> 
> The only thing strange that I see is the following line which says that
> your drive is busy only 0.41% of the time:
> 
>>    taper busy   :  0:12:38  (  0.41%)
> 
> What does it do the rest of the time???
> 
> [EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:
> 
>> 
>> I set up Amanda on Friday to do an almost real backup job.  I thought this
>> would be the final test before putting it into operation.
>> 
>> When I arrived at work this morning, I was somewhat surprised to see that
>> the Amanda run doesn't seem to have finished.  amstatus daily gives me some
>> information, but I'm not sure how to interpret it.
>> 
>> There are still 3 files on the holding disk, adding up to about 48GB.  The
>> tape drive doesn't seem to be doing anything - just sitting there quietly at
>> the moment with no sign of activity.
>> 
>> I won't include the entire output of amstatus daily, but here are extracts,
>> if someone can please tell me if they see something wrong.
>> 
>> I have many entries like these - seems to be one for each DLE:
>> cerberus:/home                   0  1003801k finished
>> (22:18:15)
>> 
>> Then these entries, which I think are the 2 that failed, as shown later in
>> the summary:
>> cerberus:/.autofsck              0 planner: [disk /.autofsck
>> offline on cerberus?]
>> cerberus:/.fonts.cache-1         0 planner: [disk
>> /.fonts.cache-1 offline on cerberus?]
>> 
>> Then these 3 that are the ones still on the holding disk:
>> minerva:/home                    0  8774296k writing to tape
>> (23:09:07)
>> minerva:/usr/local/clients       0 32253287k dump done
>> (1:08:27), wait for writing to tape
>> minerva:/usr/local/development   0  9687648k dump done
>> (23:48:17), wait for writing to tape
>> 
>> And then this summary, which I'm not sure how to interpret:
>> SUMMARY          part      real   estimated
>>                            size        size
>> partition       : 109
>> estimated       : 107             69631760k
>> flush           :   0         0k
>> failed          :   2         0k             (  0.00%)
>> wait for dumping:   0         0k             (  0.00%)
>> dumping to tape :   0         0k             (  0.00%)
>> dumping         :   0         0k         0k (  0.00%) (  0.00%)
>> dumped          : 107  58148656k  69631760k ( 83.51%) ( 83.51%)
>> wait for writing:   2  41940935k  48107940k ( 87.18%) ( 60.23%)
>> wait to flush   :   0         0k         0k (100.00%) (  0.00%)
>> writing to tape :   1   8774296k  12515695k ( 70.11%) ( 12.60%)
>> failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
>> taped           : 104   7433425k   9008125k ( 82.52%) ( 10.68%)
>> 4 dumpers idle  : not-idle
>> taper writing, tapeq: 2
>> network free kps:      2000
>> holding space   :  50295358k ( 49.79%)
>>  dumper0 busy   :  2:53:47  (  5.67%)
>>  dumper1 busy   :  0:13:48  (  0.45%)
>>  dumper2 busy   :  0:00:00  (  0.00%)
>>    taper busy   :  0:12:38  (  0.41%)
>>  0 dumpers busy : 2+0:07:56  ( 94.22%)          not-idle: 2+0:00:04  ( 99.73%)
>>                                              start-wait:  0:07:51  (  0.27%)
>>  1 dumper busy  :  2:46:29  (  5.43%)           not-idle:  1:20:10  ( 48.15%)
>>                                      client-constrained:  1:18:08  ( 46.93%)
>>                                            no-bandwidth:  0:04:16  (  2.57%)
>>                                              start-wait:  0:03:54  (  2.35%)
>>  2 dumpers busy :  0:10:34  (  0.35%)  client-constrained:  0:06:22  ( 60.27%)
>>                                              start-wait:

RE Unravel amstatus output

2006-07-17 Thread Cyrille Bollu

Looking with my newbie's eyes it seems that Amanda is running well. Just
very slowly.

And Amanda's log seems to indicate that the problem is on the tape drive
side.

The only thing strange that I see is the following line which says that
your drive is busy only 0.41% of the time:

>    taper busy   :  0:12:38  (  0.41%)

What does it do the rest of the time???

[EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:

> 
> I set up Amanda on Friday to do an almost real backup job.  I thought this
> would be the final test before putting it into operation.
> 
> When I arrived at work this morning, I was somewhat surprised to see that
> the Amanda run doesn't seem to have finished.  amstatus daily gives me some
> information, but I'm not sure how to interpret it.
> 
> There are still 3 files on the holding disk, adding up to about 48GB.  The
> tape drive doesn't seem to be doing anything - just sitting there quietly at
> the moment with no sign of activity.
> 
> I won't include the entire output of amstatus daily, but here are extracts,
> if someone can please tell me if they see something wrong.
> 
> I have many entries like these - seems to be one for each DLE:
> cerberus:/home                   0  1003801k finished (22:18:15)
> 
> Then these entries, which I think are the 2 that failed, as shown later in
> the summary:
> cerberus:/.autofsck              0 planner: [disk /.autofsck
> offline on cerberus?]
> cerberus:/.fonts.cache-1         0 planner: [disk
> /.fonts.cache-1 offline on cerberus?]
> 
> Then these 3 that are the ones still on the holding disk:
> minerva:/home                    0  8774296k writing to tape
> (23:09:07)
> minerva:/usr/local/clients       0 32253287k dump done
> (1:08:27), wait for writing to tape
> minerva:/usr/local/development   0  9687648k dump done
> (23:48:17), wait for writing to tape
> 
> And then this summary, which I'm not sure how to interpret:
> SUMMARY          part      real   estimated
>                            size        size
> partition       : 109
> estimated       : 107             69631760k
> flush           :   0         0k
> failed          :   2         0k             (  0.00%)
> wait for dumping:   0         0k             (  0.00%)
> dumping to tape :   0         0k             (  0.00%)
> dumping         :   0         0k         0k (  0.00%) (  0.00%)
> dumped          : 107  58148656k  69631760k ( 83.51%) ( 83.51%)
> wait for writing:   2  41940935k  48107940k ( 87.18%) ( 60.23%)
> wait to flush   :   0         0k         0k (100.00%) (  0.00%)
> writing to tape :   1   8774296k  12515695k ( 70.11%) ( 12.60%)
> failed to tape  :   0         0k         0k (  0.00%) (  0.00%)
> taped           : 104   7433425k   9008125k ( 82.52%) ( 10.68%)
> 4 dumpers idle  : not-idle
> taper writing, tapeq: 2
> network free kps:      2000
> holding space   :  50295358k ( 49.79%)
>  dumper0 busy   :  2:53:47  (  5.67%)
>  dumper1 busy   :  0:13:48  (  0.45%)
>  dumper2 busy   :  0:00:00  (  0.00%)
>    taper busy   :  0:12:38  (  0.41%)
>  0 dumpers busy : 2+0:07:56  ( 94.22%)          not-idle: 2+0:00:04  ( 99.73%)
>                                              start-wait:  0:07:51  (  0.27%)
>  1 dumper busy  :  2:46:29  (  5.43%)           not-idle:  1:20:10  ( 48.15%)
>                                      client-constrained:  1:18:08  ( 46.93%)
>                                            no-bandwidth:  0:04:16  (  2.57%)
>                                              start-wait:  0:03:54  (  2.35%)
>  2 dumpers busy :  0:10:34  (  0.35%)  client-constrained:  0:06:22  ( 60.27%)
>                                              start-wait:  0:04:05  ( 38.76%)
>                                            no-bandwidth:  0:00:06  (  0.96%)
>  3 dumpers busy :  0:00:00  (  0.00%)
> 
> I would highly appreciate your insight into what is going on, especially for
> the 3 DLEs that are "waiting for writing to tape".
> -- 
> View this message in context:
> http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5357597
> Sent from the Amanda - Users forum at Nabble.com.
> 


Unravel amstatus output

2006-07-17 Thread Joe Donner (sent by Nabble.com)

I set up Amanda on Friday to do an almost real backup job.  I thought this
would be the final test before putting it into operation.

When I arrived at work this morning, I was somewhat surprised to see that
the Amanda run doesn't seem to have finished.  amstatus daily gives me some
information, but I'm not sure how to interpret it.

There are still 3 files on the holding disk, adding up to about 48GB.  The
tape drive doesn't seem to be doing anything - just sitting there quietly at
the moment with no sign of activity.

I won't include the entire output of amstatus daily, but here are extracts,
if someone can please tell me if they see something wrong.

I have many entries like these - seems to be one for each DLE:
cerberus:/home   0  1003801k finished (22:18:15)

Then these entries, which I think are the 2 that failed, as shown later in
the summary:
cerberus:/.autofsck  0 planner: [disk /.autofsck
offline on cerberus?]
cerberus:/.fonts.cache-1 0 planner: [disk
/.fonts.cache-1 offline on cerberus?]

Then these 3 that are the ones still on the holding disk:
minerva:/home0  8774296k writing to tape
(23:09:07)
minerva:/usr/local/clients   0 32253287k dump done
(1:08:27), wait for writing to tape
minerva:/usr/local/development   0  9687648k dump done
(23:48:17), wait for writing to tape

And then this summary, which I'm not sure how to interpret:
SUMMARY          part      real  estimated
                           size       size
partition       : 109
estimated       : 107             69631760k
flush           :   0         0k
failed          :   2         0k             (  0.00%)
wait for dumping:   0         0k             (  0.00%)
dumping to tape :   0         0k             (  0.00%)
dumping         :   0         0k         0k  (  0.00%) (  0.00%)
dumped          : 107  58148656k  69631760k  ( 83.51%) ( 83.51%)
wait for writing:   2  41940935k  48107940k  ( 87.18%) ( 60.23%)
wait to flush   :   0         0k         0k  (100.00%) (  0.00%)
writing to tape :   1   8774296k  12515695k  ( 70.11%) ( 12.60%)
failed to tape  :   0         0k         0k  (  0.00%) (  0.00%)
taped           : 104   7433425k   9008125k  ( 82.52%) ( 10.68%)
4 dumpers idle  : not-idle
taper writing, tapeq: 2
network free kps:  2000
holding space   :  50295358k ( 49.79%)
 dumper0 busy   :  2:53:47  (  5.67%)
 dumper1 busy   :  0:13:48  (  0.45%)
 dumper2 busy   :  0:00:00  (  0.00%)
   taper busy   :  0:12:38  (  0.41%)
 0 dumpers busy : 2+0:07:56  ( 94.22%)            not-idle: 2+0:00:04  ( 99.73%)
                                                start-wait:   0:07:51  (  0.27%)
 1 dumper busy  :   2:46:29  (  5.43%)            not-idle:   1:20:10  ( 48.15%)
                                        client-constrained:   1:18:08  ( 46.93%)
                                              no-bandwidth:   0:04:16  (  2.57%)
                                                start-wait:   0:03:54  (  2.35%)
 2 dumpers busy :   0:10:34  (  0.35%)  client-constrained:   0:06:22  ( 60.27%)
                                                start-wait:   0:04:05  ( 38.76%)
                                              no-bandwidth:   0:00:06  (  0.96%)
 3 dumpers busy :   0:00:00  (  0.00%)
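For what it's worth (this reading is inferred from the numbers in the summary, not taken from the amstatus man page): the first percentage on each line appears to be that line's real size divided by its estimated size, and the second percentage the real size divided by the total estimated size of the whole run. A quick Python check of that interpretation against the figures above:

```python
# Inferred meaning of the two percentage columns in the amstatus SUMMARY:
#   first  = real size / estimated size for that row
#   second = real size / total estimated size of the run
total_estimated = 69631760  # from the "estimated : 107 69631760k" line

rows = {
    # name: (real_kB, estimated_kB), copied from the summary above
    "dumped":           (58148656, 69631760),
    "wait for writing": (41940935, 48107940),
    "writing to tape":  ( 8774296, 12515695),
    "taped":            ( 7433425,  9008125),
}

for name, (real, est) in rows.items():
    pct_row   = 100.0 * real / est
    pct_total = 100.0 * real / total_estimated
    print(f"{name:18s} ({pct_row:6.2f}%) ({pct_total:6.2f}%)")
```

Running this reproduces the percentage pairs shown in the summary (83.51/83.51, 87.18/60.23, 70.11/12.60, 82.52/10.68), which supports the inferred column meanings.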

I would highly appreciate your insight into what is going on, especially for
the 3 DLEs that are "waiting for writing to tape".
-- 
View this message in context: 
http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5357597
Sent from the Amanda - Users forum at Nabble.com.



Re: amstatus during active dump

2006-07-12 Thread Jon LaBadie
On Wed, Jul 12, 2006 at 06:14:54AM -0700, Joe Donner (sent by Nabble.com) wrote:
> 
> When you run "amstatus config" while a dump is active, does it show you a
> "progress report" (for want of a better term)?
> 

The man command, or simply try it, is your friend here.

From the man page:

   DESCRIPTION
   Amstatus gives the current state of the Amanda run specified
   by the config configuration.

   If there is no active Amanda running, it summarizes the result
   of the last run.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


amstatus during active dump

2006-07-12 Thread Joe Donner (sent by Nabble.com)

When you run "amstatus config" while a dump is active, does it show you a
"progress report" (for want of a better term)?

Thanks.
-- 
View this message in context: 
http://www.nabble.com/amstatus-during-active-dump-tf1930763.html#a5288402
Sent from the Amanda - Users forum at Nabble.com.



Re: amanda backup status - amstatus

2006-07-12 Thread Olivier Nicole
> How can I verify how much data was backed up? Is there any way
> we can figure this out from the amdump.1 log file?

The report that is emailed to you will tell you how much data was
dumped (before and after compression) and how much was saved to tape.

This is in the summary part of the report.

Olivier


amanda backup status - amstatus

2006-07-12 Thread silpa kala
Hi,

How can I verify how much data was backed up? Is there any way
we can figure this out from the amdump.1 log file?

Please clarify this doubt.

Thanks & Regards,
silpakala



Version 2.5.0p2: amstatus parse error for logfile from older version

2006-06-12 Thread Toralf Lund
I'm trying to run "amstatus" on existing logfiles after upgrading from 
version 2.4.4p3 to 2.5.0p2. Unfortunately, the command will most of the 
time fail with a message like:


amstatus ks --file  /dumps/amanda/ks/log/amdump.1
Using /dumps/amanda/ks/log/amdump.1 from Thu Jun  8 17:04:30 CEST 2006
ERROR getting estimates 0 (909420) -1 (-1) -1 (-1) at 
/usr/sbin/amstatus line 213,  line 74.


The error seems to come from the following section of PERL code:

   if(/getting estimates (-?\d) \(-2\) (-?\d) \(-2\) (-?\d) \(-2\)/) {
   if($1 != -1) { $getest{$hostpart} .= ":$1:" };
   if($2 != -1) { $getest{$hostpart} .= ":$2:" };
   if($3 != -1) { $getest{$hostpart} .= ":$3:" };
   }
   else {
   die("ERROR $_");
   }

Which does not really make sense to me. Am I missing something, or does 
the above match operator *require* occurrences of the literal string 
"(-2)" (as opposed to "some value in brackets")?
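It does indeed: `\(-2\)` matches only the exact text "(-2)", so a line containing "(909420)" falls through to the `die`. A quick demonstration, sketched in Python rather than Perl so it runs standalone (the escaped-parenthesis behavior is the same in both engines; the generalized pattern here is only an illustration, not the actual fix in any amstatus release):

```python
import re

# The pattern as quoted from amstatus: the escaped parens with a literal
# -2 between them match only the exact substring "(-2)".
literal = re.compile(
    r'getting estimates (-?\d) \(-2\) (-?\d) \(-2\) (-?\d) \(-2\)')

# A generalized variant that accepts any signed integer in parentheses.
general = re.compile(
    r'getting estimates (-?\d+) \((-?\d+)\) '
    r'(-?\d+) \((-?\d+)\) (-?\d+) \((-?\d+)\)')

# The line from the old (2.4.4p3) logfile that triggers the error.
old_line = "getting estimates 0 (909420) -1 (-1) -1 (-1)"

print(literal.search(old_line))      # None: "(909420)" is not "(-2)"
print(general.search(old_line).groups())
```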


Isn't the new amstatus expected to work with old logfiles?

- Toralf


