Re: not a nitpick, feature question

2010-09-22 Thread Charles Curley
On Wed, 22 Sep 2010 18:03:01 -0500
"Dustin J. Mitchell"  wrote:

>  - rsync the entire catalog to another machine nightly
> 
> I just stuck that last one in because it was my technique back when I
> managed a fleet of Amanda servers.  Each would simply rsync its config
> and catalog to the other servers.  Since they were all backing up to
> shared storage (a SAN), I could do a restore / amfetchdump / recovery
> of any dump on any server without trouble.  It's a very
> non-Amanda-centric solution, but it's *very* effective.

Uh, what's "amanda-centric" about using gnu tar or dump? Just because
it's software that doesn't ship with amanda is no reason to avoid
it. :-) As you say, it works.

I do something similar. I started with a recent post on the Amanda
fora, and now have a script that first runs the dumps for the day and
then backs up the amanda metadata. The metadata backups and the dumps
are then propagated to another machine and to offsite USB drives using
rsync.

I described the offsite capability at
http://www.charlescurley.com/blog/articles/off_site_backups_for_amanda/index.html
I haven't written up the metadata stuff yet.
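
In outline, the wrapper looks something like this -- just a sketch; the
real script, config name, paths and destinations differ:

#!/bin/sh
# Sketch only: config name, paths and destinations are placeholders.
CONF=DailySet1

# 1. run the day's dumps
amdump $CONF

# 2. snapshot the amanda metadata once amdump has finished, so nothing
#    is changing underneath the copy
tar -czf /backup/amanda-meta-$(date +%Y%m%d).tar.gz \
    /etc/amanda/$CONF /var/lib/amanda

# 3. propagate dumps and metadata to another machine and to the
#    offsite USB drive
rsync -a --delete /backup/ othermachine:/backup/
rsync -a --delete /backup/ /media/offsite/backup/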

-- 

Charles Curley  /"\ASCII Ribbon Campaign
Looking for fine software   \ /Respect for open standards
and/or writing?  X No HTML/RTF in email
http://www.charlescurley.com/ \No M$ Word docs in email

Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB


Re: not a nitpick, feature question

2010-09-22 Thread Dustin J. Mitchell
On Wed, Sep 22, 2010 at 8:05 PM, Christ Schlacta  wrote:
> Why keep a backup server up 24/7 when it only works during backup hours?

Often the "backup backup" server is a machine ordinarily devoted to
other purposes, with a few megs available for catalogs.

> Why not append the entire catalog to each tape?  Or at least the entire
> catalog to date?

Well, the second question suggests a strict interpretation of "entire"
as "all tapes written past, present and future."  And obviously that's
impossible without time-travel. :)

I did outline some of the downsides of writing the present catalog, or
(worse) past and present, to tape.  In particular, the EOM problem
worries me, since a lot of Amanda users now utilize the
flush-threshold-* parameters to fill tapes to 100%, leaving 0% over
for the catalog.  Tapes don't generally give you any indication of how
close they are to EOM, so there's no good way to say "my catalog's
about 30M, so only write dumps up until 30M from the end."
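
For reference, the fill-the-tape setups I have in mind look roughly
like this in amanda.conf (the values are only an example):

flush-threshold-dumped    100   # don't start a volume until a tape's worth is on holding disk
flush-threshold-scheduled 100   # likewise, counting dumps still in progress
taperflush                100   # leave data on holding disk rather than start a mostly-empty tape
autoflush                 yes   # flush the leftovers during the next run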

It's a bit of a wonky solution for getting catalog data that may be
better backed up, or regenerated, in other ways.  That's the bottom
line, IMHO.

By the way, the amrecatalog utility that I mentioned earlier would be
pretty easy to write.  Amrestore would provide a nice basis for it.

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com



Re: not a nitpick, feature question

2010-09-22 Thread Gene Heskett
On Wednesday, September 22, 2010 09:13:37 pm Christ Schlacta did opine:

> Why keep a backup server up 24/7 when it only works during backup hours?
 
Because it's also my main play box.

> Why not append the entire catalog to each tape?  Or at least the
> entire catalog to date?

This is what my wrapper does: it appends the entire catalog, including
the catalog/indices from the amdump just completed.
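
The gist of it, leaving out the details (the tape device, config name
and paths here are only assumptions, and it assumes the run's last tape
is still in the drive):

#!/bin/sh
CONF=Daily
TAPE=/dev/nst0

amdump $CONF

# step just past the last file amdump wrote, then append the metadata
mt -f $TAPE eod
tar -cf $TAPE /usr/local/etc/amanda/$CONF /var/lib/amanda
mt -f $TAPE offline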
 
> Sent from my iPhone
> 
> On Sep 22, 2010, at 17:53, Gene Heskett  wrote:
> > On Wednesday, September 22, 2010 08:48:40 pm Dustin J. Mitchell did
> > 
> > opine:
> >> On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie  wrote:
> >>> Any thoughts on how desirable you feel a separate copy of amanda
> >>> data would be, and on other approaches?
> >> 
> >> This comes up often, and I've never found a solution I'm happy enough
> >> with to make the "official" solution.
> >> 
> >> Gene's approach is the obvious one, but has a few limitations:
> >> 
> >> - What do you do if you run out of space on that tape?  Start a new
> >> tape?  How do you reflect the use of that new tape in the catalog?
> >> 
> >> - How does recovery from that metadata backup work?  There's a
> >> chicken-and-egg problem here, too - you'll need an Amanda config to
> >> run any Amanda tools other than amrestore.
> >> 
> >> Let's break down "metadata" into its component parts, too:
> >> 1. configuration
> >> 2. catalog (logdir, tapelist)
> >> 3. indexes
> >> 4. performance data (curinfo)
> >> 5. debug logs
> >> 
> >> Configuration (1) can be backed up like a normal DLE.  The catalog (2)
> >> should technically be recoverable from a scan of the tapes themselves,
> >> although the tool to do this is still awaiting a happy hacker to bring
> >> it to life.  Indexes (3) are only required for amrecover, and if your
> >> Amanda server is down, you likely want whole DLEs restored, so you
> >> only need amfetchdump.  Performance data (4) will automatically
> >> regenerate itself over subsequent runs, so there's no need to back it
> >> up.  Similarly, debug logs (5) can get quite large, and generally need
> >> not be backed up.
> >> 
> >> So, to my mind, the only component that needs special handling is the
> >> catalog, and we have a menu of ways to handle that:
> >> 
> >> - append a copy of the entire catalog to the last tape in each run
> >> (hey, what is the "last tape" now that we have multiple simultaneous
> >> tapers?)
> >> 
> >> - append a copy of only the catalog for that run to the last tape in
> >> each run
> >> 
> >> - finally get around to writing 'amrecatalog'
> >> 
> >> - rsync the entire catalog to another machine nightly
> >> 
> >> I just stuck that last one in because it was my technique back when I
> >> managed a fleet of Amanda servers.  Each would simply rsync its config
> >> and catalog to the other servers.  Since they were all backing up to
> >> shared storage (a SAN), I could do a restore / amfetchdump / recovery
> >> of any dump on any server without trouble.  It's a very
> >> non-Amanda-centric solution, but it's *very* effective.
> >> 
> >> Dustin
> > 
> > And TBT, Dustin, if that other server is known to also be a 24/7 machine,
> > that sounds like a jolly good idea.  My milling machine's drive, a 60Gb,
> > would have more than enough space to do that, also including an image of
> > /home/amanda.  But its uptime is possibly not on a par with this box's,
> > as the weather & power supply might take it down.


-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Kliban's First Law of Dining:
Never eat anything bigger than your head.


Re: not a nitpick, feature question

2010-09-22 Thread Christ Schlacta

Why keep a backup server up 24/7 when it only works during backup hours?

Why not append the entire catalog to each tape?  Or at least the  
entire catalog to date?


Sent from my iPhone

On Sep 22, 2010, at 17:53, Gene Heskett  wrote:

> On Wednesday, September 22, 2010 08:48:40 pm Dustin J. Mitchell did opine:
>
> > On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie  wrote:
> > > Any thoughts on how desirable you feel a separate copy of amanda
> > > data would be, and on other approaches?
> >
> > This comes up often, and I've never found a solution I'm happy enough
> > with to make the "official" solution.
> >
> > Gene's approach is the obvious one, but has a few limitations:
> >
> >  - What do you do if you run out of space on that tape?  Start a new
> > tape?  How do you reflect the use of that new tape in the catalog?
> >
> >  - How does recovery from that metadata backup work?  There's a
> > chicken-and-egg problem here, too - you'll need an Amanda config to
> > run any Amanda tools other than amrestore.
> >
> > Let's break down "metadata" into its component parts, too:
> >  1. configuration
> >  2. catalog (logdir, tapelist)
> >  3. indexes
> >  4. performance data (curinfo)
> >  5. debug logs
> >
> > Configuration (1) can be backed up like a normal DLE.  The catalog (2)
> > should technically be recoverable from a scan of the tapes themselves,
> > although the tool to do this is still awaiting a happy hacker to bring
> > it to life.  Indexes (3) are only required for amrecover, and if your
> > Amanda server is down, you likely want whole DLEs restored, so you
> > only need amfetchdump.  Performance data (4) will automatically
> > regenerate itself over subsequent runs, so there's no need to back it
> > up.  Similarly, debug logs (5) can get quite large, and generally need
> > not be backed up.
> >
> > So, to my mind, the only component that needs special handling is the
> > catalog, and we have a menu of ways to handle that:
> >
> >  - append a copy of the entire catalog to the last tape in each run
> > (hey, what is the "last tape" now that we have multiple simultaneous
> > tapers?)
> >
> >  - append a copy of only the catalog for that run to the last tape in
> > each run
> >
> >  - finally get around to writing 'amrecatalog'
> >
> >  - rsync the entire catalog to another machine nightly
> >
> > I just stuck that last one in because it was my technique back when I
> > managed a fleet of Amanda servers.  Each would simply rsync its config
> > and catalog to the other servers.  Since they were all backing up to
> > shared storage (a SAN), I could do a restore / amfetchdump / recovery
> > of any dump on any server without trouble.  It's a very
> > non-Amanda-centric solution, but it's *very* effective.
> >
> > Dustin
>
> And TBT, Dustin, if that other server is known to also be a 24/7 machine,
> that sounds like a jolly good idea.  My milling machine's drive, a 60Gb,
> would have more than enough space to do that, also including an image of
> /home/amanda.  But its uptime is possibly not on a par with this box's,
> as the weather & power supply might take it down.
>
> --
> Cheers, Gene
> "There are four boxes to be used in defense of liberty:
>  soap, ballot, jury, and ammo. Please use in that order."
> -Ed Howdershelt (Author)
> Misery no longer loves company.  Nowadays it insists on it.
>    -- Russell Baker


Re: not a nitpick, feature question

2010-09-22 Thread Gene Heskett
On Wednesday, September 22, 2010 08:48:40 pm Dustin J. Mitchell did opine:

> On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie  wrote:
> > Any thoughts on how desirable you feel a separate copy of amanda
> > data would be, and on other approaches?
> 
> This comes up often, and I've never found a solution I'm happy enough
> with to make the "official" solution.
> 
> Gene's approach is the obvious one, but has a few limitations:
> 
>  - What do you do if you run out of space on that tape?  Start a new
> tape?  How do you reflect the use of that new tape in the catalog?
> 
>  - How does recovery from that metadata backup work?  There's a
> chicken-and-egg problem here, too - you'll need an Amanda config to
> run any Amanda tools other than amrestore.
> 
> Let's break down "metadata" into its component parts, too:
>  1. configuration
>  2. catalog (logdir, tapelist)
>  3. indexes
>  4. performance data (curinfo)
>  5. debug logs
> 
> Configuration (1) can be backed up like a normal DLE.  The catalog (2)
> should technically be recoverable from a scan of the tapes themselves,
> although the tool to do this is still awaiting a happy hacker to bring
> it to life.  Indexes (3) are only required for amrecover, and if your
> Amanda server is down, you likely want whole DLEs restored, so you
> only need amfetchdump.  Performance data (4) will automatically
> regenerate itself over subsequent runs, so there's no need to back it
> up.  Similarly, debug logs (5) can get quite large, and generally need
> not be backed up.
> 
> So, to my mind, the only component that needs special handling is the
> catalog, and we have a menu of ways to handle that:
> 
>  - append a copy of the entire catalog to the last tape in each run
> (hey, what is the "last tape" now that we have multiple simultaneous
> tapers?)
> 
>  - append a copy of only the catalog for that run to the last tape in
> each run
> 
>  - finally get around to writing 'amrecatalog'
> 
>  - rsync the entire catalog to another machine nightly
> 
> I just stuck that last one in because it was my technique back when I
> managed a fleet of Amanda servers.  Each would simply rsync its config
> and catalog to the other servers.  Since they were all backing up to
> shared storage (a SAN), I could do a restore / amfetchdump / recovery
> of any dump on any server without trouble.  It's a very
> non-Amanda-centric solution, but it's *very* effective.
> 
> Dustin

And TBT, Dustin, if that other server is known to also be a 24/7 machine,
that sounds like a jolly good idea.  My milling machine's drive, a 60Gb,
would have more than enough space to do that, also including an image of
/home/amanda.  But its uptime is possibly not on a par with this box's,
as the weather & power supply might take it down.


-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Misery no longer loves company.  Nowadays it insists on it.
-- Russell Baker


Re: not a nitpick, feature question

2010-09-22 Thread Dustin J. Mitchell
On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie  wrote:
> Any thoughts on how desirable you feel a separate copy of amanda
> data would be, and on other approaches?

This comes up often, and I've never found a solution I'm happy enough
with to make the "official" solution.

Gene's approach is the obvious one, but has a few limitations:

 - What do you do if you run out of space on that tape?  Start a new
tape?  How do you reflect the use of that new tape in the catalog?

 - How does recovery from that metadata backup work?  There's a
chicken-and-egg problem here, too - you'll need an Amanda config to
run any Amanda tools other than amrestore.
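
(For what it's worth, a config-less restore with amrestore looks
roughly like this; the device, host and disk names are placeholders,
and a decompression step goes before tar if the dump was compressed:

mt -f /dev/nst0 rewind
amrestore -p /dev/nst0 somehost /home | tar -xpvf -
)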

Let's break down "metadata" into its component parts, too:
 1. configuration
 2. catalog (logdir, tapelist)
 3. indexes
 4. performance data (curinfo)
 5. debug logs

Configuration (1) can be backed up like a normal DLE.  The catalog (2)
should technically be recoverable from a scan of the tapes themselves,
although the tool to do this is still awaiting a happy hacker to bring
it to life.  Indexes (3) are only required for amrecover, and if your
Amanda server is down, you likely want whole DLEs restored, so you
only need amfetchdump.  Performance data (4) will automatically
regenerate itself over subsequent runs, so there's no need to back it
up.  Similarly, debug logs (5) can get quite large, and generally need
not be backed up.
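
As an example of (1), a single disklist entry is enough -- the hostname
here is a placeholder, and "comp-user-tar" is assumed to be a dumptype
already defined in your config:

backuphost.example.com /usr/local/etc/amanda comp-user-tar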

So, to my mind, the only component that needs special handling is the
catalog, and we have a menu of ways to handle that:

 - append a copy of the entire catalog to the last tape in each run
(hey, what is the "last tape" now that we have multiple simultaneous
tapers?)

 - append a copy of only the catalog for that run to the last tape in each run

 - finally get around to writing 'amrecatalog'

 - rsync the entire catalog to another machine nightly

I just stuck that last one in because it was my technique back when I
managed a fleet of Amanda servers.  Each would simply rsync its config
and catalog to the other servers.  Since they were all backing up to
shared storage (a SAN), I could do a restore / amfetchdump / recovery
of any dump on any server without trouble.  It's a very
non-Amanda-centric solution, but it's *very* effective.
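
In cron terms it was nothing fancier than something like this on each
server (the paths and the peer hostname are placeholders):

# amanda user's crontab
30 7 * * * rsync -a --delete /usr/local/etc/amanda/ peer1:/srv/amanda-mirror/etc/
35 7 * * * rsync -a --delete /var/lib/amanda/ peer1:/srv/amanda-mirror/state/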

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


RE: amrecover bug?

2010-09-22 Thread Titl Erich
Hi Dustin 

> -Original Message-
> From: djmit...@gmail.com [mailto:djmit...@gmail.com] On 
> Behalf Of Dustin J. Mitchell
> Sent: Wednesday, September 22, 2010 4:58 PM
> To: Titl Erich
> Cc: amanda-users@amanda.org
> Subject: Re: amrecover bug?
> 
> On Wed, Sep 22, 2010 at 3:17 AM, Titl Erich  wrote:
> > A few weeks ago I reported in the forum a problem with amrecover and
> > compressed dump files. Meanwhile I changed to uncompressed backup,
> > still no luck and, unfortunately, not many replies either. So please
> > bear with me when I repeat this here.
> 
> No worries about the repetition - this mailing list is the more
> actively-monitored spot.
> 
> My first guess is that Amanda is not seeking to the required file
> appropriately.  Can you let us know what OS you're using, what type of
> tape, and also send along the amrecover debug log file from the client
> and the amidxtaped debug log file (with matching datestamp) from the
> server?

OS: Linux, Debian Lenny.
Tape: an Overland REO 9000 virtual tape library, iSCSI-connected.

> 
> Have a look at the FSF_AFTER_FILEMARK property in 
> amanda-devices(7), as that's my first guess for what's going wrong.

OK, looking at my config, this is not defined, so it is currently false.
I have now set it to true.

subversion:/backup/amanda# amrecover
AMRECOVER Version 3.1.2. Contacting server on amanda.ruf.ch ...
220 amanda AMANDA index server (3.1.2) ready.
Setting restore date to today (2010-09-22)
200 Working date set to 2010-09-22.
200 Config set to DailySet1.
501 Host subversion is not in your disklist.
Trying host subversion ...
501 Host subversion is not in your disklist.
Trying host subversion.ruf.ch ...
200 Dump host set to subversion.ruf.ch.
Use the setdisk command to choose dump disk to recover
amrecover> listdisk
200- List of disk for host subversion.ruf.ch
201- /
201- /home
201- /var
201- /usr
201- /data
200 List of disk for host subversion.ruf.ch
amrecover> setdisk /data
200 Disk set to /data.
amrecover> ls
2010-09-22-01-00-02 svn/
2010-09-22-01-00-02 lost+found/
2010-09-22-01-00-02 amanda/
2010-09-22-01-00-02 .
amrecover> add svn
Added dir /svn/ at date 2010-09-21-08-23-12
Added dir /svn/ at date 2010-09-22-01-00-02
amrecover> extract

Extracting files using tape drive changer on host amanda.ruf.ch.
The following tapes are needed: amanda-0004
amanda-0005

Extracting files using tape drive changer on host amanda.ruf.ch.
Load tape amanda-0004 now
Continue [?/Y/n/s/d]?
tar: ./svn: Not found in archive
tar: Error exit delayed from previous errors
Extractor child exited with status 2
Extracting files using tape drive changer on host amanda.ruf.ch.
Load tape amanda-0005 now
Continue [?/Y/n/s/d]?
tar: ./svn: Not found in archive
tar: Error exit delayed from previous errors
Extractor child exited with status 2
amrecover>

I infer from this that tar found a reasonable archive, but was unable to
extract the svn directory.

Here is some debug output from the client:

Wed Sep 22 17:14:28 2010: amrecover: user command: 'ls'
Wed Sep 22 17:14:34 2010: amrecover: user command: 'add svn'
Wed Sep 22 17:14:34 2010: amrecover: add_glob (svn) -> ^svn$
Wed Sep 22 17:14:34 2010: amrecover: add_file: Looking for "svn[/]*$"
Wed Sep 22 17:14:34 2010: amrecover: add_file: Converted path="svn[/]*$" to
path_on_disk="/svn[/]*$"
Wed Sep 22 17:14:34 2010: amrecover: add_file: Pondering ditem->path=/svn/
Wed Sep 22 17:14:34 2010: amrecover: sending: ORLD /svn^M


Wed Sep 22 17:14:34 2010: amrecover: add_file: (Successful) Added dir /svn/
at date 2010-09-21-08-23-12
Wed Sep 22 17:14:35 2010: amrecover: add_file: (Successful) Added dir /svn/
at date 2010-09-22-01-00-02
Wed Sep 22 17:14:35 2010: amrecover: add_file: Pondering
ditem->path=/lost+found/
Wed Sep 22 17:14:35 2010: amrecover: add_file: Pondering
ditem->path=/amanda/
Wed Sep 22 17:14:35 2010: amrecover: add_file: Pondering ditem->path=/.
Wed Sep 22 17:14:46 2010: amrecover: user command: 'extract'
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=(nil),
label='amanda-0004', file=-1, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=0xde0910,
label='amanda-0004', file=13, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=(nil),
label='amanda-0005', file=-1, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=0xde0910,
label='amanda-0005', file=6, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=(nil),
label='amanda-0004', file=-1, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: append_to_tapelist(tapelist=0xde0910,
label='amanda-0004', file=13, partnum=-1,  isafile=0)
Wed Sep 22 17:14:46 2010: amrecover: Requesting tape amanda-0004 from user
Wed Sep 22 17:15:17 2010: amrecover: User prompt: 'Continue [?/Y/n/s/d]? ';
response: ''
Wed Sep 22 17:15:17 2010: amrecover: security_getdriver(name=bsd) returns
0x7f

not a nitpick, feature question

2010-09-22 Thread Jon LaBadie
Blueskying here:

Backing up amanda's data has always been a problem.

Gene's solution of running a separate backup program
after amdump and adding its output to the end of the
last amanda tape could work, but a method within amanda
itself would be the better solution.  Gene's approach
also has the big advantage that the data is not
changing while it is being backed up.

I wonder whether the new changes and APIs would allow us
to revisit this "problem".  I'm thinking of something
like a new dumptype option, "run last" or "run after".
The DLE it is applied to could list the server's
root directory as the starting point and have several
"include" directives to specify the various amanda
directories to back up (even if they are on different filesystems).
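
Purely hypothetical, but I picture something along these lines; the
commented-out keyword does not exist today, and the dumptype name,
include syntax and paths are only an illustration:

# disklist
server.example.com / amanda-metadata

# amanda.conf
define dumptype amanda-metadata {
   comp-user-tar
   include file "./etc/amanda"
   include file append "./var/lib/amanda"
   # run-after "amdump"    # the proposed, not-yet-existing option
}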

The above approach would save the amanda data as the
last data on the media.  Two alternative approaches
would put the data at the head of the next dump.  One
would be to run a second amdump after the primary one
completed, dumping only that one DLE and leaving it on
the holding disk to be auto-flushed later.  The other
approach would be a "run before" dumptype.

Any thoughts on how desirable you feel a separate copy
of amanda data would be, and on other approaches?

Jon
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: amrecover bug?

2010-09-22 Thread Dustin J. Mitchell
On Wed, Sep 22, 2010 at 3:17 AM, Titl Erich  wrote:
> A few weeks ago I reported in the forum a problem with amrecover and
> compressed dump files. Meanwhile I changed to uncompressed backup, still no
> luck and, unfortunately, not many replies either. So please bear with me when
> I repeat this here.

No worries about the repetition - this mailing list is the more
actively-monitored spot.

My first guess is that Amanda is not seeking to the required file
appropriately.  Can you let us know what OS you're using, what type of
tape, and also send along the amrecover debug log file from the client
and the amidxtaped debug log file (with matching datestamp) from the
server?

Have a look at the FSF_AFTER_FILEMARK property in amanda-devices(7),
as that's my first guess for what's going wrong.
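
Setting it means adding a device-property to a device definition in
amanda.conf, something like the following -- the device name and path
are placeholders; see amanda-devices(7) for the details:

define device reo-vtl {
    tapedev "tape:/dev/nst0"
    device-property "FSF_AFTER_FILEMARK" "TRUE"
}
tapedev "reo-vtl"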

Dustin

-- 
Open Source Storage Engineer
http://www.zmanda.com


Re: 3.2.0svn3411

2010-09-22 Thread Stefan G. Weichinger
On 22.09.2010 12:11, Jean-Louis Martineau wrote:
> Stefan,
> 
> Remove the log, amdump or amflush file from the log dir.

see attachment. nothing taped.

Stefan


amdump.1.gz
Description: GNU Zip compressed data


Re: 3.2.0svn3411

2010-09-22 Thread Gene Heskett
On Wednesday, September 22, 2010 08:21:45 am Jean-Louis Martineau did 
opine:

> What's the taperalgo? Is it firstfit, largestfit or smallest?
> Have you enabled splitting?
LARGESTFIT, and no:

r...@coyote Daily]# grep split /usr/local/etc/amanda/Daily/amanda.conf
#  N Kb/Mb/Gb split images in chunks of size N
##   tape_splitsize 1G
##   split_diskbuffer   "/dumps"
#   fallback_splitsize  64m


> 
> Try the attached patch

The email I got from this morning's amflush run said 'no file to flush' 32
times.

It's working.  Not done yet, but it is pouring 100 megs/sec into slot25.

Darned typos are hard to find. ;-)  I assume 3435 & up will carry these two
patches forward?

Done, report printed, looks normal.  Email from the flush looks normal.  We 
appear to be back to the regularly scheduled programming.

Thank you very much.

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
The rose of yore is but a name, mere names are left to us.


Re: 3.2.0svn3411

2010-09-22 Thread Jean-Louis Martineau

What's the taperalgo? Is it firstfit, largestfit or smallest?
Have you enabled splitting?

Try the attached patch

Jean-Louis

Gene Heskett wrote:
> On Wednesday, September 22, 2010 06:01:08 am Jean-Louis Martineau did opine:
>
> > Attachment included.
> >
> > Jean-Louis
> >
> > Jean-Louis Martineau wrote:
> > > I can't find why it flushes nothing.
> > >
> > > Can you upgrade to latest SVN and try the attached patch, it adds more
> > > debugging.
> > > Add 'debug-driver 1' in amanda.conf
> > >
> > > Retry amflush and send me the resulting amflush.1 file.
>
> 3.2.0.svn.3434 with patch. taper didn't write, 28Gb in holding disk.
>
> Ran amflush -f Daily by hand.  Returned in about 2 seconds.


diff --git a/server-src/driver.c b/server-src/driver.c
index 6182ee0..148449f 100644
--- a/server-src/driver.c
+++ b/server-src/driver.c
@@ -780,6 +780,7 @@ startaflush_tape(
 disk_t *fit = NULL;
 char *datestamp;
 off_t extra_tapes_size = 0;
+off_t taper_left;
 char *qname;
 TapeAction result_tape_action;
 char *why_no_new_tape = NULL;
@@ -847,6 +848,11 @@ startaflush_tape(
 	}
 	}
 
+	if (taper->state & TAPER_STATE_TAPE_STARTED) {
+	taper_left = taper->left;
+	} else {
+	taper_left = tape_length;
+	}
 	dp = NULL;
 	datestamp = sched(tapeq.head)->datestamp;
 	switch(taperalgo) {
@@ -857,7 +863,7 @@ startaflush_tape(
 		fit = tapeq.head;
 		while (fit != NULL) {
 		if (sched(fit)->act_size <=
-		 (fit->splitsize ? extra_tapes_size : taper->left) &&
+		 (fit->splitsize ? extra_tapes_size : taper_left) &&
 			 strcmp(sched(fit)->datestamp, datestamp) <= 0) {
 			dp = fit;
 			fit = NULL;
@@ -883,9 +889,10 @@ startaflush_tape(
 		fit = tapeq.head;
 		while (fit != NULL) {
 		if(sched(fit)->act_size <=
-		   (fit->splitsize ? extra_tapes_size : taper->left) &&
+		   (fit->splitsize ? extra_tapes_size : taper_left) &&
 		   (!dp || sched(fit)->act_size > sched(dp)->act_size) &&
 		   strcmp(sched(fit)->datestamp, datestamp) <= 0) {
+g_debug("%ld %ld %ld", fit->splitsize, extra_tapes_size, taper->left);
 			dp = fit;
 		}
 		fit = fit->next;
@@ -907,7 +914,7 @@ startaflush_tape(
 		fit = dp = tapeq.head;
 		while (fit != NULL) {
 		if (sched(fit)->act_size <=
-			(fit->splitsize ? extra_tapes_size : taper->left) &&
+			(fit->splitsize ? extra_tapes_size : taper_left) &&
 			(!dp || sched(fit)->act_size < sched(dp)->act_size) &&
 			strcmp(sched(fit)->datestamp, datestamp) <= 0) {
 			dp = fit;
@@ -924,7 +931,7 @@ startaflush_tape(
 		fit = tapeq.tail;
 		while (fit != NULL) {
 		if (sched(fit)->act_size <=
-			(fit->splitsize ? extra_tapes_size : taper->left) &&
+			(fit->splitsize ? extra_tapes_size : taper_left) &&
 			(!dp || sched(fit)->act_size < sched(dp)->act_size) &&
 			strcmp(sched(fit)->datestamp, datestamp) <= 0) {
 			dp = fit;


Re: 3.2.0svn3411

2010-09-22 Thread Gene Heskett
On Wednesday, September 22, 2010 06:01:08 am Jean-Louis Martineau did opine:

> Attachment included.
> 
> Jean-Louis
> 
> Jean-Louis Martineau wrote:
> > I can't find why it flushes nothing.
> > 
> > Can you upgrade to latest SVN and try the attached patch, it adds more
> > debugging.
> > Add 'debug-driver 1' in amanda.conf
> > 
> > Retry amflush and send me the resulting amflush.1 file.

3.2.0.svn.3434 with patch. taper didn't write, 28Gb in holding disk.

Ran amflush -f Daily by hand.  Returned in about 2 seconds.

amflush.20100922060304.debug:
Wed Sep 22 06:03:04 2010: amflush: pid 22495 ruid 501 euid 501 version 
3.2.0alpha.svn.3434: start at Wed Sep 22 06:03:04 2010
Wed Sep 22 06:03:04 2010: amflush: pid 22495 ruid 501 euid 501 version 
3.2.0alpha.svn.3434: rename at Wed Sep 22 06:03:04 2010
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._GenesAmandaHelper-0.6.2'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._bin.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._boot.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._etc.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._home.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._lib.2'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._opt.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._root.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._sbin.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._tmp.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_X11R6.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_bin.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_dlds_misc.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_dlds_rpms.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_dlds_tgzs.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_include.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_lib.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_libexec.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_local.2'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_movies.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_music.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_pix.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_sbin.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_share.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._usr_src.2'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/coyote._var.1'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._etc.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._home.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._usr_lib_amanda.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._usr_local.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._usr_src.0'
Wed Sep 22 06:03:14 2010: amflush: flushing 
'/usr/dumps/20100922005007/shop._var_lib_amanda.0'

amflush.1:

amflush: start at Wed Sep 22 06:03:14 EDT 2010
amflush: datestamp 20100922060304
amflush: starttime 20100922060304
amflush: starttime-locale-independent 2010-09-22 06:03:14 EDT
FLUSH coyote /GenesAmandaHelper-0.6 20100922005007 2 
/usr/dumps/20100922005007/coyote._GenesAmandaHelper-0.6.2
FLUSH coyote /bin 20100922005007 1 /usr/dumps/20100922005007/coyote._bin.1
FLUSH coyote /boot 20100922005007 1 /usr/dumps/20100922005007/coyote._boot.1
FLUSH coyote /etc 20100922005007 1 /usr/dumps/20100922005007/coyote._etc.1
FLUSH coyote /home 20100922005007 0 /usr/dumps/20100922005007/coyote._home.0
FLUSH coyote /lib 20100922005007 2 /usr/dumps/20100922005007/coyote._lib.2
FLUSH coyote /opt 20100922005007 0 /usr/dumps/20100922005007/coyote._opt.0
FLUSH coyote /root 20100922005007 1 /usr/dumps/20100922005007/coyote._root.1
FLUSH coyote /sbin 20100922005007 1 /usr/dumps/20100922005007/coyote._sbin.1
FLUSH coyote /tmp 20100922005007 1 /usr/dumps/20100922005007/coyote._tmp.1
FLUSH coyote /usr/X11R6 20100922005007 1 
/usr/dumps/20100922005007/coyote._usr_X11R6.1
FLUSH coyote /usr/bin 20100922005007 1 
/usr/dumps/20100922005007/coyote._usr_bin.1
FLUSH coyote /usr/dlds/misc 20100922005007 1 
/usr/dumps/20100922005007/coyote._usr_dlds_misc.1
FLUSH coyote /us

Re: 3.2.0svn3411

2010-09-22 Thread Jean-Louis Martineau

Stefan,

Remove the log, amdump or amflush file from the log dir.

Jean-Louis

Stefan G. Weichinger wrote:
> On 21.09.2010 19:55, Jean-Louis Martineau wrote:
> > Attachment included.
> >
> > Jean-Louis
> >
> > Jean-Louis Martineau wrote:
> > > I can't find why it flushes nothing.
> > >
> > > Can you upgrade to latest SVN and try the attached patch, it adds more
> > > debugging.
> > > Add 'debug-driver 1' in amanda.conf
> > >
> > > Retry amflush and send me the resulting amflush.1 file.
>
> Applied the patch, but my initial problem is still:
>
> FATAL amlogroll could not get current timestamp at
> /usr/libexec/amanda/amlogroll line 64.
>
> amflush doesn't even start.
>
> v3434 now
>
> Stefan




Re: 3.2.0svn3411

2010-09-22 Thread Stefan G. Weichinger
On 21.09.2010 19:55, Jean-Louis Martineau wrote:
> Attachment included.
> 
> Jean-Louis
> 
> Jean-Louis Martineau wrote:
>> I can't find why it flushes nothing.
>>
>> Can you upgrade to latest SVN and try the attached patch, it adds more
>> debugging.
>> Add 'debug-driver 1' in amanda.conf
>>
>> Retry amflush and send me the resulting amflush.1 file.

Applied the patch, but my initial problem is still:

FATAL amlogroll could not get current timestamp at
/usr/libexec/amanda/amlogroll line 64.

amflush doesn't even start.

v3434 now

Stefan


amrecover bug?

2010-09-22 Thread Titl Erich
Hi folks

A few weeks ago I reported in the forum a problem with amrecover and
compressed dump files. Meanwhile I changed to uncompressed backup, still no
luck and, unfortunately, not many replies either. So please bear with me when
I repeat this here.

subversion:/backup/amanda# amrecover
AMRECOVER Version 3.1.2. Contacting server on amanda.ruf.ch ...
220 amanda AMANDA index server (3.1.2) ready.
Setting restore date to today (2010-09-20)
200 Working date set to 2010-09-20.
200 Config set to DailySet1.
501 Host subversion is not in your disklist.
Trying host subversion ...
501 Host subversion is not in your disklist.
Trying host subversion.ruf.ch ...
200 Dump host set to subversion.ruf.ch.
Use the setdisk command to choose dump disk to recover
amrecover> setdisk /data
200 Disk set to /data.
amrecover> ls
2010-09-18-01-00-01 svn/
2010-09-18-01-00-01 lost+found/
2010-09-18-01-00-01 amanda/
2010-09-18-01-00-01 .
amrecover> add svn
Added dir /svn/ at date 2010-09-18-01-00-01
amrecover> extract

Extracting files using tape drive changer on host amanda.ruf.ch.
The following tapes are needed: amanda-0002

Extracting files using tape drive changer on host amanda.ruf.ch.
Load tape amanda-0002 now
Continue [?/Y/n/s/d]?
Restoring files into directory /backup/amanda
All existing files in /backup/amanda can be deleted
Continue [?/Y/n]?

Checksum error 0, inode 0 file (null)
restore: Tape is not a dump tape

The tape part is definitely a dump tape, but amrecover just does not
recognize it as such.

On the server I extracted the relevant file with dd, cut off the first 32k
bytes and fed it to restore, which succeeded in restoring the directories.
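
Roughly like this (from memory; the tape device and the tape file
number are placeholders):

mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 6                            # position at the dump's file on the tape
dd if=/dev/nst0 bs=32k skip=1 | restore -ivf -   # drop the 32k Amanda header, then restore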

There is a bug in amrecover somewhere and it gives me grief.

I moved to tar-based backups and got similar issues; the backup in this case
is recognized by the server, but a specific file is not found. Could someone
direct me to a sensible debug procedure?

Thanks

Erich Titl


smime.p7s
Description: S/MIME cryptographic signature