re-indexing virtual tapes

2006-05-02 Thread Thomas Widhalm
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi!

I started using virtual tapes some weeks ago. Unfortunately I ran out of
disk space far more quickly than I expected, because I did many full
dumps due to misconfiguration. Now I had to delete some of the backups.
I did not find any other way than deleting the files within the virtual
tapes (only those with backups, not the files Amanda created when
labelling the "tapes"). Now, is there a way to re-index the left-over
files and delete the missing backups from the index?

Regards,
Thomas
- --

*
* Thomas Widhalm Unix Administrator *
* University of Salzburg   ITServices (ITS) *
* Systems Management   Unix Systems *
* Hellbrunnerstr. 34 5020 Salzburg, Austria *
* [EMAIL PROTECTED] +43/662/8044-6774 *
* gpg: 6265BAE6 *
* http://www.sbg.ac.at/zid/organisation/mitarbeiter/widhalm.htm *
*
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org

iD8DBQFEVwttkbjs3GJluuYRAiy5AJ9aMS1zcrxww0VYEUU8p/+CA9QjYACffcRF
vKHOZCj0Drpd+RAz/LvB97k=
=VhY7
-END PGP SIGNATURE-


Re: selfcheck problems with 2.5.0p1

2006-05-02 Thread Paul Bijnens

On 2006-05-01 22:52, Steven Sweet wrote:
I have just compiled amanda-2.5.0p1 on two systems.  When I run amcheck 
on the server system the server checks out fine, but I get this for the 
client check:


Amanda Backup Client Hosts Check

ERROR: SystemA.local:/ does not support DUMPER-API.
ERROR: SystemA.local: [BOGUS REQUEST PACKET]
Client check: 1 host checked in 0.014 seconds, 1 problem found


You get this error message when the program parameter in the dumptype is
not "GNUTAR" or "DUMP", e.g. "gnutar" in lowercase instead of uppercase.
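For reference, a minimal dumptype sketch with the casing that amcheck accepts (the dumptype name and the other parameters are illustrative, not from the poster's config):

```
# amanda.conf -- only the uppercase program value matters here
define dumptype example-gnutar {
    program "GNUTAR"      # must be uppercase; "gnutar" triggers BOGUS REQUEST PACKET
    compress client fast
    index yes
}
```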




I can't find any reference to this problem except one indicating a 
mismatch between Amanda versions, which I know isn't the problem in this 
case.  Does anyone know where I should start looking?  Is this a 
configuration problem or a missing requirements problem?





--
Paul Bijnens, xplanation Technology ServicesTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* F6, quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* init 0, kill -9 1, Alt-F4, Ctrl-Alt-Del, AltGr-NumLock, Stop-A, ... *
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out  *
***



Re: Scalability information sought

2006-05-02 Thread Alexander Jolk

Jon LaBadie wrote:

I looked back at the survey results from a few years
ago.  At that time the respondents reported maximums
of:

5 TB  total disk capacity
4 TB  actual data stored
  700 GB  average size of amdump run
   70 clients
? disklist entries (not asked in the survey)

If any of you have installations with substantially
higher values than these, I'd love to hear of it.


On one of my two sites, we have 20TB total disk capacity, of which about 
16TB is in use; on two servers, 800GB nightly (two 200GB LTO-2 tapes per 
server per night); 45 clients, split into about 2200 individual DLEs.


Alex


--
Alexander Jolk  * BUF Compagnie * [EMAIL PROTECTED]
Tel +33-1 42 68 18 28  *  Fax +33-1 42 68 18 29


Re: Scalability information sought

2006-05-02 Thread listrcv

Jon LaBadie wrote:

At the recent 'Trenton Computer Festival' I was
discussing amanda with a gentleman who asked
something like "how big can amanda go?"


Hm, what would the technical limit be? Like 4194304 clients and DLEs? Or 
is there no such limit?



GH


Re: Scalability information sought

2006-05-02 Thread Paul Bijnens

On 2006-05-02 10:55, listrcv wrote:

Jon LaBadie wrote:

At the recent 'Trenton Computer Festival' I was
discussing amanda with a gentleman who asked
something like "how big can amanda go?"


Hm, what would the technical limit be? Like 4194304 clients and DLEs? Or 
is there no such limit?


As a side note, Amanda uses UDP instead of TCP during the estimate
phase just to overcome the limit of concurrent TCP connections.

Current OSes can handle many TCP connections, but in the early
days of Amanda that was really a limiting factor if you had many
clients (a limit which Amanda did *not* bump into!).


--
Paul Bijnens, xplanation Technology Services



Default restore device issues

2006-05-02 Thread stan
I'm trying to get amrecover to default to my changer.

I've added the following to my script that runs configure:

--with-tape-device=chg-multi \
--with-changer-device=chg-multi \

made distclean, did a make, and a make install. I'm working on the server
for the moment, so I rebooted it just to make certain that I had the
correct daemons running.

But, still


amrecover> extract

Extracting files using tape drive /dev/nst0 on host amanda.meadwestvaco.com.
 
The following tapes are needed: DailyDump21

This is running amrecover without any options, on 2.5.0p1 if it matters.

Am I doing something wrong here?


-- 
U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite Vietcong 
Terror 
- New York Times 9/3/1967


Re: Default restore device issues

2006-05-02 Thread Paul Bijnens

On 2006-05-02 14:12, stan wrote:

I'm trying to get amrecover to default to my changer.

I've added the following to my script that runs configure:

--with-tape-device=chg-multi \
--with-changer-device=chg-multi \

made distclean, did a make, and a make install. I'm working on the server
for the moment, so I rebooted it just to make certain that I had the
correct daemons running.

But, still


amrecover> extract

Extracting files using tape drive /dev/nst0 on host amanda.meadwestvaco.com.
 
The following tapes are needed: DailyDump21

This is running amrecover without any options, on 2.5.0p1 if it matters.

Am I doing something wrong here?



Did you try the simple and documented parameter in amanda.conf:

  amrecover_changer  "changer"

No need at all to recompile and mess things up.


--
Paul Bijnens, xplanation Technology Services



Re: re-indexing virtual tapes

2006-05-02 Thread Jon LaBadie
On Tue, May 02, 2006 at 09:34:05AM +0200, Thomas Widhalm wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi!
> 
> I started using virtual tapes some weeks ago. Unfortunately I ran out of
> disk space far more quickly than I expected, because I did many full
> dumps due to misconfiguration. Now I had to delete some of the backups.
> I did not find any other way than deleting the files within the virtual
> tapes (only those with backups, not the files Amanda created when
> labelling the "tapes"). Now, is there a way to re-index the left-over
> files and delete the missing backups from the index?
> 

I don't know of an amanda tool for doing so.

However, the index files and the dump files
should have a 1:1 correspondence, a separate
index file for each dump you deleted.  So you
could locate the index for each dump you
deleted and delete it also.  Or reversing
the logic, pair up what you retained with its
index and delete the other indexes.

BTW amanda will still think the dumps are there
as that info comes from the log files.  When you
overwrite the vtapes that will change.
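A rough sketch of that pairing logic, assuming the usual layout (each vtape dump file starts with Amanda's plain-text header line `AMANDA: FILE <datestamp> <host> <disk> ...`, and index files are named `<datestamp>_<level>.gz`); the helper name and any paths you pass it are hypothetical:

```shell
# Sketch under assumptions: vtape data files begin with Amanda's plain-text
# header line "AMANDA: FILE <datestamp> <host> <disk> ...", and index files
# are named <datestamp>_<level>.gz.  Paths passed in are examples only.
find_orphan_indexes() {
    vtapes=$1 indexdir=$2
    # Datestamps still present on the vtapes, read from the dump headers.
    surviving=$(for f in "$vtapes"/slot*/0*; do
        head -c 512 "$f" 2>/dev/null | awk '/^AMANDA: FILE/ {print $3}'
    done | sort -u)
    # Flag index files whose datestamp no longer exists on any vtape.
    find "$indexdir" -name '*_[0-9].gz' | while read -r idx; do
        stamp=$(basename "$idx" | cut -d_ -f1)
        echo "$surviving" | grep -qx "$stamp" || echo "orphan: $idx"
    done
}
```

Run it as, e.g., `find_orphan_indexes /amandatapes /var/lib/amanda/Daily/index` and review the output before deleting anything.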

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Default restore device issues

2006-05-02 Thread Paul Bijnens

On 2006-05-02 14:12, stan wrote:

I'm trying to get amrecover to default to my changer.

I've added the following to my script that runs configure:

--with-tape-device=chg-multi \
--with-changer-device=chg-multi \

made distclean, did a make, and a make install. I'm working on the server
for the moment, so I rebooted it just to make certain that I had the
correct daemons running.

But, still


amrecover> extract

Extracting files using tape drive /dev/nst0 on host amanda.meadwestvaco.com.
 
The following tapes are needed: DailyDump21

This is running amrecover without any options, on 2.5.0p1 if it matters.

Am I doing something wrong here?



Moreover, whatever I try here to reproduce your problem, I can't.
When the SERVER is version 2.5.0, amrecover tries different
values until one succeeds:

   1. the device specified with amrecover -d ...
   2. the device specified with amrecover "settape" command
  or what the index-server returns, which tries these values:
   3. The amrecover_changer param in the amanda.conf on the server
   4. the tpchanger parameter in amanda.conf of the server
   5. the tapedev specified in amanda.conf of the server

So where do you have /dev/nst0 in that list?
Are you connecting with the correct "index-server"?


--
Paul Bijnens, xplanation Technology Services



Re: Scalability information sought

2006-05-02 Thread Alexander Jolk

Jon LaBadie wrote:

On Tue, May 02, 2006 at 10:45:35AM +0200, Alexander Jolk wrote:


On one of my two sites, we have 20TB total disk capacity, of which about 
16TB is in use; on two servers, 800GB nightly (two 200GB LTO-2 tapes per 
server per night); 45 clients, split into about 2200 individual DLEs.


Just an FMI question, Alexander:

That is about 50 DLE's per client.
Are you doing that on a per-user basis or something similar?
Are they separate file systems, separate directory trees,
or are you doing the old include/exclude thing?


These 50 DLEs are individual directory trees.  When one particular
directory gets too big, I split it into several individual
subdirectory DLEs, plus one for the root excluding those subdirs.
I wrote a small perl script that helps me with the splitting,
producing disklist stanzas as follows:


# edge3:/vol/SEQS
edge3   /vol/SEQS/BANK  comp-user-tar 1
edge3   /vol/SEQS/D1comp-user-tar 1
edge3   /vol/SEQS/F comp-user-tar 1
edge3   /vol/SEQS/F5comp-user-tar 1
edge3   /vol/SEQS {
comp-work-tar
exclude append "./BANK"
exclude append "./D1"
exclude append "./F"
exclude append "./F5"
} 1
# end edge3:/vol/SEQS

I'm trying to keep my DLEs below 10GB for most of them, with occasional 
large ones up to 70GB, on 200GB LTO-2 without hardware compression.
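The splitting helper described above could be sketched like this (the original perl script isn't shown; this shell version and the dumptype names comp-user-tar/comp-work-tar simply follow the example stanzas):

```shell
# Emit one DLE per subdirectory of $2 on host $1, plus a catch-all
# stanza for the root that excludes those subdirs.  A sketch, not the
# original helper; dumptype names are taken from the example above.
gen_disklist() {
    host=$1 top=$2
    echo "# $host:$top"
    for d in "$top"/*/; do
        echo "$host   $top/$(basename "$d")   comp-user-tar 1"
    done
    echo "$host   $top {"
    echo "    comp-work-tar"
    for d in "$top"/*/; do
        echo "    exclude append \"./$(basename "$d")\""
    done
    echo "} 1"
    echo "# end $host:$top"
}
```

Usage: `gen_disklist edge3 /vol/SEQS >> disklist`, then review the output by hand.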


Alex



--
Alexander Jolk


number of disklist entries question

2006-05-02 Thread McGraw, Robert P.
I have 581 of the following type disklist entries. 

zorn  /export/users-aar  /export/fssnap/users  {
  users-tar
  include "./aar"
}

We have some people who have only a few files and some who have over 15GB. I
thought about segregating people by the first letter of their login name. I
realize that it probably takes longer having to create so many files on a
tape but then I can probably fill up a tape because I have so many small
files.


Is there a limit to the number of disklist entries?

Are there any other pros or cons?

Thanks

Robert


_
Robert P. McGraw, Jr.
Manager, Computer System EMAIL: [EMAIL PROTECTED]
Purdue University ROOM: MATH-807
Department of MathematicsPHONE: (765) 494-6055
150 N. University Street   FAX: (419) 821-0540
West Lafayette, IN 47907-2067




smime.p7s
Description: S/MIME cryptographic signature


Disabling LTO-2 hardware compression

2006-05-02 Thread Guy Dallaire
Hi,

Recently added an Overland LoaderXpress LTO 2 tape library to my setup.
Inside the loader is an OEM HP Ultrium tape drive. My tape server is a
CentOS 4.2 box (RHEL 4 clone). I'm using software compression with
amanda 2.4.5p1 already.

Problem is, the tape drive in the library always seems to have hardware
compression ON. There is no way on the library operator panel to force
the compression OFF.

I've heard that LTO2 drives can easily cope with already compressed data
(and do not try to re-compress it). Is this true? Otherwise, I fear that
trying to compress already compressed data might actually use more tape
space and reduce throughput.

I've searched the amanda wiki and did not find anything. Has anyone
devised a way to force hw compression off on a similar setup?

Thanks


Re: number of disklist entries question

2006-05-02 Thread Paul Bijnens

On 2006-05-02 16:03, McGraw, Robert P. wrote:
I have 581 of the following type disklist entries. 


zorn  /export/users-aar  /export/fssnap/users  {
  users-tar
  include "./aar"
}

We have some people who have only a few files and some who have over 15GB. I
thought about segregating people by the first letter of their login name. I
realize that it probably takes longer having to create so many files on a
tape but then I can probably fill up a tape because I have so many small
files.


Is there a limit to the number of disklist entries?


Yes there is: the size of the UDP packet, which in current versions
is limited to 32Kbytes (if really really needed, you can change the
source and make that 64K if your OS supports it -- most do if tweaked).

See: 
http://wiki.zmanda.com/index.php/Amdump:_results_missing#UDP_packet_too_large.3F
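A rough way to see how close each host's share of the disklist is to that ceiling (only an approximation: the real REQ packet also carries per-DLE options, so the true size is larger):

```shell
# Approximate per-host disklist bytes; usage: disklist_bytes /path/to/disklist
# Counts only lines that start a DLE (non-blank, non-comment, non-indented).
disklist_bytes() {
    awk '/^[^#[:space:]]/ {bytes[$1] += length($0) + 1}
         END {for (h in bytes) printf "%-20s %6d bytes\n", h, bytes[h]}' "$1"
}
```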




--
Paul Bijnens, xplanation Technology Services



tuning the estimate phase?

2006-05-02 Thread Paul Lussier
Hi all,

Is it possible to tune the estimate phase of a backup run?  We appear
to be getting NFS timeouts experienced by our NFS clients during the
estimate phase when the NFS server is getting backed up.

The going theory is this: during the estimation phase, amanda is
doing a gtar|gzip -c >/dev/null.  And, as we all know, the bandwidth
of /dev/null is damn near impossible to beat :)

During the actual dumping of data, the gtar|gzip is getting sent back
across the wire, and therefore gtar gets constrained by the bandwidth
of the network, which even at GigE is significantly lower than that of
/dev/null.  As a result, during the estimation phase, amanda is taking
over the disk IO to the RAID array and the NFS daemons are competing
for r/w access.

Since the entire array is a single file system, even the backup of
individual hierarchies seems to result in this blocking.

Does this sound like a reasonable theory?  If so, is there a way I can
tune the estimation to be "nicer"?

Any pointers, comments, suggestions, etc. welcome.

--
Seeya,
Paul


Re: Disabling LTO-2 hardware compression

2006-05-02 Thread Paul Bijnens

On 2006-05-02 16:17, Guy Dallaire wrote:

Hi,

Recently added an Overland LoaderXpress LTO 2 tape library to my setup. 
Inside the loader is an OEM HP Ultrium tape drive. My tape server is a 
centos 4.2 box (RHEL 4 clone)


I'm using software compression with amanda 2.4.5p1 already.

Problem is, the tape drive in the library always seems to have hardware 
compression ON. There is no way on the library operator panel to force 
the compression OFF.


How did you find out?  Used "amtapetype -c"?  (just curious)




I've heard that LTO2 drives can easily cope with already compressed data 
(and do not try to re compress it). Is this true ? Otherwise, I fear 
that trying to compress already compressed data might actually use more 
tape space and reduce throughput.


I've searched the amanda wiki and did not find anything.


Yes, leaving HW compression on for LTO drives does not hurt.
In fact, I have exactly the same, and have hw-compression on, so that
those few DLE's that have no compression enabled (because they are too
slow), can benefit from hw-compression, giving me a few extra bytes on
the tapes.

http://wiki.zmanda.com/index.php/Tapetype_definitions#HP448_.28LTO_Ultrium2.29_with_200.2F400_Gbyte_tapes


--
Paul Bijnens, xplanation Technology Services



Re: Disabling LTO-2 hardware compression

2006-05-02 Thread Guy Dallaire
2006/5/2, Paul Bijnens <[EMAIL PROTECTED]>:

> > Problem is, the tape drive in the library always seems to have hardware
> > compression ON. There is no way on the library operator panel to force
> > the compression OFF.
>
> How did you find out?  Used "amtapetype -c"?  (just curious)

No, the operator control panel has a status menu where you can show the
drive parameters; compression is ON.

I have tried this:
http://wiki.zmanda.com/index.php/Hardware_compression

and sent the mt command to the drive, but it seems to reset to HW
compression while reusing a tape. It looks like when it reads the tape
header, it sees that the tape was written with HW compression and
resets itself.

My tapes are already labeled, so I could probably just rewrite the label
with HW compression OFF. (The wiki could be a bit clearer about that:
what exactly does "Re-write the label block and write more /dev/zero
blocks to flush its buffers" mean? Won't it overwrite the stuff AFTER
the label, i.e. the backups?) But considering that it does not hurt to
leave it on, I'll keep it that way.

Thanks


Re: Default restore device issues

2006-05-02 Thread listrcv

Paul Bijnens wrote:

Extracting files using tape drive /dev/nst0 on host 
amanda.meadwestvaco.com.

>

So where do you have /dev/nst0 in that list?
Are you connecting with the correct "index-server"?


You'd better not try to restore files from the changer device (which is
incorrectly specified anyway, since the changer script rather than the
changer device was given when compiling) instead of from the tape device.


Otherwise, unless /dev/nst0 is not the tape device, I wonder what the 
problem is.



GH


Re: tuning the estimate phase?

2006-05-02 Thread Paul Bijnens

On 2006-05-02 16:22, Paul Lussier wrote:

Hi all,

Is it possible to tune the estimate phase of a backup run?  We appear
to be getting NFS timeouts experienced by our NFS clients during the
estimate phase when the NFS server is getting backed up.

The going theory is this that during the estimation phase, amanda is
doing a gtar|gzip -c >/dev/null.  And, as we all know, the bandwidth
of /dev/null is damn near impossible to beat :)

During the actual dumping of data, the gtar|gzip is getting sent back
across the wire, and therefore gtar gets constrained by the bandwidth
of the network, which even at GigE is significantly lower than that of
/dev/null.  As a result, during the estimation phase, amanda is taking
over the disk IO to the RAID array and the NFS daemons are competing
for r/w access.


So far the theory :-).  The reality is:

The client runs a "gtar --sparse --totals -f /dev/null --otheropts...".
No piping through gzip, no transfer over the network.
Gnutar itself has special code for handling output to /dev/null, and
doesn't even read the files in that case (unless stat() indicates a
sparse file; how that is handled depends on the version of gtar --
some versions do read sparse files).
Doing a stat() for each file/directory of the filesystem can indeed
stress the server.

Side note: because the output is not piped through gzip, Amanda can
only guess how much it will compress.  Therefore it builds up a history
of compression rates for each DLE.  The default assumed compression
rate for a new DLE (without history) can be tuned by the amanda.conf
parameter "comprate".
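For example (the dumptype and its values are illustrative, not recommendations):

```
# amanda.conf -- seed a new DLE's assumed full/incremental compression at 50%
define dumptype comp-user-tar {
    program "GNUTAR"
    compress client fast
    comprate 0.50, 0.50
}
```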




Since the entire array is a single file system, even the backup of
individual hierarchies seems to result in this blocking.

Does this sound like a reasonable theory? If so, is there a way I can
tune the estimation to be "nicer" ?


Avoid running multiple gtar processes at the same time
by specifying the "spindle" in the disklist.
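For example, giving the DLEs that live on the same RAID array the same spindle number keeps Amanda from working on them in parallel (hostnames and paths here are made up):

```
# disklist -- fourth field is the spindle; equal numbers serialize access
nfsserver  /export/raid/proj1  comp-user-tar  1
nfsserver  /export/raid/proj2  comp-user-tar  1
nfsserver  /export/raid/proj3  comp-user-tar  1
```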

Are you sure it happens during estimate?
Another possibility is to revert to faster/less accurate estimate
strategies:  "calcsize" is faster (but if stat() is indeed the
problem, this will not help much).
There is also a purely statistics-based estimate; see:

http://wiki.zmanda.com/index.php/Amdump:_results_missing#Timeout_during_estimate.3F


--
Paul Bijnens, xplanation Technology Services



RE: Disabling LTO-2 hardware compression

2006-05-02 Thread uwe . kaufmann
Hello Guy,

On Tuesday, May 02, 2006 4:18 PM Guy Dallaire wrote:
> I've heard that LTO2 drives can easily cope with already compressed
> data (and do not try to re compress it). Is this true ?

I don't know that.

> Has anyone devised a way to force hw compression off on a similar setup ?

I recently had a power struggle with Tandberg SLR7 and LTO-1 hardware
compression. I lost.

Seriously, I tried to switch hw comp. off with Linux scsi commands (see
older threads "SLR7" in this list) and at the end I had to send the drive to
Tandberg for repair. 

Until now I don't know what happened exactly because Tandberg sent the
repaired drive back to me without any comment and with hw comp. on again.

My solution (for SLR7 and LTO-1) was as follows:
Exchange the boot hard disk in the amanda machine with a new one
Install MS Win XP
Install SCSI Card driver
Install LTO Driver
Install Tandberg Toolkit (www.tandberg.com)
switch hw comp. off
Exchange the Windows boot hard disk with your original one
done.

It's not a joke, I am sorry.

Best regards
Uwe





Re: number of disklist entries question

2006-05-02 Thread Jon LaBadie
On Tue, May 02, 2006 at 10:03:12AM -0400, McGraw, Robert P. wrote:
> I have 581 of the following type disklist entries. 
> 
> zorn  /export/users-aar  /export/fssnap/users  {
>   users-tar
>   `include' "./aar"
> }
> 
> We have some people who have only a few file and some who have over 15GB. I
> thought about segregating people by the first letter of their login name. I
> realize that it probably takes longer having to create so many files on a
> tape but then I can probably fill up a tape because I have so many small
> files.
> 
> 
> Is there a limit to the number of disklist entries?
> 

The current "leader" just posted this morning,
in about 45 clients, 800GB/amdump run, 2200 DLEs.


-- 
Jon H. LaBadie


amverifyrun in 2.5.0p1

2006-05-02 Thread Sean Walmsley
Has anyone successfully used the amverifyrun utility in version 2.5.0p1?
When I run it, I get the error message:

changer: got exit: 2 str:  Illegal slot: "-1"
amtape: could not load slot : Illegal slot: "-1"
amtape: could not load slot : Illegal slot: "-1"
amtape: pid 11950 finish time Sat Apr 29 22:20:47 2006

Looking at the amverifyrun script, it is grepping through the amdump
output looking for a line of the form:

taper: slot

This worked in 2.4.5p1 because the amdump file contained the following
lines:

-- from 2.4.5p1 amdump file ---
...
changer: opening pipe to: /home/amanda/MEGABAK1/libexec/chg-zd-mtx
-slot current
...
changer: got exit: 0 str: 10 /dev/rmt/0n
taper: slot 10: date Xlabel MBK1_34 (new tape)
taper: read label `MBK1_34' date `X'
taper: wrote label `MBK1_34' date `20060429'
driver: result time 2.658 from taper: TAPER-OK
...
---

It doesn't seem to work in 2.5.0p1 because the amdump file contains only
the following data following the chg-zd-mtx command:

-- from 2.5.0p1 amdump file ---
...
changer: opening pipe to: /home/amanda/MEGABAK1/libexec/chg-zd-mtx
-slot current
...
changer: got exit: 0 str: 11 /dev/rmt/0n
taper: wrote label `MBK1_35' date `20060429'
driver: result time 13.409 from taper: TAPER-OK
...
---

To me, it looks like the following lines of output are simply missing
from the 2.5.0p1 output:

taper: slot 10: date Xlabel MBK1_34 (new tape)
taper: read label `MBK1_34' date `X'

Given this, it would appear that amverifyrun doesn't have a hope of
working properly.

Any assistance you can provide would be much appreciated (whether it's
a solution, a "me too", or an "it works fine for me").

Thanks,


Sean Walmsley


=
Sean Walmsley [EMAIL PROTECTED]
Nuclear Safety Solutions Ltd.  416-592-4608 (V)  416-592-5528 (F)
700 University Ave M/S H04 J19, Toronto, Ontario, M5G 1X6, CANADA



Re: Disabling LTO-2 hardware compression

2006-05-02 Thread Jon LaBadie
On Tue, May 02, 2006 at 10:17:48AM -0400, Guy Dallaire wrote:
> Hi,
> 
> Recently added an Overland LoaderXpress LTO 2 tape library to my setup.
> Inside the loader is an OEM HP Ultrium tape drive. My tape server is a
> centos 4.2 box (RHEL 4 clone)
> 
> I'm using software compression with amanda 2.4.5p1 already.
> 
> Problem is, the tape drive in the library always seems to have hardware
> compression ON. There is no way on the library operator panel to force the
> compression OFF.
> 
...
> Has anyone devised a way to force hw compression off on a similar setup ?


Fedora, and I presume CentOS, has a feature called "stinit".
Once configured stinit creates several additional device files
for each tape drive (up to 4).  Each can be configured to
have several parameters automatically set when the device
is opened.  This is similar to the style of Solaris and HP-UX
magtape devices.  So I set my lto drive and my dds3 drive to
be initialized with stinit such that the "l" device (eg nst0l)
is always opened with blocksize 32K and no compression.
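A sketch of such an /etc/stinit.def entry (the mode-to-suffix mapping and the exact model string vary by drive and distribution; check stinit(8) and your drive's inquiry data before copying this):

```
# /etc/stinit.def -- mode2 commonly maps to the "l" device (e.g. nst0l)
manufacturer=HP model="Ultrium 2-SCSI" {
    mode2 blocksize=32768 compression=0
}
```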


-- 
Jon H. LaBadie


Still Problems Restoring from Disk

2006-05-02 Thread Steven Backus
  Using 2.5.0p1 I can't restore any files from holding disk.  The
amindexd debug file says:

amindexd: > TAPE
amindexd: tapedev_is amrecover_changer: /dev/changer
amindexd: < 200 /dev/changer
amindexd: ? unexpected EOF

which is odd because it's not even supposed to be using the tape
drive.  Here's the amrecover session:

amrecover> add ftpusers
Added /opt/sfw/etc/ftpusers
amrecover> extract

Extracting files from holding disk on host whimsy.
The following files are needed: 
/home1/dumps/amanda/20060501190001/ambiance.med.utah.edu.sdc1.0

Restoring files into directory /tmp
Continue [?/Y/n]? 

Extracting from file  
/home1/dumps/amanda/20060501190001/ambiance.med.utah.edu.sdc1.0
tar: Read 2541 bytes from -
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
amrecover: Extractor child exited with status 2

extract_list - child returned non-zero status: 1
Continue [?/Y/n/r]? 
amrecover> 
--
Monday - Thursday I dump to the holding disk then amflush everything
to tape Friday.  Each night I get "taper: FATAL syncpipe_get: r:
unexpected EOF" in my log.  Anyone have any ideas?

Steve
-- 
Steven J. BackusComputer Specialist
University of Utah  E-Mail:  [EMAIL PROTECTED]
Biomedical Informatics  Alternate:  [EMAIL PROTECTED]
391 Chipeta Way -- Suite D150   Office:  801.587.9308
Salt Lake City, UT 84108-1266   http://www.math.utah.edu/~backus


Re: Still Problems Restoring from Disk

2006-05-02 Thread Steven Backus
> How are you forcing the dumps to stay in holding disk?
> May be the taper error could be a consequence of that.

In amanda.conf:

tapedev "/dev/fridaytape"
autoflush

In crontab:

0 14 * * 1-4 /usr/local/sbin/amcheck -c -m genepi
0 19 * * 1-4 /usr/local/sbin/amdump genepi
30 12 * * 5 ln -s /dev/nst0 /dev/fridaytape; /usr/local/sbin/amcheck -m genepi
0 19 * * 5 /usr/local/sbin/amdump genepi; rm /dev/fridaytape

Note I do the amcheck at 12:30 on Friday so I can leave early if
possible. :)

Steve
-- 
Steven J. Backus


Re: Still Problems Restoring from Disk

2006-05-02 Thread Jon LaBadie
On Tue, May 02, 2006 at 09:57:50AM -0600, Steven Backus wrote:
> > How are you forcing the dumps to stay in holding disk?
> > May be the taper error could be a consequence of that.
> 
> In amanda.conf:
> 
> tapedev "/dev/fridaytape"
> autoflush
> 
> In crontab:
> 
> 0 14 * * 1-4 /usr/local/sbin/amcheck -c -m genepi
> 0 19 * * 1-4 /usr/local/sbin/amdump genepi
> 30 12 * * 5 ln -s /dev/nst0 /dev/fridaytape; /usr/local/sbin/amcheck -m genepi
> 0 19 * * 5 /usr/local/sbin/amdump genepi; rm /dev/fridaytape
> 

Perhaps amrecover has an error handler that aborts if the tape device is
not available, even if it ultimately will not be needed.

Try running amrecover with the fridaytape link present.


BTW, trivial crontab simplification, one amdump line for all 5 days
with the rm redirected to devnull (2> /dev/null).
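The simplified crontab would then look something like:

```
0 14 * * 1-4 /usr/local/sbin/amcheck -c -m genepi
30 12 * * 5  ln -s /dev/nst0 /dev/fridaytape; /usr/local/sbin/amcheck -m genepi
0 19 * * 1-5 /usr/local/sbin/amdump genepi; rm /dev/fridaytape 2> /dev/null
```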

> Note I do the amcheck at 12:30 on Friday so I can leave early if
> possible. :)

Dreamer :)

-- 
Jon H. LaBadie


Re: selfcheck problems with 2.5.0p1

2006-05-02 Thread Steven Sweet
Holy Krap, that was it: "gnutar" versus "GNUTAR".  That's gotta be a FAQ
-- sorry if I missed it.  Thanks for your help!


Paul Bijnens wrote:

On 2006-05-01 22:52, Steven Sweet wrote:

I have just compiled amanda-2.5.0p1 on two systems.  When I run 
amcheck on the server system the server checks out fine, but I get 
this for the client check:


Amanda Backup Client Hosts Check

ERROR: SystemA.local:/ does not support DUMPER-API.
ERROR: SystemA.local: [BOGUS REQUEST PACKET]
Client check: 1 host checked in 0.014 seconds, 1 problem found



You get this error message when the program parameter in the dumptype is
 not "GNUTAR" or "DUMP", e.g. "gnutar" lowercase instead of uppercase.




I can't find any reference to this problem except one indicating a 
mismatch between Amanda versions, which I know isn't the problem in 
this case.  Does anyone know where I should start looking?  Is this a 
configuration problem or a missing requirements problem?








Re: amverifyrun in 2.5.0p1

2006-05-02 Thread Jean-Louis Martineau

Sean,

Could you try this patch.

Jean-Louis

Sean Walmsley wrote:

Has anyone successfully used the amverifyrun utility in version 2.5.0p1?
When I run it, I get the error message:

changer: got exit: 2 str:  Illegal slot: "-1"
amtape: could not load slot : Illegal slot: "-1"
amtape: could not load slot : Illegal slot: "-1"
amtape: pid 11950 finish time Sat Apr 29 22:20:47 2006

Looking at the amverifyrun script, it is grepping through the amdump
output looking for a line of the form:

taper: slot

This worked in 2.4.5p1 because the amdump file contained the following
lines:

-- from 2.4.5p1 amdump file ---
...
changer: opening pipe to: /home/amanda/MEGABAK1/libexec/chg-zd-mtx
-slot current
...
changer: got exit: 0 str: 10 /dev/rmt/0n
taper: slot 10: date Xlabel MBK1_34 (new tape)
taper: read label `MBK1_34' date `X'
taper: wrote label `MBK1_34' date `20060429'
driver: result time 2.658 from taper: TAPER-OK
...
---

It doesn't seem to work in 2.5.0p1 because the amdump file contains only
the following data following the chg-zd-mtx command:

-- from 2.5.0p1 amdump file ---
...
changer: opening pipe to: /home/amanda/MEGABAK1/libexec/chg-zd-mtx
-slot current
...
changer: got exit: 0 str: 11 /dev/rmt/0n
taper: wrote label `MBK1_35' date `20060429'
driver: result time 13.409 from taper: TAPER-OK
...
---

To me, it looks like the following lines of output are simply missing
from the 2.5.0p1 output:

taper: slot 10: date Xlabel MBK1_34 (new tape)
taper: read label `MBK1_34' date `X'

Given this, it would appear that amverifyrun doesn't have a hope of
working properly.

Any assistance you can provide would be much appreciated (whether its
a solution, a "me too", or "it works fine for me").

Thanks,


Sean Walmsley


=
Sean Walmsley [EMAIL PROTECTED]
Nuclear Safety Solutions Ltd.  416-592-4608 (V)  416-592-5528 (F)
700 University Ave M/S H04 J19, Toronto, Ontario, M5G 1X6, CANADA

  


--- amanda-2.5.1b1.new.bsdtcp/server-src/amverifyrun.sh.in	2005-06-03 12:36:29.0 -0400
+++ amanda-2.5.1b1.new.bsdudp/server-src/amverifyrun.sh.in	2006-05-02 12:03:32.0 -0400
@@ -52,10 +52,13 @@ FIRST_SLOT=`grep "taper: slot" $AMLOG | 
 new tape
 first labelstr match' | sed 1q | sed 's/://g' | awk '{print $3}'`
 if [ X"$FIRST_SLOT" = X"" ]; then
-  FIRST_SLOT='-1'
+  FIRST_SLOT=`grep "taper: slot: .* wrote label" $AMLOG | sed 1q | sed 's/://g' | awk '{print $3}'`
+  if [ X"$FIRST_SLOT" = X"" ]; then
+FIRST_SLOT='-1'
+  fi
 fi
 
-NBTAPES=`grep -c "taper: wrote label " $AMLOG`
+NBTAPES=`grep -c "taper: .*wrote label " $AMLOG`
 
 if [ X"$NBTAPES" != X"0" ]; then
   $AMVERIFY $CONFIG $FIRST_SLOT $NBTAPES
--- amanda-2.5.1b1.new.bsdtcp/server-src/taper.c	2006-04-11 10:13:36.0 -0400
+++ amanda-2.5.1b1.new.bsdudp/server-src/taper.c	2006-05-02 12:01:36.0 -0400
@@ -2343,6 +2343,8 @@ int label_tape()
 static int first_call = 1;
 char *timestamp;
 char *error_msg = NULL;
+char *s, *r;
+int slot = -1;
 
    if (taper_scan(NULL, &label, &timestamp, &tapedev, CHAR_taperscan_output_callback, &error_msg) < 0) {
 	fprintf(stderr, "%s\n", error_msg);
@@ -2351,7 +2353,13 @@ int label_tape()
 	amfree(timestamp);
 	return 0;
 }
-
+if(error_msg) {
+	s = error_msg; r = NULL;
+	while(s=strstr(s,"slot ")) { s += 5; r=s; };
+	if(r) {
+	slot = atoi(r);
+	}
+}
 if((tape_fd = tape_open(tapedev, O_WRONLY)) == -1) {
 	if(errno == EACCES) {
 	errstr = newstralloc(errstr,
@@ -2373,7 +2381,14 @@ int label_tape()
 	return 0;
 }
 
-fprintf(stderr, "taper: wrote label `%s' date `%s'\n", label, taper_timestamp);
+if(slot > -1) {
+	fprintf(stderr, "taper: slot: %d wrote label `%s' date `%s'\n", slot,
+		label, taper_timestamp);
+}
+else {
+	fprintf(stderr, "taper: wrote label `%s' date `%s'\n", label,
+		taper_timestamp);
+}
 fflush(stderr);
 
 #ifdef HAVE_LIBVTBLC


Re: Still Problems Restoring from Disk

2006-05-02 Thread Steven Backus
> Perhaps amrecover has an error handler that aborts if the tape device is
> not available, even if it ultimately will not be needed.
> 
> Try running amrecover with the fridaytape link present.

No joy, same problem.

Steve
-- 
Steven J. Backus               Computer Specialist
University of Utah  E-Mail:  [EMAIL PROTECTED]
Biomedical Informatics  Alternate:  [EMAIL PROTECTED]
391 Chipeta Way -- Suite D150   Office:  801.587.9308
Salt Lake City, UT 84108-1266   http://www.math.utah.edu/~backus


Re: Still Problems Restoring from Disk

2006-05-02 Thread Jon LaBadie
On Tue, May 02, 2006 at 11:23:54AM -0600, Steven Backus wrote:
> > Perhaps amrecover has an error handler that aborts if the tape device is
> > not available, even if it ultimately will not be needed.
> > 
> > Try running amrecover with the fridaytape link present.
> 
> No joy, same problem.
> 
Just a hope :(

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Disabling LTO-2 hardware compression

2006-05-02 Thread Michael Loftis

mt -f <device> datcompression 0

That should work, but it will likely come back on after a restart.
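On Linux, one way to make the setting stick across reboots is an entry in /etc/stinit.def (read by stinit from the mt-st package, typically run at boot); the manufacturer/model strings below are illustrative and must match the drive's SCSI inquiry data:

```conf
# /etc/stinit.def -- drive-specific defaults applied by stinit at boot
manufacturer=HP model="Ultrium 2-SCSI" {
    compression=0    # keep hardware compression off by default
}
```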

--On May 2, 2006 10:17:48 AM -0400 Guy Dallaire <[EMAIL PROTECTED]> wrote:


Hi,

Recently added an Overland LoaderXpress LTO 2 tape library to my setup.
Inside the loader is an OEM HP Ultrium tape drive. My tape server is a
centos 4.2 box (RHEL 4 clone)

I'm using software compression with amanda 2.4.5p1 already.

Problem is, the tape drive in the library always seems to have hardware
compression ON. There is no way on the library operator panel to force
the compression OFF.

I've heard that LTO2 drives can easily cope with already-compressed data
(and do not try to re-compress it). Is this true? Otherwise, I fear that
trying to compress already-compressed data might actually use more tape
space and reduce throughput.

I've searched the amanda wiki and did not find anything.

Has anyone devised a way to force hw compression off on a similar setup ?

Thanks




--
"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


2.4.5p1 -> 2.5.0

2006-05-02 Thread Geert Uytterhoeven

last week I (well, Debian testing) upgraded from 2.4.5p1 to 2.5.0.
And suddenly my nightly backups to vtapes started failing with:

| The next tape Amanda expects to use is: DAILY10.
| 
| FAILURE AND STRANGE DUMP SUMMARY:
|   anakin /var lev 3  FAILED [no more holding disk space]
|   ...
|   taper: FATAL could not write tapelist: No space left on device
|   taper: FATAL syncpipe_get: r: unexpected EOF

But I don't use a holding disk, and there was plenty of free space on my
vtape partition.

Then I noticed that / was 100% full (except for the reserved blocks for root).
After making sure there was free space on / for ordinary users, Amanda
continued making backups.

So I guess 2.4.5p1 used root privileges to write to /etc, while 2.5.0
falls back to user backup.

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


error redirecting stderr to fd 51

2006-05-02 Thread McGraw, Robert P.


I still seem to be having a problem getting a good backup. Seems something
always pops up.



planner: build: VERSION="Amanda-2.5.0p1"
planner:BUILT_DATE="Sat Apr 29 15:42:05 EDT 2006"
planner:BUILT_MACH="SunOS zorn.math.purdue.edu 5.10
Generic_118833-03 sun4u sparc SUNW,Sun-Fire-280R"

I noticed that the four "dumping" jobs seemed to hang.

zorn->[309] > amstatus --config daily --date
Using /var/amanda/daily/amdump from Tue May  2 14:30:35 EDT 2006

20060502 bers:/                      0     m finished (14:42:59)
20060502 bessel:/                    0 4592m finished (14:47:27)
20060502 zorn:/export/csw            0 1322m dumping  0m (14:34:14)
20060502 zorn:/export/users-aar      0    0m finished (14:35:43)
20060502 zorn:/export/users-aduchkov 0  448m finished (14:37:09)
20060502 zorn:/export/users-aedquist 0   59m dumping  0m (14:33:43)
20060502 zorn:/export/users-aendicot 0   18m finished (14:31:46)
20060502 zorn:/export/users-agabriel 0  777m dumping  0m (14:34:29)
20060502 zorn:/export/users-nlucier  0 8931m finished (15:10:54)
20060502 zorn:/export/users-rmcgraw  0 1430m dumping  0m (14:33:59)

I went to /tmp/amanda and ran

##R##-zorn->[351] ##> grep -i error *
sendbackup.20060502143344.debug:sendbackup: time 0.000: error redirecting stderr to fd 51: Bad file number
sendbackup.20060502143359.debug:sendbackup: time 0.000: error redirecting stderr to fd 51: Bad file number
sendbackup.20060502143414.debug:sendbackup: time 0.000: error redirecting stderr to fd 51: Bad file number
sendbackup.20060502143429.debug:sendbackup: time 0.000: error redirecting stderr to fd 51: Bad file number

I cat'ed one of the debug files:

##R##-zorn->[360] ##> cat sendbackup.20060502143359.debug
sendbackup: debug 1 pid 23380 ruid 30002 euid 30002: start at Tue May  2 14:33:59 2006
sendbackup: version 2.5.0p1
  parsed request as: program `GNUTAR'
 disk `/export/users-rmcgraw'
 device `/export/fssnap/users'
 level 0
 since 1970:1:1:0:0:0
 options `|;auth=BSD;index;include-file=./rmcgraw;'
sendbackup: time 0.000: error redirecting stderr to fd 51: Bad file number
sendbackup: time 0.000: pid 23380 finish time Tue May  2 14:33:59 2006

1) Can anybody tell me what the "error redirecting stderr to fd 51: Bad file
number" means? I googled the message but found nothing.

2) I do an fssnap of the /export/users directory and mount it on
/export/fssnap/users. This should not be a problem, should it?

3) I also use "estimate calcsize" where possible.

Robert


_
Robert P. McGraw, Jr.
Manager, Computer System EMAIL: [EMAIL PROTECTED]
Purdue University ROOM: MATH-807
Department of MathematicsPHONE: (765) 494-6055
150 N. University Street   FAX: (419) 821-0540
West Lafayette, IN 47907-2067






DLEs with large numbers of files

2006-05-02 Thread Ross Vandegrift
Hello everyone,

I recognize that this isn't really related to Amanda, but I thought
I'd see if anyone has a good trick...

A number of DLEs in my Amanda configuration have a huge number of
small files (sometimes hardlinks and symlinks, sometimes just copies)
- oftentimes in the millions.  Of course this is a classic corner case, and
these DLEs can take a very long time to back up/restore.

Currently, they are mostly using dump (which will usually report
1-3MiB/s throughput).  Is there a possible performance advantage to using
tar instead?

On some of our installations I have bumped up the data timeouts.  I've
got one as high as 5400 seconds.  I suspect a reasonable maximum is
very installation dependent, but if anyone has thoughts, I'd love to
hear them.
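For reference, the bumped timeout lives in amanda.conf on the server; a sketch using the 5400-second value mentioned above (etimeout shown too, since estimates over huge file counts can also run long):

```conf
# amanda.conf -- generous timeouts for DLEs with millions of files
dtimeout 5400    # seconds of client inactivity before a dump is failed
etimeout  900    # seconds allowed per DLE for the estimate phase
```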

Thanks for any ideas!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: DLEs with large numbers of files

2006-05-02 Thread Michael Loftis



--On May 2, 2006 5:49:47 PM -0400 Ross Vandegrift <[EMAIL PROTECTED]> wrote:


Hello everyone,

I recognize that this isn't really related to Amanda, but I thought
I'd see if anyone has a good trick...

A number of DLEs in my Amanda configuration have a huge number of
small files (sometimes hardlinks and symlinks, sometimes just copies)
- oftentimes in the millions.  Of course this is a classic corner case, and
these DLEs can take a very long time to back up/restore.


I use estimated sizes and tar on these types of DLEs.  Dump may be faster
if you can get away with it, but realistically both are limited by what
amounts to stat() calls on the filesystem to ascertain the modification
times of the various files.  You can try a filesystem with better
small-file performance, or upgrade your storage hardware to support more
IOPS.




Currently, they are mostly using dump (which will usually report
1-3MiB/s throughput).  Is there a possible performance advantage to using
tar instead?

On some of our installations I have bumped up the data timeouts.  I've
got one as high as 5400 seconds.  I suspect a reasonable maximum is
very installation dependent, but if anyone has thoughts, I'd love to
hear them.

Thanks for any ideas!

--
Ross Vandegrift
[EMAIL PROTECTED]

"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37





--
"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler


Re: Default restore device issues

2006-05-02 Thread stan
On Tue, May 02, 2006 at 03:19:52PM +0200, Paul Bijnens wrote:
> On 2006-05-02 14:12, stan wrote:
> >I'm trying to get amrecover to default to my changer.
> >
> >I've added the following to my script that runs configure:
> >
> >--with-tape-device=chg-multi \
> >--with-changer-device=chg-multi \
> >
> >made distclean, did a make, and a make install. I'm working on the server
> >for the moment, so I rebooted it just to make certain that I had the
> >correct daemons running.
> >
> >But, still
> >
> >
> >amrecover> extract
> >
> >Extracting files using tape drive /dev/nst0 on host 
> >amanda.meadwestvaco.com.
> > 
> >The following tapes are needed: DailyDump21
> >
> >This is running amrecover without any options, on 2.5.0p1 if it matters.
> >
> >Am I doing something wrong here?
> 
> 
> Did you try the simple and documented parameter in amanda.conf:
> 
>   amrecover_changer  "changer"
> 
> No need at all to recompile and mess things up.
> 
Works perfectly. As a matter of fact this was defined in the master
amanda.conf file, but set to the tape device, which is what was causing
my problems.
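For anyone hitting the same thing, the fix boils down to this server-side amanda.conf setting (per Paul's earlier suggestion), rather than pointing it at the raw tape device:

```conf
# amanda.conf on the server
amrecover_changer "changer"    # not "/dev/nst0" -- let amrecover use the changer
```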

Thanks, again.

-- 
U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite Vietcong 
Terror 
- New York Times 9/3/1967


Re: Default restore device issues

2006-05-02 Thread stan
On Tue, May 02, 2006 at 03:32:35PM +0200, Paul Bijnens wrote:
> On 2006-05-02 14:12, stan wrote:
> >I'm trying to get amrecover to default to my changer.
> >
> >I've added the following to my script that runs configure:
> >
> >--with-tape-device=chg-multi \
> >--with-changer-device=chg-multi \
> >
> >made distclean, did a make, and a make install. I'm working on the server
> >for the moment, so I rebooted it just to make certain that I had the
> >correct daemons running.
> >
> >But, still
> >
> >
> >amrecover> extract
> >
> >Extracting files using tape drive /dev/nst0 on host 
> >amanda.meadwestvaco.com.
> > 
> >The following tapes are needed: DailyDump21
> >
> >This is running amrecover without any options, on 2.5.0p1 if it matters.
> >
> >Am I doing something wrong here?
> 
> 
> Moreover, whatever I try here to reproduce your problem, I can't.
> When the SERVER is version 2.5.0 the amrecover is tried with different
> values until one succeeds:
> 
>1. the device specified with amrecover -d ...
>2. the device specified with amrecover "settape" command
>   or what the index-server returns, which tries these values:
>3. The amrecover_changer param in the amanda.conf on the server
>4. the tpchanger parameter in amanda.conf of the server
>5. the tapedev specified in amanda.conf of the server
> 
> So where do you have /dev/nst0 in that list?
> Are you connecting with the correct "index-server"?
> 
As you can see from my previous message, the problem was caused by my
misconfiguration and (in retrospect) was consistent with the behavior
defined above.

Thanks, once more.

-- 
U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite Vietcong 
Terror 
- New York Times 9/3/1967