Re: Why amrecover takes forever!

2001-10-24 Thread Bernhard R. Erdmann

> It takes 10 hours to restore an 8 MB file.

If you could be a bit more specific...



Re: amflush thinks that amanda directories are actually cruft after upgrading

2001-10-24 Thread Bernhard R. Erdmann

> I just moved my tape drive from one machine to another and I upgraded
> from 2.4.1p2 to 2.4.2p2 at the same time.  I moved my configuration
> files over and all of the logs and curinfo files and the indexes
> (indices?).
> 
> When I look at my holding disk, ./bkup, I see:
> 
> truk!backup 110# ls -l /bkup
> total 20
> drwx--   2 backup   backup   4096 Oct 18 06:30 20011017
> drwx--   2 backup   backup   4096 Oct 21 06:31 20011020
> drwx--   2 backup   backup   4096 Oct 21 18:30 20011021
> drwx--   2 backup   backup   4096 Oct 24 09:36 20011023
> drwxr-xr-x   2 backup   backup   4096 May 11 12:50 lost+found
> truk!backup 111#
> 
> But when I run amflush (as user backup, of course), I get:
> 
> truk!backup 111# amflush daily
> Scanning /bkup...
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
>   : skipping cruft directory, perhaps you should delete it.
> Could not find any Amanda directories to flush.


I think there are some pathnames and maybe even hostnames in the files
on the holding disk.



Re: Data Timeout

2001-10-24 Thread Bernhard R. Erdmann

> 2)  Which timeout value can I adjust in amanda.conf to avoid the data
> timeout error?

amanda.conf, dtimeout
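
For illustration, the relevant amanda.conf lines might look like this (the
values are only examples; dtimeout has to be longer than the longest pause
your slowest client makes while sending data):

etimeout 10800  # seconds allowed per filesystem for the estimate phase
dtimeout 3600   # seconds a running dump may stay silent before the server
                # gives up on it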



Why amrecover takes forever!

2001-10-24 Thread Dengfeng Liu

It takes 10 hours to restore an 8 MB file.




amflush thinks that amanda directories are actually cruft after upgrading

2001-10-24 Thread Jeff Silverman

I just moved my tape drive from one machine to another and I upgraded
from 2.4.1p2 to 2.4.2p2 at the same time.  I moved my configuration
files over and all of the logs and curinfo files and the indexes
(indices?).

When I look at my holding disk, ./bkup, I see:

truk!backup 110# ls -l /bkup
total 20
drwx--   2 backup   backup   4096 Oct 18 06:30 20011017
drwx--   2 backup   backup   4096 Oct 21 06:31 20011020
drwx--   2 backup   backup   4096 Oct 21 18:30 20011021
drwx--   2 backup   backup   4096 Oct 24 09:36 20011023
drwxr-xr-x   2 backup   backup   4096 May 11 12:50 lost+found
truk!backup 111#

But when I run amflush (as user backup, of course), I get:

truk!backup 111# amflush daily
Scanning /bkup...
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
  : skipping cruft directory, perhaps you should delete it.
Could not find any Amanda directories to flush.
truk!backup 112#


My amanda.conf file is:

truk!backup 112# more /usr/local/etc/amanda/amanda.conf
org "RCS Backup System" # your organization name for reports
mailto "admin"  # space separated list of operators at your site
dumpuser "backup" # the user to run dumps under
inparallel 4  # maximum dumpers that will run in parallel
netusage  8000 Kbps # maximum net bandwidth for Amanda, in KB per second

   # increased from 1800 Kbps 2000-10-29 JHS
   # increased from 5000 Kbps to 8000 Kbps 2001-06-29 JHS
# See http://www.backupcentral.com/amanda-10.html for details.
dumpcycle 14 days # the number of days in the normal dump cycle
runspercycle 14 # the number of amdump runs in dumpcycle days
   # (2 weeks * 7 amdump runs per week )
tapecycle 50 tapes # the number of tapes in rotation
   # Then, I changed it to 50 because that's how many tapes I have.
   # Keep 22 tapes in the changer, at least 14 of which are recent backups
   # Not more than 8 of which are old tapes to be backed up
   # The remaining 28 tapes are off site backup, or 6 weeks of archival storage
# WARNING: don't use `inf' for tapecycle, it's broken!

bumpsize 20 Mb  # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1  # minimum days at each level
bumpmult 4  # threshold = bumpsize * bumpmult^(level-1)

etimeout 10800  # number of seconds per filesystem for estimates.
   # increased from 2400 seconds 2000-12-22 because spinoza
   # keeps timing out.
   # Changed from 3600 (1 hour) to 10800 (3 hours) because
   # spinoza still keeps timing out.
# dtimeout 3600  # Data timeout, added 2001-07-25 because spinoza times out, still
runtapes 2  # The maximum number of tapes used in a single run. (was 1, increased 2902 /JHS)

tpchanger "chg-zd-mtx"
tapedev "/dev/nst0"
changerfile "/usr/local/etc/amanda/changer"
changerdev "/dev/sg1"
diskfile "/usr/local/etc/amanda/daily/disklist"
tapelist "/usr/local/etc/amanda/daily/tapelist"

tapetype DLT
labelstr "^daily-[0-9][0-9][0-9]*$"

infofile "/usr/local/share/amanda/curinfo" # database DIRECTORY
logdir   "/usr/local/share/amanda/logs"  # log directory
indexdir "/usr/local/share/amanda/index" # index directory

includefile "/usr/local/etc/amanda/holding-disk.conf"
includefile "/usr/local/etc/amanda/tapetypes.conf"
includefile "/usr/local/etc/amanda/dumptypes.conf"
includefile "/usr/local/etc/amanda/interfaces.conf"



and my holding-disk.conf file is:

truk!backup 113# cat /usr/local/etc/amanda/holding-disk.conf
holdingdisk hd1 {
comment "Main holding disk"
directory "/bkup" # where the holding disk is
use 64000 Mb # how much space can we use on it
   # a negative value means:
   #use all space except that value
chunksize 2000 Mb  # size of chunk if you want big dump to be
   # dumped on multiple files on holding disks
   #  N Kb/Mb/Gb split disks in chunks of size N
   #  0  split disks in INT_MAX/1024 Kb chunks
   # -1  same as -INT_MAX/1024 (see below)
   # -N Kb/Mb/Gb don't split, dump larger
   # filesystems directly to tape
   # (example: -2 Gb)
# chunksize 2 Gb
}
truk!backup 114#



Does anybody have any advice?


Many thanks,


Jeff


---
Jeff Silverman, sysadmin for the Research Computing Systems (RCS)
University of Washington, School of Engineering, Electrical Engineering Dept.
Box 352500, Seattle, WA, 98125-2500 FAX: (206) 221-5264 Phone (206) 221-5394
[EMAIL PROTECTED] http://rcs.ee.washington.edu/~jeffs






RE: AMANDA WEBSITE ???

2001-10-24 Thread Mitch Collinsworth


On Wed, 24 Oct 2001, Rivera, Edwin wrote:

> we may have to reboot the internet for this.

Which of course means we first have to send a notice to all users...

-Mitch




Re: tapecycle off by 1?

2001-10-24 Thread Chris Marble

Sean Noonan wrote:
> 
> In my amanda.conf, I have the following:
> 
> tapecycle 135 tapes
> 
> But here's what the report said: "These dumps were to tapes love129,
> love130, love131.  The next 6 tapes Amanda expects to used are: a new tape,
> love132, love133, love134, love1, love2.

Did you give it love132, love133, love134, love135, love1 and love2?
Did it like those 6 tapes?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager



RE: AMANDA WEBSITE ???

2001-10-24 Thread Rivera, Edwin

we may have to reboot the internet for this.

-Original Message-
From: Hilton [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 24, 2001 1:35 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: AMANDA WEBSITE ???


To whom it may concern.

I would like to access your website but it seems to not exist.

There are NO FILES ON YOUR WEB SERVER. All that exists is one file in the
patches directory.

Hope this is not news for you.

Regards,

_
Hilton Rosenfeld - Information Systems Administrator
Babylon.com, Information @ a Click
Visit us at: http://www.babylon.com



RE: NT backups

2001-10-24 Thread Rivera, Edwin

http://sourceforge.net/projects/amanda-win32/

it's a bit tricky to get working...

-edwin

-Original Message-
From: Stephen Carville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 24, 2001 3:17 PM
To: Amanda Users
Subject: NT backups


I recently read a message here about a native NT client (doesn't
require samba) for backing up NT boxes using Amanda but I cannot find
the message in Google or the archives.  Can someone point me to a web
or ftp site?

-- 
-- Stephen Carville
UNIX and Network Administrator
Ace Flood USA
310-342-3602
[EMAIL PROTECTED]



Re: NT backups

2001-10-24 Thread Dietmar Goldbeck

On Wed, Oct 24, 2001 at 12:17:24PM -0700, Stephen Carville wrote:
> I recently read a message here about a native NT client (doesn't
> require samba) for backing up NT boxes using Amanda but I cannot fond
> the message in Google or the archives.  Can someone point me to a web
> or ftp site?
> 

http://sourceforge.net/projects/amanda-win32/

-- 
 Alles Gute / best wishes  
 Dietmar GoldbeckE-Mail: [EMAIL PROTECTED]
Reporter (to Mahatma Gandhi): Mr Gandhi, what do you think of Western
Civilization?  Gandhi: I think it would be a good idea.



RE: gzip running when "compress none"

2001-10-24 Thread Amanda Admin

Are you indexing your backups?

Amanda compresses the index files stored on the server. Amanda may also
compress other process-oriented (not backup) files on the server, indexes
are the only ones I'm certain of though.

HTH

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of David Chin
> Sent: Wednesday, October 24, 2001 8:48 AM
> To: [EMAIL PROTECTED]
> Subject: gzip running when "compress none"
>
>
>
> Howdy,
>
> I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a
> Linux box
> acting as server.  On the server, there is a "gzip --best"
> process running
> even though I have "compress none" in the "global" configuration.
>  Is this
> normal?
>
> --Dave Chin
>   [EMAIL PROTECTED]
>




NT backups

2001-10-24 Thread Stephen Carville

I recently read a message here about a native NT client (doesn't
require samba) for backing up NT boxes using Amanda but I cannot find
the message in Google or the archives.  Can someone point me to a web
or ftp site?

-- 
-- Stephen Carville
UNIX and Network Administrator
Ace Flood USA
310-342-3602
[EMAIL PROTECTED]




AMANDA WEBSITE ???

2001-10-24 Thread Hilton

To whom it may concern.

I would like to access your website but it seems to not exist.

There are NO FILES ON YOUR WEB SERVER. All that exists is one file in the
patches directory.

Hope this is not news for you.

Regards,

_
Hilton Rosenfeld - Information Systems Administrator
Babylon.com, Information @ a Click
Visit us at: http://www.babylon.com




Re: data timeout

2001-10-24 Thread Bernhard R. Erdmann

> >>   driver: fs.rocnet.de /dev/sda6 0 [dump to tape failed, will try again]
> BRE> I assume this is Linux' dump. What version are you using? Why don't
> BRE> you use a holding disk instead of writing directly to tape?
> 
> Do you think that's because I'm using no holding disk? I will try it. But I
> don't use a holding disk because I dump all partitions, and at some point I
> would dump the partition with the holding disk on it. Are there any problems
> with dumping the holding disk partition?

Yes, there are: you can't dump the filesystem the holding disk lives on to
the holding disk itself. You'd have to use a special parameter in amanda.conf
for it, which results in dumping that filesystem directly to tape.
Go get the latest dump, 0.4b24.
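
If I remember right, the knob is the holdingdisk option in the dumptype used
for that filesystem; a rough sketch (the dumptype names here are made up,
adjust to whatever you already define):

define dumptype comp-user-tar-direct {
    comp-user-tar      # inherit your usual options
    holdingdisk no     # send this filesystem straight to tape
}

Then point the disklist entry for the holding-disk filesystem at that
dumptype.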



Sorry if this is an FAQ [Using amanda with no tape drive]

2001-10-24 Thread Adam Haberlach

I'm trying to use amanda to back up my colocated Solaris server to
a local machine, and I don't have a tape drive.  I do have plenty of
hard drive space handy, however.

I've tried and tried, but I cannot seem to get amanda to spool the
backups to disk.  I typically end up with some variant of "could not
rewind tape" or "no tape in drive".

Any hints?

-- 
Adam Haberlach | Computer science has the unfortunate characteristic
[EMAIL PROTECTED]| that many of the pioneers of the field are still
   | alive and doing research.
   |-- Adam Rifkin



yahoogroups archive restoration

2001-10-24 Thread Jon LaBadie

The archives at yahoogroups were not recording postings
for most of the third quarter (~200 articles when the
normal flow is >2000).

I have a pretty complete set of list articles covering
the last 9 quarters (for this need, nicely grouped by Q).

Has anyone information as to whether the missing articles
can be added back into the archives at yahoogroups?  Or
who to contact there, or on this list, for further info?

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing   [EMAIL PROTECTED]
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Re: gzip running when "compress none"

2001-10-24 Thread Mitch Collinsworth

 
> I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box 
> acting as server.  On the server, there is a "gzip --best" process running 
> even though I have "compress none" in the "global" configuration.  Is this 
> normal?

If you are indexing, yes.  The indexes are compressed.
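
For example, you can peek at one of them on the server with zcat; the path
and file name below are only illustrative, your indexdir and naming may
differ:

zcat /usr/local/share/amanda/index/client/_home/20011024_0.gz | head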

-Mitch




amverify=no, amrecover=yes, tar/gtar

2001-10-24 Thread X X

Hi.

I have an odd problem (I think, at least).
Things work OK when backing up and when recovering, but when I try to run
amverify I get something like this:

Waiting for device to go ready...
Rewinding...
Processing label...
Rewinding...
Rewinding...
Daily-001 ():
checking 192.168.0.2._home.20011024.0

funny part**

skipped 192.168.0.4._etc.20011024.0 (** Cannot do /bin/tar dumps)

end funny part*

st0:Error with sense data: info fld=0x40, current
st09:00:sense key medium error
Additional sense indicates Block sequence error.
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Daily-001 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Daily-001 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Daily-001 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Daily-001 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Daily-001 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out

I get no logs on this anywhere.

I'm running amanda 2.4.2p2-1 on Red Hat Linux 7.1 with gtar 1.13.19,
and tar is accessible as /bin/tar.

Why would amverify mess things up when amrecover works OK?

An important thing here is that I run root-tar on /etc, and that is what fails.
The other entries, which I run as comp-user, look OK until it hits /etc above.


Anyone that can help?
btw - have a nice day.

_
Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp




gzip running when "compress none"

2001-10-24 Thread David Chin


Howdy,

I'm running amanda 2.4.2p2 on RH7.1 Linux and HP-UX 10.20, with a Linux box 
acting as server.  On the server, there is a "gzip --best" process running 
even though I have "compress none" in the "global" configuration.  Is this 
normal?

--Dave Chin
  [EMAIL PROTECTED]




RE: Problems with amverify and amrecover - more info

2001-10-24 Thread Michael Sobik

If I try:

amrestore -p /dev/ no-such-host > /dev/null

amrestore correctly lists all the dump images on the tape.  However, if I
try and restore with:

amrestore -p /dev/   > /dev/null

I get:

amrestore:  3: restoring realhostname.diskimage.0

the console seems to hang and NO lights flash on the drive, it just sits
there with Ready status.  Once again, I can get the image with mt, dd, and
tar.

Mike
- Original Message -
From: "Michael Sobik" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 24, 2001 6:25 AM
Subject: Problems with amverify and amrecover


> All,
>
> Thanks to the help of this list, I have just recently configured Amanda and
> run my first backups.  However, I'm experiencing problems with amverify and
> amrecover.  The morning after the first backups ran, I tried to run amverify
> on the first tape of the run.  Here's the report I got back from amverify
> (ran for at least an hour):
>
> Tapes:  linuxman11
> Errors found:
> linuxman11 (linuxman._mnt_share_Angela.20011023.0):
> amrestore:   0: skipping start of tape: date 20011023 label linuxman11
> amrestore:   1: restoring linuxman._mnt_share_Angela.20011023.0
> amrestore: read error: Input/output error
> /bin/tar: Unexpected EOF in archive
> /bin/tar: Error is not recoverable: exiting now
> 64+0 records in
> 64+0 records out
> linuxman11 ():
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> linuxman11 ():
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> linuxman11 ():
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> linuxman11 ():
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> linuxman11 ():
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> amtape: could not load slot 2: Unloading Data Transfer Element into Storage
> Element 1...mtx: Request Sense: Long Report=yes
> amtape: could not load slot 2: Unloading Data Transfer Element into Storage
> Element 1...mtx: Request Sense: Long Report=yes
>
> amverify linuxman
> Tue Oct 23 17:34:44 MST 2001
>
> Loading 1 slot...
> Using device /dev/nst0
> Volume linuxman11, Date 20011023
> ** Error detected (linuxman._mnt_share_Angela.20011023.0)
> amrestore:   0: skipping start of tape: date 20011023 label linuxman11
> amrestore:   1: restoring linuxman._mnt_share_Angela.20011023.0
> amrestore: read error: Input/output error
> /bin/tar: Unexpected EOF in archive
> /bin/tar: Error is not recoverable: exiting now
> 64+0 records in
> 64+0 records out
> ** Error detected ()
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> ** Error detected ()
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> ** Error detected ()
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> ** Error detected ()
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> ** Error detected ()
> amrestore: error reading file header: Input/output error
> ** No header
> 0+0 records in
> 0+0 records out
> Too many errors.
> Loading next slot...
> ** Error loading slot next
> amtape: could not load slot 2: Unloading Data Transfer Element into Storage
> Element 1...mtx: Request Sense: Long Report=yes
> Advancing past the last tape...
> ** Error advancing after last slot
> amtape: could not load slot 2: Unloading Data Transfer Element into Storage
> Element 1...mtx: Request Sense: Long Report=yes
>
> After it finished, the tape drive was flashing an error and requested a
> cleaning.  I cleaned the drive and let amdump run last night.  Everything
> completed successfully, so I tried amverify on today's tape.  This time the
> machine locked up solid...no mouse, no keyboard, nothing.
>
> I also tried to run amrecover.  Everything worked great up until I tried to
> do the actual extract.  I loaded the tape and amrecover began
> rewinding/seeking the drive, but then just seemed to hang.  I rebooted.
>
> Anyone have any idea as to where I should begin looking for the problem?
> Drive issue, SCSI problem? I know it's not actual errors on the tape since I
> CAN recover the data manually using mt, dd, and tar.  I also know the index
> for each disklist entry is correct since I manually gunzipped them and
> amrecover requested the correct tape for the file I requested.  Thanks for
> the help.
>
> Mike
>



incomplete list of directories using amrecover?

2001-10-24 Thread J Ctt

I am using amanda server 2.4.2p2-1 to back up a remote
Solaris system. Backups of the remote client appear to
be running properly: index records for the remote
host reflect a complete list of directories/files, and
the amount of data backed up (for full backups)
corresponds with directory sizes on the remote
computer. However, when I use amrecover to view a list
of files available for recovery I only see about half
of the directories that really exist on the remote
system. I don't get any errors using sethost, setdisk,
and setdate within amrecover, and I don't have this
problem with directories that are backed up locally.
I'm unsure of what I need to do next and I'd
appreciate any suggestions...


__
Do You Yahoo!?
Make a great connection at Yahoo! Personals.
http://personals.yahoo.com



No Subject

2001-10-24 Thread Andrei Neagoe

unsubscribe



Problems with amverify and amrecover

2001-10-24 Thread Michael Sobik

All,

Thanks to the help of this list, I have just recently configured Amanda and
run my first backups.  However, I'm experiencing problems with amverify and
amrecover.  The morning after the first backups ran, I tried to run amverify
on the first tape of the run.  Here's the report I got back from amverify
(ran for at least an hour):

Tapes:  linuxman11
Errors found:
linuxman11 (linuxman._mnt_share_Angela.20011023.0):
amrestore:   0: skipping start of tape: date 20011023 label linuxman11
amrestore:   1: restoring linuxman._mnt_share_Angela.20011023.0
amrestore: read error: Input/output error
/bin/tar: Unexpected EOF in archive
/bin/tar: Error is not recoverable: exiting now
64+0 records in
64+0 records out
linuxman11 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
linuxman11 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
linuxman11 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
linuxman11 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
linuxman11 ():
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
amtape: could not load slot 2: Unloading Data Transfer Element into Storage
Element 1...mtx: Request Sense: Long Report=yes
amtape: could not load slot 2: Unloading Data Transfer Element into Storage
Element 1...mtx: Request Sense: Long Report=yes

amverify linuxman
Tue Oct 23 17:34:44 MST 2001

Loading 1 slot...
Using device /dev/nst0
Volume linuxman11, Date 20011023
** Error detected (linuxman._mnt_share_Angela.20011023.0)
amrestore:   0: skipping start of tape: date 20011023 label linuxman11
amrestore:   1: restoring linuxman._mnt_share_Angela.20011023.0
amrestore: read error: Input/output error
/bin/tar: Unexpected EOF in archive
/bin/tar: Error is not recoverable: exiting now
64+0 records in
64+0 records out
** Error detected ()
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
** Error detected ()
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
** Error detected ()
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
** Error detected ()
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
** Error detected ()
amrestore: error reading file header: Input/output error
** No header
0+0 records in
0+0 records out
Too many errors.
Loading next slot...
** Error loading slot next
amtape: could not load slot 2: Unloading Data Transfer Element into Storage
Element 1...mtx: Request Sense: Long Report=yes
Advancing past the last tape...
** Error advancing after last slot
amtape: could not load slot 2: Unloading Data Transfer Element into Storage
Element 1...mtx: Request Sense: Long Report=yes

After it finished, the tape drive was flashing an error and requested a
cleaning.  I cleaned the drive and let amdump run last night.  Everything
completed successfully, so I tried amverify on today's tape.  This time the
machine locked up solid...no mouse, no keyboard, nothing.

I also tried to run amrecover.  Everything worked great up until I tried to
do the actual extract.  I loaded the tape and amrecover began
rewinding/seeking the drive, but then just seemed to hang.  I rebooted.

Anyone have any idea as to where I should begin looking for the problem?
Drive issue, SCSI problem? I know it's not actual errors on the tape since I
CAN recover the data manually using mt, dd, and tar.  I also know the index
for each disklist entry is correct since I manually gunzipped them and
amrecover requested the correct tape for the file I requested.  Thanks for
the help.

Mike



Re: NEWBIE> Configuring Amanda w/ DAT drive

2001-10-24 Thread Chris Dahn

On Tuesday 23 October 2001 06:07 pm, John wrote:
> Hello,
>
> I would like to start using amanda to handle backups on my lan.
>  However, I am not sure how to handle it.  I have a 4MM Dat drive w/ 4
> tape library (Archive DAT).  Can amanda be configured to support each of
> the 4 tapes individually or is it limited to only treat it as one 16GB
> logical tape?
>
> TIA
>
> --
> John C. Wingenbach

  It uses them individually, but you can configure amanda to switch to a new 
tape when the current tape runs out. We have an Overland autochanger here, 
and I'm using it as a tape stacker with chg-multi. There are scripts 
available to also control it as a changer. You can use mtx to move tapes 
around too.
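
For comparison, the changer-related part of amanda.conf is only a few lines.
Something roughly like this (paths and the changerfile name are illustrative;
chg-zd-mtx is the script that drives the robot through mtx):

tpchanger "chg-multi"      # or "chg-zd-mtx" to let Amanda drive the robot
changerfile "/usr/local/etc/amanda/chg-multi.conf"
tapedev "/dev/nst0"
runtapes 4                 # allow one amdump run to span all four tapes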

-- 

<->Software Engineering Research Group<->
Feel the SERG!
http://serg.mcs.drexel.edu/
CAT 186, The Microwave
http://pgp.mit.edu:11371/pks/lookup?search=Christopher+Dahn&op=index



Re: data timeout

2001-10-24 Thread Chris Dahn

> Upgrade your version of dump to the latest.  I used to sporadically see
> data timeouts with dump on Redhat 7.1.
>
> Alternatively, switch to tar, which I did for all our Linux hosts and
> haven't seen any data timeouts since.

  Yes, I also had problems with dump timing out on RH 7.1; tar works great,
though.  When I had this problem, people on the list pointed to Linus saying
that dump should not be expected to always work correctly on 2.4.x kernels.
Perhaps this should be included in the Amanda documentation?
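
For anyone making the switch, a dumptype along these lines (the name is
arbitrary, and "global" is whatever base dumptype you already define) tells
Amanda to use GNU tar instead of dump for every disk that references it:

define dumptype user-tar {
    global
    program "GNUTAR"
    comment "back up with GNU tar instead of dump"
}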

-- 

Chris Dahn, SERG Code Ninja
  3141 Chestnut St.
  Attn: MCS Department
  Philadephia, PA  19104
  Office: 215.895.0203
  Fax: 215.895.1582

<->Software Engineering Research Group<->
Feel the SERG!
http://serg.mcs.drexel.edu/
CAT 186, The Microwave
http://pgp.mit.edu:11371/pks/lookup?search=Christopher+Dahn&op=index



No Subject

2001-10-24 Thread Paul-Hus Diane

unsubscribe



More than one page worth of dumps on report

2001-10-24 Thread Stan Brown

I'm backing up 99 filesystems. I am using one of the PostScript forms
(I forget which one) to print out a report at the end of the run. Clearly
99 won't fit on one page.

Suggestions?

-- 
Stan Brown [EMAIL PROTECTED]843-745-3154
Charleston SC.
-- 
Windows 98: n.
useless extension to a minor patch release for 32-bit extensions and
a graphical shell for a 16-bit patch to an 8-bit operating system
originally coded for a 4-bit microprocessor, written by a 2-bit 
company that can't stand for 1 bit of competition.
-
(c) 2000 Stan Brown.  Redistribution via the Microsoft Network is prohibited.



Data Timeout

2001-10-24 Thread Chris Heiser


I'm currently using GTAR for Amanda backups.  The only problem I'm still
having is a [data timeout] error on one of the directories.  I can only
assume this is because there are hundreds of thousands of files under this
directory and it takes GTAR forever to get data moving for amandad.

1)  Is there any way I can get GTAR to respond more quickly?  I'd use DUMP,
however this is on a Linux 2.2.x kernel and dump gets really pissed about
live filesystems.

2)  Which timeout value can I adjust in amanda.conf to avoid the data
timeout error?

Chris Heiser
Communications Systems Engineer
Sentito Networks




unsubscribe an@secure-net.ro

2001-10-24 Thread Andrei Neagoe

unsubscribe [EMAIL PROTECTED]



selfcheck request timed out. Host down? error

2001-10-24 Thread Walker, Craig

Hi,

I am having a problem backing up one of my UNIX boxes with Amanda.  When I
run amcheck I get this error:

WARNING: bill: selfcheck request timed out.  Host down?
Client check: 7 hosts checked in 29.678 seconds, 1 problem found.

As you can see, the check runs successfully on 6 hosts but times out on 1.
The host is not down, as I can ping and rlogin to it without any problem.
It also has the appropriate entries in /etc/services and /etc/inetd.conf.
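
(For reference, those entries usually look something like the following; the
amandad path and the dump user are site-specific:

# /etc/services
amanda          10080/udp
amandaidx       10082/tcp
amidxtape       10083/tcp

# /etc/inetd.conf
amanda dgram udp wait backup /usr/local/libexec/amandad amandad

The amandaidx and amidxtape entries are only needed on the tape server for
amrecover.)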

I realise that there is a section in the Faq-O-Matic about this problem, but
since it is offline at the moment, can someone please help me out with this
problem?

Regards,

Craig Walker




Re: data timeout

2001-10-24 Thread Joshua Baker-LePain

On 23 Oct 2001 at 11:39pm, Claus Rosenberger wrote:

> I have a lot of problems with data timeouts while backing up my local
> partitions. The installation is very simple: I only back up my local
> machine, and there are a few small partitions. Sometimes it works, sometimes
> not. I use Red Hat 7.0 and Amanda 2.4.2p2 and checked the dtimeout
> parameter; I used 3600 one time and 7200 another.
> 
Upgrade your version of dump to the latest.  I used to sporadically see 
data timeouts with dump on Redhat 7.1.

Alternatively, switch to tar, which I did for all our Linux hosts and 
haven't seen any data timeouts since.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




No Subject

2001-10-24 Thread Andrei Neagoe

unsubscribe



Re: multiboot hosts

2001-10-24 Thread Johannes Niess

Jon LaBadie <[EMAIL PROTECTED]> writes:

>   
>   Sorry if this was also included in a garbage message.
>   I've got to learn how to use this mail reader :))
>   
> 
> I'm just brainstorming at the moment.
> 
> My laptop is setup to multiboot; three OS's,
> Win2K, RH 7.1, and Solaris 8.  None are being
> backed up by amanda at the moment.
> 
> I'm wondering how best to get it (the laptop)
> and them (the 3 os's) into a backup system that
> expects a host to be on the network at dump time
> and have the same os and directory organization
> each time a dump is done.
> 
> Any ideas or experiences welcome.

If you have the disk space for a complete disk mirror on a desktop,
you could rsync all OSes at Linux (/Solaris?) bootup and back up the
mirrored data.
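
A rough sketch of that approach (the host name, mount point and paths are
made up): from the Linux side after boot, run something like

rsync -a --delete --exclude /proc --exclude /tmp /  desktop:/mirrors/laptop/linux/
rsync -a --delete /mnt/win2k/  desktop:/mirrors/laptop/win2k/

and then let Amanda back up /mirrors/laptop on the desktop.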

If you are guaranteed to be on the network at dump time (with any OS),
you could choose Samba (with ext2 and Solaris drivers for NT) as the
lowest common denominator and fiddle with DFS (Microsoft Distributed
File System) to get a common share structure across all OSes.

Johannes Niess



unsubscribe rotol@emsp.no

2001-10-24 Thread Roy-Andrè Tollefsen




unsubscribe [EMAIL PROTECTED]
Mvh / best regards,
Roy Andre Tollefsen
System Consultant
EM Software Partners AS
Direct phone: +47 51 96 98 92
Fax: +47 51 96 98 98
Web: www.emsp.no