Sun Tar vs GNU Tar

2000-12-01 Thread Robert L. Harris


I'm running a strictly Sun setup here.  Everything looks good and is
happy.  I've seen all the talk about GNU tar.  Is there any
functionality or speed reason I should change out my tar?

Robert
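For context, the case for GNU tar over Sun's bundled tar is features rather than speed: exclusion patterns, listed-incremental dumps, and long-pathname support.  A minimal sketch of the two features backup setups lean on most (it assumes GNU tar is on the PATH as `tar` — on Solaris it is usually installed as `gtar` — and the /tmp paths are made up):

```shell
# Demonstrate two GNU-tar-only features: exclusion patterns and
# listed-incremental snapshots.  Paths under /tmp are illustrative.
mkdir -p /tmp/tardemo/keep /tmp/tardemo/skip
echo data > /tmp/tardemo/keep/file.txt
echo junk > /tmp/tardemo/skip/cache.tmp

# --exclude is a GNU extension; Sun tar has no equivalent.
tar -cf /tmp/demo.tar --exclude='*.tmp' -C /tmp tardemo

# --listed-incremental records state in a snapshot file, so the next
# run with the same snapshot picks up only what changed.
tar -cf /tmp/demo-level0.tar --listed-incremental=/tmp/demo.snar -C /tmp tardemo

tar -tf /tmp/demo.tar
```

Sun tar has neither option, which is also why Amanda's GNUTAR program type specifically requires GNU tar.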


--

:wq!
-------
Robert L. Harris            |  Micros~1:
Unix System Administrator   |    For when quality, reliability
  at Agency.com             |      and security just aren't
                            \_     that important!
DISCLAIMER:
  These are MY OPINIONS ALONE.  I speak for no-one else.
FYI:
 perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'






Next tape?

2000-11-30 Thread Robert L. Harris


I just got this in my report today:

The dumps were flushed to tape TAPE25.
The next tape Amanda expects to use is: a new tape.


STATISTICS:
  Total   Full  Daily
      
.
.
.


tapecycle is set to 25 in my amanda.conf.  It should be expecting TAPE01
for tonight.  Is this going to cause a problem?

Robert








Re: Why is it just sitting here?

2000-11-28 Thread Robert L. Harris


Mine did this.  It turned out the filesystem being dumped was just a
little bigger than the free space on my dump disk.  What turned out to be
happening is that I was doing server compression, so it was sending the
filesystem to my server to compress, and it was then being written to
tape directly.  It wasn't streaming fast enough for me to see without
sitting and watching the drive for quite a while.  I literally
had to cancel that night's backups, and I let it run out of curiosity.

It finished and then the level 1 the next night ran great.

Do an "mt status -f /dev/".  It will tell you if the device is locked
or busy.

Eric Wadsworth wrote:

> I started a backup last night, manually running 'amdump DailySet1'. Now this
> morning, amstatus reports that it is all done, except for one thing (the other
> lines say "finished"):
>
> navajo.hq.consys.com://vekol/data  0 2715147k dumping to tape
>
> The tape drive doesn't seem to be doing anything, usually the little green light
> flashes when it's active. Right now it's just sitting there. The holding disk
> just has an empty subdirectory in it. It seems to be stuck somehow. The hard
> disk that //vekol/data is on isn't even being accessed, it's quiet (makes lots
> of noise when it's being backed up).
>
> Why is this one trying to dump directly to tape, instead of to the holding disk
> first?
> Why would it stall out like it seems to have done?
> If it is actually stuck, I suppose I can kill the amdump process with a ctrl-C
> command, but then a flush wouldn't do anything, because this data isn't on the
> holding disk, right? I would have to do an amcleanup. And I wouldn't get the
> emailed report, either, would I?
> Is there any way of gracefully regaining control without just killing amdump?
>
> Following is information for you experts out there who might have ideas. Thanks
> in advance for any and all advice!
>
> --- Eric Wadsworth
> [EMAIL PROTECTED]
>
> My holding disk has sixty gigabytes of available space (that's not a typo).
> The tape can hold 20 gigs (40 compressed).
>
> Information on this particular samba share:
> This happens to be a directory on my own NT workstation. The 'data' samba share
> is 7.2 Gigs in size, but some portions of this share have their permissions set
> such that the backup operators cannot read them (security policy requires that a
> particular project not be included in the backup) leaving about 5 gigs of data
> to back up. I'm using compression, so the 2.7 gig number makes sense. Several
> other NT boxes have similar exclusions, and they backed up fine.
>
> Here's the summary from amstatus. (Ignore the 2 failed entries, the user of that
> NT box did a "shutdown" instead of logging out. Silly users.)
>
> SUMMARY          part       real  estimated
>                              size       size
> partition       :  39
> estimated       :  37             14376628k
> failed          :   2                    0k
> wait for dumping:   0                    0k
> dumping to tape :   1              2715147k
> dumping         :   0         0k         0k
> dumped          :  36  12531488k  11661481k
> wait for writing:   0         0k         0k
> writing to tape :   0         0k         0k
> failed to tape  :   0         0k         0k
> taped           :  36  12531488k  11661481k
> 3 dumpers idle  : not-idle
> taper writing, tapeq: 0
> network free kps: 1970
> holding space   : 53545924
>
> Here's the pertinent line from disklist:
> navajo.hq.consys.com //vekol/deneb css-nt-workstations
>
> Here's some lines from amanda.conf:
> define dumptype css-global {
> comment "CSS Global definitions"
> index yes
> program "GNUTAR"
> }
>
> define dumptype css-nt-workstations {
> css-global
> comment "User's Windows NT workstations"
> priority medium
> compress client best
> }







Re: bzip2 support?

2000-11-16 Thread Robert L. Harris


I would love to have bzip2.


Josh Huston wrote:

> We can add it easily into configure.in to include a new configure option
> "--with-bzip2" to enable bzip2 instead of gzip.  If you know how to work
> with autoconf tools and how to modify configure.in then feel free to add it
> in yourself.
>
> I'm currently working on a small project for Amanda and I could add it in
> for you if there is enough interest from everybody else.  Unless J.R.
> Jackson wants to do that himself.  :-)
>
> Josh
>
> John Goerzen wrote:
> >
> > Hi,
> >
> > I'd like to use bzip2 with amanda for two reasons:
> >
> > 1. It compresses a lot better than gzip
> >
> > 2. More importantly -- data recovery is possible from a damaged tape.
> >
> > From bzip2's manual:
> >
> >   bzip2 compresses files in blocks, usually 900kbytes long. Each
> >   block is handled independently. If a media or transmission error
> >   causes a multi-block .bz2 file to become damaged, it may be
> >   possible to recover data from the undamaged blocks in the file.
> >
> >   The compressed representation of each block is delimited by a
> >   48-bit pattern, which makes it possible to find the block
> >   boundaries with reasonable certainty. Each block also carries
> >   its own 32-bit CRC, so damaged blocks can be distinguished from
> >   undamaged ones.
> >
> > With gzip and compress, in general, once a media error is encountered,
> > the entire remainder of the file (tar or dump in our case) is
> > unusable.  This is why people generally shun gzip/compress for
> > backups, unless it's done on a per-file basis to lessen the severity
> > of errors.  Of course, doing it on a per-file basis also lessens the
> > effectiveness of compression.  The only tradeoff with bzip2 is CPU
> > time: it is more CPU-intensive than gzip or compress.  However, when
> > the reliability of backups is at stake, that's a tradeoff I'm quite
> > willing to make.
> >
> > In amanda, the configuration file seems to think that the user isn't
> > smart enough to manually specify a compression program :-)  I'd much
> > rather specify "gzip -9" and "gunzip" than specify "best compression"
> > or whatnot.  Are there any plans to support this?  If there were, it
> > would be trivial to drop in bzip2 instead of gzip with amanda.
> >
> > Are there any plans to support this?  I suppose I could just go into
> > the code with sed and s/gzip/bzip2/ but I'd prefer to do it more
> > elegantly if possible :-)
> >
> > -- John
> >
> > --
> > John Goerzen <[EMAIL PROTECTED]>   www.complete.org
> > Sr. Software Developer, Progeny Linux Systems, Inc.www.progenylinux.com
> > #include  <[EMAIL PROTECTED]>
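The block independence quoted from the manual can be exercised with the stock bzip2 command-line tools; a small sketch (the file names under /tmp are made up):

```shell
# bzip2 compresses in independent blocks (~900 KB at -9), each carrying
# its own 32-bit CRC, so bzip2recover can salvage the intact blocks of a
# damaged archive.
head -c 100000 /dev/urandom > /tmp/sample.dat
bzip2 -9 -k /tmp/sample.dat        # -k keeps the original for comparison
bzip2 -t /tmp/sample.dat.bz2       # integrity check: verifies every block CRC
# After media damage you would run:
#   bzip2recover /tmp/sample.dat.bz2
# which writes each recoverable block out as its own small .bz2 file.
```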







Re: Amanda isn't generating indexes.

2000-11-15 Thread Robert L. Harris


Yesterday I had put "index yes" in my global dumptype, though.  I just
put it in the individual dumptypes.  We'll see what happens in the morning.

Robert

"John R. Jackson" wrote:

> >It does say "index NO".  ...
>
> Well, that would certainly explain things.  :-)
>
> >How do I change this?
>
> Add "index yes" to the dumptype.  If you want indexing for everything,
> change the "global" dumptype and that should apply to all the other
> dumptypes, assuming any you defined yourself include it.
>
> >Robert L. Harris
>
> John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
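In amanda.conf terms, Jackson's suggestion looks something like this (the dumptype names are illustrative, not taken from the thread):

```
# Enable indexing once, in a shared dumptype...
define dumptype global {
    comment "options common to all dumptypes"
    index yes
}

# ...and let every other dumptype inherit it by naming it first:
define dumptype comp-user {
    global
    comment "user partitions"
    compress client fast
    priority medium
}
```

A dumptype that does not include the shared one (or that sets "index no" itself) still has to be edited individually.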







Re: Amanda isn't generating indexes.

2000-11-15 Thread Robert L. Harris

"John R. Jackson" wrote:

> >I still am not getting indexes on my Sun box running Amanda 2.4.1-p1.
> >...
> >Help?  ...
>
> Are you certain you have "index" set to yes?  What do you get when you
> run "su  -c amadmin  disklist  "?

I get this:
host powderday:
interface default
disk c0t0d0s0:
program "DUMP"
priority 1
dumpcycle 10
maxdumps 1
strategy STANDARD
compress CLIENT BEST
comprate 0.50 0.50
auth BSD
kencrypt NO
holdingdisk YES
record YES
index NO
skip-incr NO
skip-full NO

It does say "index NO".  How do I change this?

>
>
> Look at your amanda.conf file and find the indexdir parameter.  Look in
> that directory.  Do you have a subdirectory for each host being backed up?
> Within one of those, do you have a subdirectory for each disk being
> backed up (the name maybe slightly altered).  And within there do you
> have a gzip'ed file with a datestamp for a name?
>
> Do the host names match up with what amrecover says it is trying?
>
> Are the permissions and ownership of this tree such that the Amanda
> user can get down through it all?

The indexdir existed but there were not sub-dirs for each disk.

>
>
> Did you set the correct user in inetd.conf for amindexd such that it
> can access this information?
>

Yup.


>
>
> >Without and index, how do I recover a file?  ...
>
> The short answer is, read docs/RESTORE or www.backupcentral.com/amanda.html.
>
> The longer answer is use "amadmin  find ..." to find out what
> tapes have the images you need and what file on those tapes.  Mount them
> and use amrestore to pick off the image you want (it's usually faster to
> do the fsf yourself), then pipe that across the network to your client
> and into the restore program.
>
> >... Can indexes be regenerated?
>
> Not yet.  It's harder than you would think because it requires sending
> image back to the client and having it regenerate the index and ship it
> back to the server.
>
> >Robert
>
> John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]







Amanda isn't generating indexes.

2000-11-15 Thread Robert L. Harris


I still am not getting indexes on my Sun box running Amanda 2.4.1-p1.
I have tried to apply patch 105722-05, but it failed because it's
already there.

Help?  Without an index, how do I recover a file?  Can I?  Can indexes
be regenerated?

Robert








Re: amrecover testing

2000-11-13 Thread Robert L. Harris


I already have this one, apparently.

Talking to Henk Martijn, it looks like indexes just weren't installed by
default and I didn't add them.  I need to add a configuration option
to tell it to turn on indexing.  Do you have something like "index yes"
in one of your files?

Robert

Josh Huston wrote:

> Apparently you are using the Solaris platform.  There is a bug in
> Solaris's ufsrestore command.  The bug affects only large filesystems
> with a lot of files (e.g. over 30,000 files).  You will need to
> download the patch from sunsolve.sun.com to fix the problem.
>
> With the bug, ufsrestore is unable to generate a file listing for the
> amindex server.  When the sendbackup process on the client side makes a
> dump, it pipes the dump to ufsrestore to collect index information to
> be sent to the Amanda index server.  What happens is that a really
> large dump terminates with an "xtrmap: too many map entries" error, and
> the file listing never reaches the amindex server.  Apparently the
> error never showed in the debug files generated by sendbackup; I
> discovered the message only when I ran ufsrestore directly against the
> dump to figure out why it never generated a file listing.  So the patch
> listed below will fix the xtrmap bug, and the file listing will be
> generated for the amindex server.
>
> I don't know which version or platform you are using but I'll list
> all patch numbers for each platform.
>
> SunOS 5.5.1/SPARC 104490-06
> SunOS 5.5.1/x86   104491-06
> SunOS 5.6/SPARC   105722-05
> SunOS 5.6/x86 104723-05
> SunOS 5.7/SPARC   106793-05
> SunOS 5.7/x86 106794-05
>
> Solaris 8 patches are available only to customers with a support
> contract with Sun:
>
> SunOS 5.8/SPARC   109091-02
> SunOS 5.8/SPARC   109092-02
>
> If you don't have a support contract with Sun, let me know and
> I can forward you the patch file for Solaris 8.
>
> Josh
>
> "Robert L. Harris" wrote:
>
> > Ok,
> >   I decided to try a recover before I NEEDED to do one.  I just did an
> > "amoverview"
> > and it showed a level 0 on my dumpserver last thurs, and level 1 on
> > friday.  I
> > followed with this:
> >
> > {0}:powderday:/>amrecover
> > AMRECOVER Version 2.4.1p1. Contacting server on
> > powderday.vail.agency.com ...
> > 220 powderday AMANDA index server (2.4.1p1) ready.
> > 200 Access OK
> > Setting restore date to today (2000-11-13)
> > 200 Working date set to 2000-11-13.
> > 200 Config set to DailySet1.
> > 501 No index records for host: powderday.vail.agency.com. Invalid?
> > Trying powderday.vail.agency.com ...
> > 501 No index records for host: powderday.vail.agency.com. Invalid?
> > Trying powderday.eriver.com ...
> > 501 No index records for host: powderday.eriver.com. Invalid?
> > Trying powderday ...
> > 501 No index records for host: powderday. Invalid?
> > Trying loghost ...
> > 501 No index records for host: loghost. Invalid?
> > amrecover> quit
> >
> > I have this in my disk list:
> > powderday c0t0d0s0 generic-cb   # /
> > powderday c0t0d0s3 generic-cb   # /var
> > powderday c0t0d0s4 generic-cb   # /opt
> > powderday c0t0d0s6 generic-cb   # /usr
> > powderday c0t1d0s6 generic-cb   # /export/home
> >
> > Thoughts?
> >
> > I put "powderday.vail.agency.com root" in my .amandahosts because
> > it complained I wasn't allowed to connect.  That error went away.
> >
> > Robert
> >







amrecover testing

2000-11-13 Thread Robert L. Harris



Ok,
  I decided to try a recover before I NEEDED to do one.  I just did an
"amoverview" and it showed a level 0 on my dump server last Thursday and
a level 1 on Friday.  I followed with this:

{0}:powderday:/>amrecover
AMRECOVER Version 2.4.1p1. Contacting server on
powderday.vail.agency.com ...
220 powderday AMANDA index server (2.4.1p1) ready.
200 Access OK
Setting restore date to today (2000-11-13)
200 Working date set to 2000-11-13.
200 Config set to DailySet1.
501 No index records for host: powderday.vail.agency.com. Invalid?
Trying powderday.vail.agency.com ...
501 No index records for host: powderday.vail.agency.com. Invalid?
Trying powderday.eriver.com ...
501 No index records for host: powderday.eriver.com. Invalid?
Trying powderday ...
501 No index records for host: powderday. Invalid?
Trying loghost ...
501 No index records for host: loghost. Invalid?
amrecover> quit

I have this in my disk list:
powderday c0t0d0s0 generic-cb   # /
powderday c0t0d0s3 generic-cb   # /var
powderday c0t0d0s4 generic-cb   # /opt
powderday c0t0d0s6 generic-cb   # /usr
powderday c0t1d0s6 generic-cb   # /export/home

Thoughts?

I put "powderday.vail.agency.com root" in my .amandahosts because
it complained I wasn't allowed to connect.  That error went away.

Robert








Change Tape Cycle?

2000-11-10 Thread Robert L. Harris


I originally set up my system with 16 tapes.  I got some more in and
want to use 25.  My job last night was on tape 16 and, according to the
report, it expects tape 1 tonight.  My amanda.conf says there are 25
tapes in the cycle.  How do I get it to plan on more tapes?  Do I need
to manually edit the tapelist file?
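For reference, the usual way to grow a rotation is not to hand-edit tapelist but to raise tapecycle and label the new tapes with amlabel, which appends them to the tapelist itself.  A hedged sketch, assuming the config is named DailySet1 and the labelstr matches TAPE followed by digits:

```
# amanda.conf: tell the planner there are now 25 tapes in rotation
tapecycle 25 tapes

# Then, as the Amanda user, label each new tape so taper will accept it:
#   amlabel DailySet1 TAPE17
#   ...
#   amlabel DailySet1 TAPE25
```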







How many dumpers active?

2000-11-09 Thread Robert L. Harris


How do I know how many dumpers are active?  I just changed my config to
allow 2 while I figure out this problem with the disk.  It said both
dumpers were active, but there were no dumper processes on the one
client box in my disklist file.
Robert
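One rough way to check from the server side (a sketch; it assumes the Amanda 2.4 worker processes are literally named "dumper", which is how they show up in ps on most builds):

```shell
# Count running Amanda dumper processes; prints 0 when none are active.
# The [d] bracket trick stops grep from matching its own command line.
count=$(ps -ef | grep '[d]umper' | wc -l)
echo "active dumpers: ${count}"
```

amstatus reports what the driver thinks is happening, while ps shows what is actually running; when the two disagree, it often means a dump died without the driver noticing.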








Dump failing on 1 disk.

2000-11-09 Thread Robert L. Harris


I'm getting this:
{0}:powderday:/usr/local/etc/amanda>amstatus normal
Using /usr/local/etc/amanda/logs/amdump

nose.incyte.agency.com:c1t6d0s6  0 6380292k dumping to tape

SUMMARY          part     real  estimated
                          size       size
partition       :   1
estimated       :   1            6380292k
failed          :   0                  0k
wait for dumping:   0                  0k
dumping to tape :   1            6380292k
dumping         :   0       0k         0k
dumped          :   0       0k         0k
wait for writing:   0       0k         0k
writing to tape :   0       0k         0k
failed to tape  :   0       0k         0k
taped           :   0       0k         0k
4 dumpers idle  : not-idle
taper writing, tapeq: 0
network free kps: 228
holding space   : 6519318

I put in a new tape, commented the rest of the disks out of my table
and started amdump.  It's been sitting there for about an hour now.

Thoughts?







Re: My backups don't finish...

2000-11-08 Thread Robert L. Harris


Yesterday I let the whole thing run from start (midnight) until about
6:00pm.  Last night's (seen below) ran from midnight until 9:00am.

Robert

Josh Huston wrote:

> Robert,
>
> nose.incyte.agency.com:c1t6d0s6  0 6380292k dumping to tape
>
> The line above indicates that it is currently dumping directly
> to the tape.  It may take a while to dump a 6 GB partition depending
> on the bandwidth of your internal network and the ethernet card.
> 10Base-T does not move data as fast as 100Base-T.  For me, dumping
> 5 to 6 GB of data over 10Base-T from one of the older Sun SPARCservers
> to a 100Base-T backup server took about 2 to 3 hours.  If it were 20 GB
> of data, it would take about 8 hours.  Just let it finish the backup;
> once it does, it will send you an email.
>
> Josh
>
> -Original Message-
> From: Robert L. Harris
> To: Amanda
> Sent: 11/8/00 9:12 AM
> Subject: My backups don't finish...
>
> Using /usr/local/etc/amanda/logs/amdump
>
> nose.incyte.agency.com:c0t0d0s0        1      32k finished
> nose.incyte.agency.com:c0t0d0s5        1     384k finished
> nose.incyte.agency.com:c0t0d0s7        1     544k finished
> nose.incyte.agency.com:c0t1d0s7        1      32k finished
> nose.incyte.agency.com:c1t0d0s6        1      32k finished
> nose.incyte.agency.com:c1t6d0s6        0 6380292k dumping to tape
> devastator.incyte.agency.com:c0t0d0s0  1      32k finished
> devastator.incyte.agency.com:c0t0d0s5  1      32k finished
> devastator.incyte.agency.com:c0t0d0s7  1      32k finished
> devastator.incyte.agency.com:c0t1d0s7  1   37952k finished
> knee.incyte.agency.com:c0t0d0s0        1  201376k finished
> knee.incyte.agency.com:c0t0d0s5        1     832k finished
> knee.incyte.agency.com:c0t0d0s7        1     128k finished
> knee.incyte.agency.com:c0t1d0s6        1  217696k finished
> spirit.incyte.agency.com:c0t0d0s0      1  211456k finished
> spirit.incyte.agency.com:c0t0d0s5      1      64k finished
> spirit.incyte.agency.com:c0t0d0s7      1     480k finished
> vanguard.incyte.agency.com:c0t0d0s0    1     544k finished
> vanguard.incyte.agency.com:c0t0d0s5    1      32k finished
> vanguard.incyte.agency.com:c0t0d0s7    1     128k finished
>
> SUMMARY          part     real  estimated
>                           size       size
> partition       :  20
> estimated       :  20            6794287k
> failed          :   0                  0k
> wait for dumping:   0                  0k
> dumping to tape :   1            6380292k
> dumping         :   0       0k         0k
> dumped          :  19  671808k   413995k
> wait for writing:   0       0k         0k
> writing to tape :   0       0k         0k
> failed to tape  :   0       0k         0k
> taped           :  19  671808k   413995k
> 4 dumpers idle  : not-idle
> taper writing, tapeq: 0
> network free kps: 228
> holding space   : 6519303
>
> I don't get any errors.  It doesn't auto-offline the tape and
> an "mt status" reports the tape busy.  Thoughts?
>
> Robert
>







My backups don't finish...

2000-11-08 Thread Robert L. Harris

Using /usr/local/etc/amanda/logs/amdump

nose.incyte.agency.com:c0t0d0s0        1      32k finished
nose.incyte.agency.com:c0t0d0s5        1     384k finished
nose.incyte.agency.com:c0t0d0s7        1     544k finished
nose.incyte.agency.com:c0t1d0s7        1      32k finished
nose.incyte.agency.com:c1t0d0s6        1      32k finished
nose.incyte.agency.com:c1t6d0s6        0 6380292k dumping to tape
devastator.incyte.agency.com:c0t0d0s0  1      32k finished
devastator.incyte.agency.com:c0t0d0s5  1      32k finished
devastator.incyte.agency.com:c0t0d0s7  1      32k finished
devastator.incyte.agency.com:c0t1d0s7  1   37952k finished
knee.incyte.agency.com:c0t0d0s0        1  201376k finished
knee.incyte.agency.com:c0t0d0s5        1     832k finished
knee.incyte.agency.com:c0t0d0s7        1     128k finished
knee.incyte.agency.com:c0t1d0s6        1  217696k finished
spirit.incyte.agency.com:c0t0d0s0      1  211456k finished
spirit.incyte.agency.com:c0t0d0s5      1      64k finished
spirit.incyte.agency.com:c0t0d0s7      1     480k finished
vanguard.incyte.agency.com:c0t0d0s0    1     544k finished
vanguard.incyte.agency.com:c0t0d0s5    1      32k finished
vanguard.incyte.agency.com:c0t0d0s7    1     128k finished

SUMMARY          part     real  estimated
                          size       size
partition       :  20
estimated       :  20            6794287k
failed          :   0                  0k
wait for dumping:   0                  0k
dumping to tape :   1            6380292k
dumping         :   0       0k         0k
dumped          :  19  671808k   413995k
wait for writing:   0       0k         0k
writing to tape :   0       0k         0k
failed to tape  :   0       0k         0k
taped           :  19  671808k   413995k
4 dumpers idle  : not-idle
taper writing, tapeq: 0
network free kps: 228
holding space   : 6519303

I don't get any errors.  It doesn't auto-offline the tape and
an "mt status" reports the tape busy.  Thoughts?

Robert








Re: Compile error?

2000-11-06 Thread Robert L. Harris


They look installed.

{130}:vanguard.incyte.agency.com:/data/weblogic>which yacc
/usr/ccs/bin/yacc
{0}:vanguard.incyte.agency.com:/data/weblogic>which lex
/usr/ccs/bin/lex

Thoughts?


Remy Chibois wrote:

> On Tue, Oct 24, 2000 at 04:32:39PM -0600, Robert L. Harris wrote:
> >
> > I've built amanda so far on about 6 clients.  On # 7 I'm getting this:
> >
> > gcc -g -o .libs/amrecover amrecover.o display_commands.o extract_list.o
> > help.o set_commands.o uparse.o uscan.o -ll -R/usr/local/lib
> > ../client-src/.libs/libamclient.so -lgen -lm -ltermcap -lsocket -lnsl
> > -lintl -R/usr/local/lib ../tape-src/.libs/libamtape.so -lgen -lm
> > -ltermcap -lsocket -lnsl -lintl -R/usr/local/lib
> > ../common-src/.libs/libamanda.so -lgen -lm -ltermcap -lsocket -lnsl
> > -lintl -lgen -lm -ltermcap -lsocket -lnsl -lintl
> > Undefined   first referenced
> >  symbol in file
> > yy_scan_string  uscan.o
>
> This symbol (function) appears to be a Yacc (parser generator) function.
>
> Is Lex/Yacc or Flex/Bison installed on your machine?
>
> Rémy
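Worth noting: yy_scan_string is a scanner routine, not a parser one.  Scanners generated by GNU flex provide it; scanners generated by Solaris /usr/ccs/bin/lex do not.  So if make regenerated uscan.c with Solaris lex on this one client, the symbol would go missing at link time.  A hedged sketch of the usual workaround (the path and version come from the thread; the configure options are whatever was used on the other clients):

```
# Put GNU flex ahead of /usr/ccs/bin/lex, then force the scanner to be
# regenerated with flex so its skeleton supplies yy_scan_string:
PATH=/usr/local/bin:$PATH; export PATH
cd /home/roharris/Amanda/amanda-2.4.1p1
rm -f recover-src/uscan.c
./configure ...        # same options as the other six clients
make
```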







Compile error?

2000-10-24 Thread Robert L. Harris


I've built amanda so far on about 6 clients.  On # 7 I'm getting this:

gcc -g -o .libs/amrecover amrecover.o display_commands.o extract_list.o
help.o set_commands.o uparse.o uscan.o -ll -R/usr/local/lib
../client-src/.libs/libamclient.so -lgen -lm -ltermcap -lsocket -lnsl
-lintl -R/usr/local/lib ../tape-src/.libs/libamtape.so -lgen -lm
-ltermcap -lsocket -lnsl -lintl -R/usr/local/lib
../common-src/.libs/libamanda.so -lgen -lm -ltermcap -lsocket -lnsl
-lintl -lgen -lm -ltermcap -lsocket -lnsl -lintl
Undefined   first referenced
 symbol in file
yy_scan_string  uscan.o
ld: fatal: Symbol referencing errors. No output written to
.libs/amrecover
*** Error code 1
make: Fatal error: Command failed for target `amrecover'
Current working directory
/home/roharris/Amanda/amanda-2.4.1p1/recover-src
*** Error code 1
make: Fatal error: Command failed for target `all-recursive'

Anyone seen this?

Robert








3 Questions on Disks

2000-10-24 Thread Robert L. Harris


OK,
  I'm adding more and more clients to my server.  So far so good.  I've
run into 3 things though.

1) I have a device /dev/md/dsk/d10, which is a DiskSuite disk.  How do I
   specify it in the config?  Or do I just use "d10" instead of the
   c0t0d0s0 device?

2) Is it possible to back up a directory on a filesystem instead of the
   whole filesystem?  I have the database exporting to /data/exports,
   but the live data is in /data/oradata, and /data is one large
   filesystem.  Can I get just /data/exports?

3) What is this from amstatus:

nose.mydomain.com:c1t0d0s6  1 [dumps way too big, must skip
incremental dumps]

When will it pick up and dump this filesystem?

Robert
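On question 2: with program "DUMP" Amanda can only take whole filesystems, but a dumptype using program "GNUTAR" can back up a single directory.  A sketch (the dumptype name and its "global" parent are made up for illustration):

```
# amanda.conf
define dumptype tar-dir {
    global
    comment "directory-level backup via GNU tar"
    program "GNUTAR"
}

# disklist: the disk field may be a directory when the dumptype uses GNUTAR
nose.mydomain.com  /data/exports  tar-dir
```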
