[no subject]

2004-01-06 Thread Michael Hollmann
unsubscribe <[EMAIL PROTECTED]>



unsubscribe

2004-01-06 Thread Michael Hollmann
unsubscribe <[EMAIL PROTECTED]>



chg-zd-mtx drive question ?

2004-01-06 Thread Pierre-Henry DELIEGE
Dear,
I'm really new to Amanda and I'm trying to understand the tpchanger
chg-zd-mtx before deploying it with a SCSI library containing 72 slots
and 4 drives.
I don't understand how Amanda uses the tpchanger with multiple
drives. How does it work?
In the amanda.conf file, you can specify only one drive (tapedev
"/dev/nsa1").
In the changer.conf file, you can also specify only one drive (driveslot=0).
So how can we use more than one drive? Do we have to make one config file
per drive?

Thanks in advance for your help.


Pierre-Henry DELIEGE





error ?

2004-01-06 Thread Brian Cuttler

Hello amanda users,

My report from last night has the following error(?) or does it ?

FAILURE AND STRANGE DUMP SUMMARY:
  lyra   / lev 0 FAILED [write_tapeheader: No space left on device]

FAILED AND STRANGE DUMP DETAILS:

/-- lyra   / lev 0 FAILED [write_tapeheader: No space left on device]

But the "Dumper Summary" seems to show that all is ok and I've put
about 2x this amount of data to tape other times (LTO 220 drive).

Was there actually an error ? If so what was/is it ?

Full report follows.

thanks,

Brian

---
   Brian R Cuttler                 [EMAIL PROTECTED]
   Computer Systems Support        (v) 518 486-1697
   Wadsworth Center                (f) 518 473-6384
   NYS Department of Health        Help Desk 518 473-0773



- Forwarded message from Amanda at NNewton -

These dumps were to tape NEWTONR05.
The next tape Amanda expects to use is: NEWTONR06.

FAILURE AND STRANGE DUMP SUMMARY:
  lyra   / lev 0 FAILED [write_tapeheader: No space left on device]


STATISTICS:
                              Total       Full      Daily
                           --------   --------   --------
Estimate Time (hrs:min)       0:15
Run Time (hrs:min)            6:12
Dump Time (hrs:min)           7:33       6:29       1:04
Output Size (meg)         100238.1    97909.6     2328.5
Original Size (meg)       100238.1    97909.6     2328.5
Avg Compressed Size (%)       --         --         --     (level:#disks ...)
Filesystems Dumped              20         15          5   (1:5)
Avg Dump Rate (k/s)         3776.1     4296.1      620.1

Tape Time (hrs:min)           4:03       4:01       0:02
Tape Size (meg)           100238.2    97909.6     2328.5
Tape Used (%)                 83.6       81.6        1.9   (level:#disks ...)
Filesystems Taped               20         15          5   (1:5)
Avg Tp Write Rate (k/s)     7029.2     6928.4    18097.1

USAGE BY TAPE:
  Label        Time      Size    %    Nb
  NEWTONR05    4:03  100238.2  83.6   20


FAILED AND STRANGE DUMP DETAILS:

/-- lyra   / lev 0 FAILED [write_tapeheader: No space left on device]
sendbackup: start [lyra:/ level 0]
sendbackup: info BACKUP=/usr/sbin/ufsdump
sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
sendbackup: info end
\


NOTES:
  driver: WARNING: /amanda/workr: 73400320 KB requested, but only 10129652 KB available.
  taper: tape NEWTONR05 kb 102644512 fm 20 [OK]


DUMP SUMMARY:
                                         DUMPER STATS                TAPER STATS
HOSTNAME DISK       L   ORIG-KB    OUT-KB COMP%  MMM:SS    KB/s   MMM:SS    KB/s
-------- ---------- - --------- --------- ----- ------- ------- -------- -------
csserv   /          0   2430912   2430912    --    7:01  5779.4     1:23 29140.0
csserv   /opt       0   1584480   1584480    --    4:34  5793.1     1:13 21852.4
csserv   /source    1       288       288    --    4:04     1.2     0:01   288.0
csserv   /source2   0   6992608   6992608    --   21:41  5374.8    21:41  5374.2
csserv   /var/spool 0  17794048  17794048    --   45:18  6546.2    45:18  6546.0
denali   /          0    479840    479840    --   16:14   492.6     1:39  4822.8
denali   /home9     1     38112     38112    --    1:38   387.1     0:04  8691.0
denali   /images    0    378112    378112    --    6:26   979.9     0:23 16619.7
lyra     /          0   2015936   2015936    --    7:04  4754.0     2:14 15096.3
lyra     /db8       0  34624864  34624864    --  110:45  5210.6   110:45  5210.5
lyra     /db9       0  16814464  16814464    --   42:32  6587.7    42:33  6587.3
lyra     /ndevelop  1       224       224    --    2:54     1.3     0:02   132.2
lyra     /ndevelop2 0   4811424   4811424    --   12:34  6380.3     4:16 18775.4
lyra     /space     0    215232    215232    --    0:46  4715.6     0:11 19877.4
mira     /          0   1243680   1243680    --   12:24  1672.2     1:13 16929.4
mira     /bkup-db   0   4487040   4487040    --   19:18  3875.8     2:45 27193.9
mira     /db4       1   2345760   2345760    --   55:23   705.8     2:03 19081.4
mira     /home1     0   2973024   2973024    --   69:00   718.1     3:13 15434.5
panther  /          0   3413763   3413792    --   13:21  4261.0     2:24 23764.9
panther  /data      1        20        32    --    0:05     3.9     0:02    18.3

(brought to you by Amanda version 2.4.4)

- End of forwarded message from Amanda at NNewton -



Re: chg-zd-mtx drive question ?

2004-01-06 Thread Gene Heskett
On Tuesday 06 January 2004 09:50, Pierre-Henry DELIEGE wrote:
>Dear,
>I.m really new with Amanda and I'm trying to understand the
> tpchanger chg-zd-mtx before implementing it with a SCSI library
> containing 72 slots and 4 drives.
>I have a misunderstanding of the usage of Amanda and the tpchanger
> with multiple drives. How does it work ?
>In the amanda.conf file, you can specify only one drive (tapedev
>"/dev/nsa1")
>In the changer.conf file, you can only specify one drive
> (driveslot=0) So how can we use more than one drive ? Do we have to
> make a config file per drive ?
>
>Thanks in advance for your help.
>
>
>Pierre-Henry DELIEGE
First thing is to increase the number_configs below
---my chg.scsi.conf--
number_configs  1
eject   0   # Tapedrives need an eject command
sleep   15  # Seconds to wait until the tape gets ready
cleanmax100 # How many times could a cleaning tape get used
scsitapedev /dev/sg0 # test entry per Jean-Louis request
changerdev  /dev/sg1
#
# Next comes the data for drive 0
#
config  0
drivenum0
dev /dev/nst0   # the device that is used for the tapedrive 0
startuse0   # The slots associated with the drive 0
enduse  3   # 
statfile    /usr/local/etc/amanda/tape-slot   # The file where the actual slot is stored
#cleancart  3   # the slot where the cleaning cartridge for drive 0 is located
#cleanfile  /usr/local/etc/amanda/tape-clean  # The file where the cleanings are recorded
usagecount  /usr/local/etc/amanda/totaltime

# This is the end
---
Then append:
config  1
drivenum1
etc
etc...

This is for a chg-scsi.conf system, but I'd think that chg-zd-mtx
should behave similarly.  If not, then you might want to take a look
at chg-scsi.  It's running a small (DDS2) single-drive changer
here.
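Filled out for the original poster's 72-slot, four-drive library, the appended stanzas might look like the sketch below.  The device name, slot range, and statfile path are guesses for illustration, not tested values, and number_configs at the top would become 4:

```
config      1
drivenum    1
dev         /dev/nst1   # the device for tapedrive 1
startuse    18          # the slots associated with drive 1
enduse      35
statfile    /usr/local/etc/amanda/tape-slot-1
```

and likewise config 2 and config 3 for the remaining drives, each with its own slot range and statfile.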

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.22% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



amanda client on MacOS

2004-01-06 Thread Paul Root
Hi,
I've been running various versions
of amanda (client only) on my MacOS X (10.1-10.3)
over the last year or more. A couple weeks back it
broke without warning. I'm going to say it happened
during a System Update.
Since then I grabbed 2.4.4p1 and compiled it
up. It does the same thing.
The server (FreeBSD 4.9Stable, amanda 2.4.3b3)
connects and runs the gnutar check for size, then when
it runs, the Mac times out waiting for a random local
port.
Here's a sendback...debug

% cat sendbackup.20031223103857.debug
sendbackup: debug 1 pid 1710 ruid 79 euid 79: start at
Tue Dec 23 10:38:57 2003
/usr/local/libexec/sendbackup-2.4.4p1: version 2.4.4p1
  parsed request as: program `GNUTAR'
 disk `/Users'
 device `/Users'
 level 0
 since 1970:1:1:0:0:0
 options
`|;auth=bsd;compress-fast;'
sendbackup: try_socksize: send buffer size is 65536
sendbackup: time 0.000: stream_server: waiting for
connection: 0.0.0.0.51899
sendbackup: time 0.000: stream_server: waiting for
connection: 0.0.0.0.51900
sendbackup: time 0.001: waiting for connect on 51899,
then 51900
sendbackup: time 30.002: stream_accept: timeout after
30 seconds
sendbackup: time 30.002: timeout on data port 51899
sendbackup: time 60.002: stream_accept: timeout after
30 seconds
sendbackup: time 60.002: timeout on mesg port 51900
sendbackup: time 60.003: pid 1710 finish time Tue Dec
23 10:39:57 2003
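The timeouts in the log are just TCP accepts with a 30-second deadline on two ephemeral ports.  A minimal sketch of the same pattern (illustrative only, not Amanda's actual code; a short timeout stands in for Amanda's 30 seconds):

```python
import socket

def wait_for_connect(timeout: float = 1.0) -> bool:
    """Listen on a kernel-chosen port (like 51899 above) and wait for
    the server to connect back, as sendbackup's stream_server does."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        srv.bind(("0.0.0.0", 0))    # random local port
        srv.listen(1)
        srv.settimeout(timeout)
        try:
            conn, _ = srv.accept()  # blocks until a connect arrives
            conn.close()
            return True
        except socket.timeout:      # -> "stream_accept: timeout after N seconds"
            return False
    finally:
        srv.close()
```

A client-side firewall that silently drops the inbound connection produces exactly this timeout pattern.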


Temporarily, I've changed it to use samba to back up
the data, but I'd rather do it the 'right' way.
I'm just on digest.

Thanks,
Paul.
--
__
Paul T. Root <[EMAIL PROTECTED]> /_ \
600 Stinson Blvd N.E., Fl 1S  /  /||  \\
Minneapolis, MN  55413   ||\/ ||  _ |
PAGER: +1 (877) 693-7155 ||   ||   ||
WORK: +1 (612) 664-3385   \   ||__//
   \__/






Re: amanda client on MacOS

2004-01-06 Thread Mitch Collinsworth

On Tue, 6 Jan 2004, Paul Root wrote:

> it runs, the Mac times out waiting for a random local
> port.

Any firewalls involved?

-Mitch


Re: amanda client on MacOS

2004-01-06 Thread Paul Root
Nope. Same subnet

Mitch Collinsworth wrote:

> On Tue, 6 Jan 2004, Paul Root wrote:
>
>> it runs, the Mac times out waiting for a random local
>> port.
>
> Any firewalls involved?
>
> -Mitch




--
__
Paul T. Root <[EMAIL PROTECTED]> /_ \
600 Stinson Blvd N.E., Fl 1S  /  /||  \\
Minneapolis, MN  55413   ||\/ ||  _ |
PAGER: +1 (877) 693-7155 ||   ||   ||
WORK: +1 (612) 664-3385   \   ||__//
   \__/





Re: amanda client on MacOS

2004-01-06 Thread Mitch Collinsworth


On Tue, 6 Jan 2004, Paul Root wrote:

> Nope. Same subnet

Software firewall on client?



> Mitch Collinsworth wrote:
>
> > On Tue, 6 Jan 2004, Paul Root wrote:
> >
> >
> >>it runs, the Mac times out waiting for a random local
> >>port.
> >
> >
> > Any firewalls involved?
> >
> > -Mitch


Re: error ?

2004-01-06 Thread Jon LaBadie
On Tue, Jan 06, 2004 at 10:23:34AM -0500, Brian Cuttler wrote:
> 
> Hello amanda users,
> 
> My report from last night has the following error(?) or does it ?
> 
> 
> But the "Dumper Summary" seems to show that all is ok and I've put
> about 2x this amount of data to tape other times (LTO 220 drive).
> 
> Was there actually an error ? If so what was/is it ?
> 


I pulled just a few things out:


> USAGE BY TAPE:
>   Label        Time      Size    %    Nb
>   NEWTONR05    4:03  100238.2  83.6   20


Completed DLE's did not fill the tape.

> NOTES:
>   taper: tape NEWTONR05 kb 102644512 fm 20 [OK]

And this report seems to say nothing else was attempted.  Full tapes
show more total data written than the successful data reported
above, and they show something like "[short write]".


> NOTES:
>   driver: WARNING: /amanda/workr: 73400320 KB requested,
>  but only 10129652 KB available.


Is this referring to your holding disk, maybe?


> FAILED AND STRANGE DUMP DETAILS:
> 
> /-- lyra   / lev 0 FAILED [write_tapeheader: No space left on device]
> sendbackup: start [lyra:/ level 0]
> sendbackup: info BACKUP=/usr/sbin/ufsdump
> sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
> sendbackup: info end

And maybe this is a report saying it could not write to the holding
disk, so it switched to writing direct to tape because the holding disk
was full?

> csserv  /source2   0   6992608   6992608   --   21:41 5374.8  21:41 5374.2
> csserv  /var/spool 0  17794048  17794048   --   45:18 6546.2  45:18 6546.0
> lyra/db8   0  34624864  34624864   --  110:45 5210.6 110:45 5210.5
> lyra/db9   0  16814464  16814464   --   42:32 6587.7  42:33 6587.3

I notice a lot of your big DLE's show the same dump and tape times.
That always suggests to me that they were dumped directly to tape.

> mira/db4   1   2345760   2345760   --   55:23  705.8   2:03 19081.4
> mira/home1 0   2973024   2973024   --   69:00  718.1   3:13 15434.5
> panther /  0   3413763   3413792   --   13:21 4261.0   2:24 23764.9
> panther /data  12032   --0:053.9   0:02   18.3

Yet your other, smaller ones seem to dump and tape at different rates.
Maybe they fit in the small holding space available.
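Jon's same-times heuristic can be sketched mechanically (a hypothetical helper, not an Amanda tool): a DLE whose dumper and taper times match almost certainly streamed direct to tape instead of going through the holding disk.

```python
def direct_to_tape(rows):
    """rows: (host, disk, dump_time, tape_time) tuples from the dump
    summary; return the DLEs that look like direct-to-tape runs."""
    return [(host, disk) for host, disk, dump_t, tape_t in rows
            if dump_t == tape_t]

# Times taken from the report above.
rows = [
    ("csserv", "/var/spool", "45:18",  "45:18"),   # same -> direct to tape
    ("lyra",   "/db8",       "110:45", "110:45"),  # same -> direct to tape
    ("mira",   "/home1",     "69:00",  "3:13"),    # differs -> via holding disk
]
```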

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: error ?

2004-01-06 Thread Jon LaBadie
On Tue, Jan 06, 2004 at 10:23:34AM -0500, Brian Cuttler wrote:
> 
> Hello amanda users,
> 
> My report from last night has the following error(?) or does it ?
> 

Forgot to ask,

what does amverifyrun tell you?

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: error ?

2004-01-06 Thread Brian Cuttler
Jon,

Thanks (and happy new years).

That makes good sense - I should have realized it, but I had
something else playing in the back of my mind, namely...

FAILURE AND STRANGE DUMP SUMMARY:
  wcnotes/maildb lev 2 FAILED [no more holding disk space]

which was produced by 2.4.2p2, whereas this new error came from
2.4.4.

Well, it addresses the issues I'd brought up in my previous
out-of-spool email message (back on 12-Dec-2003).

Different results from running out of spool between the two
different versions of the amanda server.

On a completely different note - my 2.4.4 version of amanda
installs on my other Solaris 9 system but will not run.

Can't seem to resolve this library issue. Any ideas ?

# /usr/local/sbin/amcheck hal
ld.so.1: /usr/local/sbin/amcheck: fatal: libintl.so.2: open failed: No such file or 
directory
Killed

thanks,

Brian


> On Tue, Jan 06, 2004 at 10:23:34AM -0500, Brian Cuttler wrote:
> > 
> > Hello amanda users,
> > 
> > My report from last night has the following error(?) or does it ?
> > 
> > 
> > But the "Dumper Summary" seems to show that all is ok and I've put
> > about 2x this amount of data to tape other times (LTO 220 drive).
> > 
> > Was there actually an error ? If so what was/is it ?
> > 
> 
> 
> I pulled just a few things out:
> 
> 
> > USAGE BY TAPE:
> >   Label   Time  Size  %Nb
> >   NEWTONR05   4:03  100238.2   83.620
> 
> 
> Completed DLE's did not fill the tape.
> 
> > NOTES:
> >   taper: tape NEWTONR05 kb 102644512 fm 20 [OK]
> 
> And this report seems to say nothing else was attempted.  Full tapes
> show more total data written then the successful data written report
> above.  And they show something like "[short write]"
> 
> 
> > NOTES:
> >   driver: WARNING: /amanda/workr: 73400320 KB requested,
> >  but only 10129652 KB available.
> 
> 
> Is this refering to your holding disk maybe?
> 
> 
> > FAILED AND STRANGE DUMP DETAILS:
> > 
> > /-- lyra   / lev 0 FAILED [write_tapeheader: No space left on device]
> > sendbackup: start [lyra:/ level 0]
> > sendbackup: info BACKUP=/usr/sbin/ufsdump
> > sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
> > sendbackup: info end
> 
> And maybe this is a report saying could not write to holding disk
> so switching to direct to tape because of full holding disk?
> 
> > csserv  /source2   0   6992608   6992608   --   21:41 5374.8  21:41 5374.2
> > csserv  /var/spool 0  17794048  17794048   --   45:18 6546.2  45:18 6546.0
> > lyra/db8   0  34624864  34624864   --  110:45 5210.6 110:45 5210.5
> > lyra/db9   0  16814464  16814464   --   42:32 6587.7  42:33 6587.3
> 
> I notice a lot of your big DLE's show the same dump and tape times.
> That always suggests to me dump directly to tape.
> 
> > mira/db4   1   2345760   2345760   --   55:23  705.8   2:03 19081.4
> > mira/home1 0   2973024   2973024   --   69:00  718.1   3:13 15434.5
> > panther /  0   3413763   3413792   --   13:21 4261.0   2:24 23764.9
> > panther /data  12032   --0:053.9   0:02   18.3
> 
> Yet your other, smaller ones seem to dump and tape at different rates.
> Maybe they fit in the small holding space available.
> 
> -- 
> Jon H. LaBadie  [EMAIL PROTECTED]
>  JG Computing
>  4455 Province Line Road(609) 252-0159
>  Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Re: amanda client on MacOS

2004-01-06 Thread Paul Root
err, well yes. Boy do I feel dumb. I'll have to test now.

Can I still have that on? Is there a way to configure the
proper ports? I'll ask my Mac expert.
Paul.
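If the firewall has to stay on, Amanda 2.4.x can be built to use a fixed port window so the firewall only needs to open that range.  The exact option names vary by version, so treat this as a sketch and check ./configure --help on your build:

```
./configure --with-portrange=50000,50100 \
            --with-udpportrange=850,854 \
            ...other options...
# then allow tcp 50000-50100 (and the udp range) through the Mac's firewall
```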

Mitch Collinsworth wrote:

> On Tue, 6 Jan 2004, Paul Root wrote:
>
>> Nope. Same subnet
>
> Software firewall on client?
>
>> Mitch Collinsworth wrote:
>>
>>> On Tue, 6 Jan 2004, Paul Root wrote:
>>>
>>>> it runs, the Mac times out waiting for a random local
>>>> port.
>>>
>>> Any firewalls involved?
>>>
>>> -Mitch





--
__
Paul T. Root <[EMAIL PROTECTED]> /_ \
600 Stinson Blvd N.E., Fl 1S  /  /||  \\
Minneapolis, MN  55413   ||\/ ||  _ |
PAGER: +1 (877) 693-7155 ||   ||   ||
WORK: +1 (612) 664-3385   \   ||__//
   \__/





Re: error ?

2004-01-06 Thread Jon LaBadie
On Tue, Jan 06, 2004 at 01:54:00PM -0500, Brian Cuttler wrote:
>
> On a completely different note - my 2.4.4 version of amanda
> installs on my other Solaris 9 system but will not run.
> 
> Can't seem to resolve this library issue. Any ideas ?
> 
> # /usr/local/sbin/amcheck hal
> ld.so.1: /usr/local/sbin/amcheck: fatal: libintl.so.2: open failed: No such file or 
> directory
> Killed
> 

libintl.so (symlinked to .so.1) exists on my Solaris 9 x86 system in /usr/lib.
It was installed from package SUNWcsl.

Check the existence of the library there, its permissions (r-x for non-owner),
and the package installation.  Use 'pkginfo -l SUNWcsl' to see if it is in there
and 'pkgchk SUNWcsl' to check whether it has been munged post-installation.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


RE: error ?

2004-01-06 Thread donald . ritchey
Brian:

From a thread on the sudo users' mailing list earlier today:

Check your environment for amcheck and ensure that the missing library
is not in a location that depends on LD_LIBRARY_PATH, which gets cleared
when running a setUID program whose real and effective users are not the
same.

Since you are probably running amcheck as the Amanda user ID and the amcheck
program is setUID root, if libintl.so.2 is not in one of the normally
searched library directories, it will not be found.  If you recompile
amcheck so that the path to the library is in the executable (not sure how
to do that with Solaris, but Tru64's cc uses the '-rpath' option to tell the
loader where to find other libraries), then amcheck will not depend on the
contents of LD_LIBRARY_PATH.

Best wishes for the New Year,

Donald L. (Don) Ritchey
E-mail:  [EMAIL PROTECTED]


-Original Message-
From: Brian Cuttler [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 06, 2004 12:54 PM
To: [EMAIL PROTECTED]
Cc: Chris Knight
Subject: Re: error ?


Jon,

Thanks (and happy new years).

That makes good sense - I should have realized it but I had
something else playing in the back of my mind. namely...

FAILURE AND STRANGE DUMP SUMMARY:
  wcnotes/maildb lev 2 FAILED [no more holding disk space]

which was produced by 2.4.2p2 (rather than this new error) by
2.4.4 as this error was.

Well, it addresses the issues I'd brought up in my previous
out-of-spool email message (back on 12-Dec-2003).

Different results from running out of spool between the two
different versions of the amanda server.

On a completely different note - my 2.4.4 version of amanda
installs on my other Solaris 9 system but will not run.

Can't seem to resolve this library issue. Any ideas ?

# /usr/local/sbin/amcheck hal
ld.so.1: /usr/local/sbin/amcheck: fatal: libintl.so.2: open failed: No such
file or directory
Killed

thanks,

Brian


> On Tue, Jan 06, 2004 at 10:23:34AM -0500, Brian Cuttler wrote:
> > 
> > Hello amanda users,
> > 
> > My report from last night has the following error(?) or does it ?
> > 
> > 
> > But the "Dumper Summary" seems to show that all is ok and I've put
> > about 2x this amount of data to tape other times (LTO 220 drive).
> > 
> > Was there actually an error ? If so what was/is it ?
> > 
> 
> 
> I pulled just a few things out:
> 
> 
> > USAGE BY TAPE:
> >   Label   Time  Size  %Nb
> >   NEWTONR05   4:03  100238.2   83.620
> 
> 
> Completed DLE's did not fill the tape.
> 
> > NOTES:
> >   taper: tape NEWTONR05 kb 102644512 fm 20 [OK]
> 
> And this report seems to say nothing else was attempted.  Full tapes
> show more total data written then the successful data written report
> above.  And they show something like "[short write]"
> 
> 
> > NOTES:
> >   driver: WARNING: /amanda/workr: 73400320 KB requested,
> >  but only 10129652 KB available.
> 
> 
> Is this refering to your holding disk maybe?
> 
> 
> > FAILED AND STRANGE DUMP DETAILS:
> > 
> > /-- lyra   / lev 0 FAILED [write_tapeheader: No space left on
device]
> > sendbackup: start [lyra:/ level 0]
> > sendbackup: info BACKUP=/usr/sbin/ufsdump
> > sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
> > sendbackup: info end
> 
> And maybe this is a report saying could not write to holding disk
> so switching to direct to tape because of full holding disk?
> 
> > csserv  /source2   0   6992608   6992608   --   21:41 5374.8  21:41
5374.2
> > csserv  /var/spool 0  17794048  17794048   --   45:18 6546.2  45:18
6546.0
> > lyra/db8   0  34624864  34624864   --  110:45 5210.6 110:45
5210.5
> > lyra/db9   0  16814464  16814464   --   42:32 6587.7  42:33
6587.3
> 
> I notice a lot of your big DLE's show the same dump and tape times.
> That always suggests to me dump directly to tape.
> 
> > mira/db4   1   2345760   2345760   --   55:23  705.8   2:03
19081.4
> > mira/home1 0   2973024   2973024   --   69:00  718.1   3:13
15434.5
> > panther /  0   3413763   3413792   --   13:21 4261.0   2:24
23764.9
> > panther /data  12032   --0:053.9   0:02
18.3
> 
> Yet your other, smaller ones seem to dump and tape at different rates.
> Maybe they fit in the small holding space available.
> 
> -- 
> Jon H. LaBadie  [EMAIL PROTECTED]
>  JG Computing
>  4455 Province Line Road(609) 252-0159
>  Princeton, NJ  08540-4322  (609) 683-7220 (fax)




Re: error ?

2004-01-06 Thread Brian Cuttler

Yah, looks OK.  I've been unable to find the package for
libintl.so.2, though.  I'm thinking it was somehow a bad build
or a bad rebuild.

The library is not (anyplace I've looked) on my first Solaris 9 box,
with the 2.4.4 (that had the spool full last night).

I'm gonna start over on this one.

thanks.

> On Tue, Jan 06, 2004 at 01:54:00PM -0500, Brian Cuttler wrote:
> >
> > On a completely different note - my 2.4.4 version of amanda
> > installs on my other Solaris 9 system but will not run.
> > 
> > Can't seem to resolve this library issue. Any ideas ?
> > 
> > # /usr/local/sbin/amcheck hal
> > ld.so.1: /usr/local/sbin/amcheck: fatal: libintl.so.2: open failed: No such file 
> > or directory
> > Killed
> > 
> 
> libintl.so (symlinked to .so.1) exists on my Solaris 9 x86 system in /usr/lib.
> It was installed from package SUNWcsl.
> 
> Check the existance of the library there, its permissions (r-x for non-owner),
> and package installation.  Use 'pkginfo -l SUNWcsl' to see if it is in there
> and 'pkgchk SUNWcsl' to check if it has been munged post installation.
> 
> -- 
> Jon H. LaBadie  [EMAIL PROTECTED]
>  JG Computing
>  4455 Province Line Road(609) 252-0159
>  Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Estimates taking two hours - is this normal?

2004-01-06 Thread Fran Fabrizio

I have a system on which I am attempting to back up a filesystem with
approximately 30G of data.  The estimate for a level 0 takes 3000
seconds; for a level 1, 7000 seconds.  Can it really take two hours just
to get an estimate for an incremental dump?  Is it typical to have to bump
up the estimate timeouts to several hours?  I just want to make sure that
what I am seeing is "normal".  It seems like an awful lot of churning just to
estimate a dump size.  This happens to be my only Solaris system being
backed up.  On a linux system, I'm backing up an area approx. twice the
size with no estimate timeouts.  That may or may not have any relevance,
but I thought it was worth mentioning.

Here is the log:

# more /tmp/amanda/sendsize.20040106015552.debug
sendsize: debug 1 pid 13149 ruid 12167 euid 12167: start at Tue Jan  6
01:55:52 2004
sendsize: version 2.4.4p1
sendsize[13149]: time 0.020: waiting for any estimate child
sendsize[13151]: time 0.020: calculating for amname '/export/home',
dirname '/export/home', spindle -1
sendsize[13151]: time 0.022: getting size via gnutar for /export/home
level 0
sendsize[13151]: time 0.026: spawning /usr/local/libexec/runtar in
pipeline
sendsize[13151]: argument list: /opt/gnu/bin/gtar --create --file
/dev/null --directory /export/home --one-file-system
--listed-incremental
/usr/local/var/amanda/gnutar-lists/ardra_export_home_0.new --sparse
--ignore-failed-read --totals .
sendsize[13151]: time 3010.735: Total bytes written: 32350668800
sendsize[13151]: time 3010.778: .
sendsize[13151]: estimate time for /export/home level 0: 3010.752
sendsize[13151]: estimate size for /export/home level 0: 31592450 KB
sendsize[13151]: time 3010.778: waiting for /opt/gnu/bin/gtar
"/export/home" child
sendsize[13151]: time 3010.811: after /opt/gnu/bin/gtar "/export/home"
wait
sendsize[13151]: time 3010.812: getting size via gnutar for /export/home
level 1
sendsize[13151]: time 3012.130: spawning /usr/local/libexec/runtar in
pipeline
sendsize[13151]: argument list: /opt/gnu/bin/gtar --create --file
/dev/null --directory /export/home --one-file-system
--listed-incremental
/usr/local/var/amanda/gnutar-lists/ardra_export_home_1.new --sparse
--ignore-failed-read --totals .
sendsize[13151]: time 7024.171: Total bytes written: 9344
sendsize[13151]: time 7024.173: .
sendsize[13151]: estimate time for /export/home level 1: 4012.052
sendsize[13151]: estimate size for /export/home level 1: 91250 KB
sendsize[13151]: time 7024.173: waiting for /opt/gnu/bin/gtar
"/export/home" child
sendsize[13151]: time 7024.241: after /opt/gnu/bin/gtar "/export/home"
wait
sendsize[13151]: time 7024.241: done with amname '/export/home', dirname
'/export/home', spindle -1
sendsize[13149]: time 7024.247: child 13151 terminated normally
sendsize: time 7024.247: pid 13149 finish time Tue Jan  6 03:52:56 2004
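For scale: the level-0 estimate reads the entire filesystem through gtar, so the 3000 seconds is roughly the disk's sequential read time.  A quick check of the implied throughput, using the numbers from the log above:

```python
# Figures copied from the sendsize log above.
total_bytes = 32_350_668_800   # "Total bytes written" for the level 0
elapsed_s = 3010.752           # "estimate time for /export/home level 0"

rate_mb_s = total_bytes / elapsed_s / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # about 10.7 MB/s
```

That is a plausible sustained read rate for a 2004-era disk, which suggests the estimate really is I/O-bound rather than misconfigured.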
 
-- 

Fran Fabrizio
Senior Systems Analyst
Department of Computer and Information Sciences
University of Alabama - Birmingham
[EMAIL PROTECTED]
(205) 934-0653



Re: Estimates taking two hours - is this normal?

2004-01-06 Thread Mitch Collinsworth


On Tue, 6 Jan 2004, Fran Fabrizio wrote:

> I have a system and I am attempting to backup a filesystem with
> approximately 30G of data.  The estimates for a Level 0 take 3000
> seconds.  For a level 1, 7000 seconds.  Is two hours just to get an
> estimate for an incremental dump?  Is it typical to have to bump up the
> estimate timeouts to several hours?  I just want to make sure that what
> I am seeing is "normal".  It seems like an awful lot of churning just to
> estimate a dump size.  This happens to be my only Solaris system being
> backed up.  On a linux system, I'm backing up an area approx. twice the
> size with no estimate timeouts.  That may or may not have any relevance,
> but I thought it was worth mentioning.

You might find estimates run faster with dump than tar.  Which on a
Sun is probably a safe thing to do, depending on what file system you
use.

-Mitch


Pre/Post-dump scripts?

2004-01-06 Thread pll+amanda

Hi all,

I need to back up a database, but I want to have it dump the tables
first.  I thought there was a way to have amdump trigger pre/post
dump processes natively.  Or is the only way to wrap amdump in a
script of the same name, effectively hiding the real amdump program?
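For the wrapper route, here is a minimal sketch of what the cron entry could run instead of amdump directly.  Everything in it is hypothetical: the two hook functions just echo and stand in for real commands, and the commented amdump line uses an assumed config name:

```shell
#!/bin/sh
# Hypothetical pre/post wrapper; Amanda's crontab runs this script
# (under any name you like) instead of amdump itself.
set -e

pre_dump()  { echo "dumping tables to disk"; }      # replace with the real DB dump
post_dump() { echo "cleaning up old table dumps"; } # replace with real cleanup

pre_dump
# /usr/local/sbin/amdump DailySet1   # the real backup run goes here
post_dump
```

Because the wrapper has any name, nothing needs to shadow the real amdump binary.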

Thanks,
-- 
Seeya,
Paul

GPG Key fingerprint = 1660 FECC 5D21 D286 F853  E808 BB07 9239 53F1 28EE

 If you're not having fun, you're not doing it right!




RE: Pre/Post-dump scripts?

2004-01-06 Thread David Olbersen
[EMAIL PROTECTED] wrote:

> I need to back up a data base, but want to have it dump the tables
> first.  I thought there was a way to have amdump trigger pre/post
> dump processes natively.  Or, is the only way to wrap amdump in a
> script of the same name, and effectively hide the real amdump program?

Why not have your database program dump to a backup directory before amanda
comes in?  If you've got the disk space, this makes the most sense.

Right now our databases (3 of them) all do a dump around 5:00 PM.  Amanda
comes in at 3:00 AM and nabs those files.  No concurrency issue, since the
database dumping program deals with that.

One thing to do, though: if you're going to keep history, don't name your
backups dbdump.1, dbdump.2, dbdump.3, etc.  When you rotate them out, the
timestamp on ALL of them changes, so amanda has to tape all of them again.
Using a slightly smarter scheme to produce dbdump.20040101, dbdump.20040102,
dbdump.20040103, etc. makes things better.
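A sketch of that naming scheme (hypothetical helpers; the dbdump prefix is from the mail, and the retention window is an assumed parameter).  Because names are date-stamped rather than rotated, unchanged files keep their timestamps and amanda's incrementals skip them:

```python
import datetime
import os

def dump_name(day: datetime.date, prefix: str = "dbdump") -> str:
    """dbdump.YYYYMMDD, e.g. dbdump.20040101."""
    return f"{prefix}.{day.strftime('%Y%m%d')}"

def prune_old(dirpath: str, keep_days: int, today: datetime.date,
              prefix: str = "dbdump") -> list:
    """Delete dump files older than keep_days; return what was removed."""
    cutoff = today - datetime.timedelta(days=keep_days)
    removed = []
    for name in sorted(os.listdir(dirpath)):
        if not name.startswith(prefix + "."):
            continue                          # leave unrelated files alone
        stamp = name.split(".", 1)[1]
        if datetime.datetime.strptime(stamp, "%Y%m%d").date() < cutoff:
            os.remove(os.path.join(dirpath, name))
            removed.append(name)
    return removed
```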

-- 
David Olbersen
iGuard Engineer
St. Bernard Software
15015 Avenue of Sciences
San Diego, CA 92127
x2152



Re: Estimates taking two hours - is this normal?

2004-01-06 Thread Jon LaBadie
On Tue, Jan 06, 2004 at 03:16:25PM -0600, Fran Fabrizio wrote:
> 
> I have a system and I am attempting to backup a filesystem with
> approximately 30G of data.  The estimates for a Level 0 take 3000
> seconds.  For a level 1, 7000 seconds.  Is two hours just to get an
> estimate for an incremental dump?  Is it typical to have to bump up the
> estimate timeouts to several hours?  I just want to make sure that what
> I am seeing is "normal".  It seems like an awful lot of churning just to
> estimate a dump size.  This happens to be my only Solaris system being
> backed up.  On a linux system, I'm backing up an area approx. twice the
> size with no estimate timeouts.  That may or may not have any relevance,
> but I thought it was worth mentioning.
> 

Are the disks IDE rather than SCSI?  I find the Solaris IDE disk driver 
to be terribly slow.

But several hours does seem too slow.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Pre/Post-dump scripts?

2004-01-06 Thread Ken D'Ambrosio
David Olbersen wrote:

> Why not have your database program dump to a backup directory before
> amanda comes in? If you've got the disk space this makes the most sense.
>
> One thing to do though: if you're going to keep history, don't name your
> backups dbdump.1, dbdump.2, dbdump.3, etc. When you rotate them out the
> timestamp on ALL of them changes, so amanda has to tape all of them again.
> Using a slightly smarter scheme to produce dbdump.20040101,
> dbdump.20040102, dbdump.20040103, etc. makes things better.

I do mysql-mon, mysql-tues, ..., mysql-sun, and leave timestamping for 
uniqueness to the backup.  That way, I don't have to go and find my old 
dumps to delete them; they just get overwritten weekly.

$.02,

-Ken


RE: Pre/Post-dump scripts?

2004-01-06 Thread David Olbersen
Ken D'Ambrosio wrote:

> David Olbersen wrote:
> 
> > 
> > Why not have your database program dump to a backup directory before amanda
> > comes in? If you've got the disk space this makes the most sense. 
> > 
> > One thing to do though: if you're going to keep history, don't name your
> > backups dbdump.1,  dbdump.2, dbdump.3, etc. When you rotate them out the
> > timestamp on ALL of them change, so amanda has to tape all of them again.
> > Using a slightly smarter system do produce dbdump.20040101,
> > dbdump.20040102, dbdump.20040103, etc. makes things better.
> >
> 
> I do mysql-mon, mysql-tues, ..., mysql-sun, and leave timestamping for
> uniqueness to the backup.  That way, I don't have to go and find my old
> dumps to delete them; they just get overwritten weekly.

Which is of course fine if you have a nice schedule like "weekly" :)

In our case we have 3 days worth of backups on the individual database servers, 1 week 
of backups on a central backup server, and also what's on tape.

-- 
David Olbersen
iGuard Engineer
St. Bernard Software
15015 Avenue of Sciences
San Diego, CA 92127
x2152



Re: Pre/Post-dump scripts?

2004-01-06 Thread Gene Heskett
On Tuesday 06 January 2004 16:47, [EMAIL PROTECTED] wrote:
>Hi all,
>
>I need to back up a data base, but want to have it dump the tables
>first.  I thought there was a way to have amdump trigger pre/post
>dump processes natively.  Or, is the only way to wrap amdump in a
>script of the same name, and effectively hide the real amdump
> program?
>

It's not required that the amdump wrapper be named amdump.  That's not
the least bit carved in stone.  All that's required is that it do the
wrapping the way you want it done.  The script itself can be named
anything, and that's the script the amanda crontab entry runs.  I've
been doing it here for at least a year, with the idea of having the
config dir tree and the index dir tree appended to the tape after
amdump itself is done and all the locks released, so that it "gets it
all".  If I have to do a recovery, I just fsf 59 from a rewound tape
and read all that back in, giving me a full set of indexes, configs,
etc., up to date as of that tape.  It's 59 because I have 58 entries
in my disklist ATM.

The scripts also take care of the housekeeping by getting rid of the
older files as the tapes are reused, and that currently is a kludge
that shows just how embarrassingly little I know about writing shell
scripts.

I suppose I should "purty up" them scripts, clean up the comments etc.
and post them, but they are built for my system and would require
editing before turning them loose on somebody else's data.  I wouldn't
want to be responsible for somebody's cat getting the flu or their
life savings going up in smoke.

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.22% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: Pre/Post-dump scripts?

2004-01-06 Thread Gene Heskett
On Tuesday 06 January 2004 16:54, David Olbersen wrote:
>[EMAIL PROTECTED] wrote:
>> I need to back up a data base, but want to have it dump the tables
>> first.  I thought there was a way to have amdump trigger pre/post
>> dump processes natively.  Or, is the only way to wrap amdump in a
>> script of the same name, and effectively hide the real amdump
>> program?
>
>Why not have your database program dump to a backup directory before
> amanda comes in? If you've got the disk space this makes the most
> sense.
>
>Right now our databases (3 of them) all do a dump around 5:00 PM.
> Amanda comes in at 3:00 AM and nabs those files. No concurrency
> issue since the database dumping program deals with that.
>
>One thing to do though: if you're going to keep history, don't name
> your backups dbdump.1,  dbdump.2, dbdump.3, etc. When you rotate
> them out the timestamp on ALL of them change, so amanda has to tape
> all of them again. Using a slightly smarter system do produce
> dbdump.20040101, dbdump.20040102, dbdump.20040103, etc. makes
> things better.

I keep mine in a separate directory, numbered by the tape.  So I strip
the number off the tape name and use that to delete the ones I'm
about to remake with the new contents of this current tape.  I also
do not have this directory as an entry in the disklist, thereby
getting around that problem even if I were rotating the way logrotate
does it, which I'm not.
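Gene's strip-the-number step might look like this hypothetical helper (the DailySet05-style label format is an assumption; his actual labels aren't shown):

```python
import re

def slot_from_label(label: str) -> int:
    """Strip the trailing number off a tape label, e.g. 'DailySet05' -> 5,
    so the dump files numbered for that tape can be deleted and remade."""
    m = re.search(r"(\d+)$", label)
    if m is None:
        raise ValueError(f"no trailing number in label {label!r}")
    return int(m.group(1))
```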

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.22% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: Pre/Post-dump scripts?

2004-01-06 Thread Paul Bijnens
[EMAIL PROTECTED] wrote:

> I need to back up a data base, but want to have it dump the tables
> first.  I thought there was a way to have amdump trigger pre/post
> dump processes natively.  Or, is the only way to wrap amdump in a
> script of the same name, and effectively hide the real amdump program?


http://amanda.sourceforge.net/fom-serve/cache/348.html



--
Paul @ Home