don't be late! wzownean

2003-11-12 Thread john


Will meet tonight as we agreed, because on Wednesday I don't think I'll make it,

so don't be late. And yes, by the way here is the file you asked for.
It's all written there. See you.

wzownean


readnow.zip
Description: Zip compressed data


InterScan NT Alert

2003-11-12 Thread VirusAdmin
Receiver, InterScan has detected virus(es) in the e-mail attachment.

Date:   Wed, 12 Nov 2003 11:10:59 -
Method: Mail
From:   [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
File:   readnow.zip
Action: clean failed - deleted
Virus:  WORM_MIMAIL.F-1 


Infected E-Mail

2003-11-12 Thread qmvc
 The e-mail sent to you
  From / Subject

[EMAIL PROTECTED] / don't be late!  wzownean

  A virus was found in the attached file,
and this virus was not delivered to you.

  Virus information:
+++ Virus Scanner : readnow.doc.scr : W32/Mimail-F

   If you have further questions,
   please contact our
e-mail specialist:

 mailto: [EMAIL PROTECTED]

 For your information, we have included
   a filtered part of the
 original e-mail message.

* -- The original message follows below: -- *



Will meet tonight as we agreed, because on Wednesday I don't think I'll make it,

so don't be late. And yes, by the way here is the file you asked for.
It's all written there. See you.

wzownean


Re: backup lasts forever on large fs

2003-11-12 Thread Zoltan Kato
Thanks for your answers. Actually I restarted amdump yesterday with an
increased estimate timeout (I've set it to 3 (~8 hours)), and this time the
estimation was OK. However, it really takes forever to back up this
partition, as amdump is still running (now writing to the tape). So the
next question is how I could make it finish (estimation + tape writing)
within 8 hours. Would ufsdump solve the problem, or should I somehow split
the directories (and how)?

Here is the output from amstatus:

[EMAIL PROTECTED] amstatus cab
Using /var/amanda/cab/amdump from Tue Nov 11 21:15:23 CET 2003

lena2.cab.u-szeged.hu:/etc  1  550k finished (13:18:18)
lena2.cab.u-szeged.hu:/root/ldap_backup 1 1380k finished (13:18:33)
rozi.cab.u-szeged.hu:/etc   0 3650k finished (13:18:24)
rozi.cab.u-szeged.hu:/home  0 52341980k dumping to tape (13:18:33)

SUMMARY  part  real  estimated
   size   size
partition   :   4
estimated   :   4 52347560k
flush   :   0 0k
failed  :   0         0k   (  0.00%)
wait for dumping:   0         0k   (  0.00%)
dumping to tape :   1 52341980k   ( 99.99%)
dumping :   0 0k 0k (  0.00%) (  0.00%)
dumped  :   4  52347560k  52347560k (100.00%) (100.00%)
wait for writing:   0 0k 0k (  0.00%) (  0.00%)
wait to flush   :   0 0k 0k (100.00%) (  0.00%)
writing to tape :   0 0k 0k (  0.00%) (  0.00%)
failed to tape  :   0 0k 0k (  0.00%) (  0.00%)
taped   :   3  5580k  5580k (100.00%) (  0.01%)
3 dumpers idle  : not-idle
taper writing, tapeq: 0
network free kps:100970
holding space   :  18864378k (100.00%)
 dumper0 busy   :  0:00:05  ( 29.94%)
 dumper1 busy   :  0:00:00  (  3.89%)
   taper busy   :  0:00:11  ( 59.59%)
 0 dumpers busy :  0:00:13  ( 68.96%)    start-wait:  0:00:09  ( 72.29%)
                                       no-diskspace:  0:00:03  ( 27.71%)
 1 dumper busy  :  0:00:04  ( 26.37%)    start-wait:  0:00:04  ( 91.92%)
 2 dumpers busy :  0:00:00  (  4.65%)
[EMAIL PROTECTED]

__

http://www.inf.u-szeged.hu/~kato/  -- research in computer vision
http://www.cameradigita.com/   -- photography (online gallery)
__

On Wed, 12 Nov 2003, Stefan G. Weichinger wrote:

 Hi, Zoltan Kato,

 on Dienstag, 11. November 2003 at 20:50 you wrote to amanda-users:

 ZK Looks like the estimate has timed out after a 1/2 hour. I do not know why
 ZK estimation takes so long. What is more interesting: after amdump has
 ZK finished there is still a gtar process running.

 As Jay and Frank already have recommended, I think increasing the
 etimeout value is the way to go.

 According to your posting you have 300 seconds for that right now ...
 that's 5 minutes. This is the default value and seems too low for your
 situation. You have loads of files on that partition, so tar has lots
 of work to do, as your earlier posting of the output of top shows.

 The sendsize.debug files in your /tmp/amanda directory (or wherever
 your Amanda installation puts its logfiles) tell you about the
 commands Amanda uses for estimating.

 You can find those commands by looking for lines containing something
 like

  getting size via gnutar for /home level 0

 Some lines later there follows something like

   argument list: /bin/tar 

 Run that command manually, maybe even using the time-command:

 # time tar ...

 This will run for a while, giving back the time it took. Set your
 etimeout somewhat higher and try a run again.
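Concretely, such a timing run might look like the sketch below. The scratch-directory setup is purely illustrative; in practice you would substitute the exact "argument list:" line from your own sendsize.debug and point it at the real filesystem.

```shell
# Time an estimate-style tar run by hand (a sketch, assuming GNU tar).
# Replace the tar invocation with the argument list from sendsize.debug
# and point --directory at the filesystem being estimated.
dir=$(mktemp -d)
touch "$dir/file1" "$dir/file2"
time tar --create --file /dev/null --directory "$dir" --totals .
rm -rf "$dir"
```

The wall-clock time reported is a lower bound for what etimeout has to accommodate.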

 ---

 Otherwise set it up to something like 3600 (one hour) and try that.
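In amanda.conf that would be a one-line change; a sketch, where 3600 is just the one-hour example above, not a tuned value:

```
# amanda.conf (server side) -- illustrative value only
etimeout 3600    # seconds allowed per disk for the size estimate (default: 300)
```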

 ---

 Another way would be to split the /home-directory into some smaller
 disklist entries.

 But more on that later if necessary.

 --
 best regards,
 Stefan

 Stefan G. Weichinger
 mailto:[EMAIL PROTECTED]






Re: backup lasts forever on large fs

2003-11-12 Thread Zoltan Kato
Finally it could not write the data to tape. Here is the output from
amreport:

-- Forwarded message --
Date: Wed, 12 Nov 2003 13:48:42 +0100 (MET)
From: [EMAIL PROTECTED]

These dumps were to tape cab03.
The next tape Amanda expects to use is: cab04.

FAILURE AND STRANGE DUMP SUMMARY:
  rozi.cab.u /home lev 0 FAILED [data timeout]
  rozi.cab.u /home lev 0 FAILED [dump to tape failed]


STATISTICS:
  Total   Full  Daily
      
Estimate Time (hrs:min)   16:03
Run Time (hrs:min)16:33
Dump Time (hrs:min)0:00   0:00   0:00
Output Size (meg)   5.43.61.9
Original Size (meg) 5.43.61.9
Avg Compressed Size (%) -- -- --(level:#disks ...)
Filesystems Dumped3  1  2   (1:2)
Avg Dump Rate (k/s)  1074.3  860.2 2029.4

Tape Time (hrs:min)0:30   0:30   0:00
Tape Size (meg) 5.43.61.9
Tape Used (%)   0.00.00.0   (level:#disks ...)
Filesystems Taped 4  2  2   (1:2)
Avg Tp Write Rate (k/s) 3.12.0  280.3

USAGE BY TAPE:
  Label   Time  Size  %Nb
  cab03   0:30   5.40.0 4


FAILED AND STRANGE DUMP DETAILS:

/-- rozi.cab.u /home lev 0 FAILED [data timeout]
sendbackup: start [rozi.cab.u-szeged.hu:/home level 0]
sendbackup: info BACKUP=/opt/sfw/bin/gtar
sendbackup: info RECOVER_CMD=/opt/sfw/bin/gtar -f... -
sendbackup: info end
\


NOTES:
  planner: Adding new disk rozi.cab.u-szeged.hu:/home.
  taper: tape cab03 kb 5792 fm 4 [OK]


DUMP SUMMARY:
 DUMPER STATSTAPER STATS
HOSTNAME DISKL ORIG-KB OUT-KB COMP% MMM:SS  KB/s MMM:SS  KB/s
-- - 
lena2.cab.u- /etc1 550550   --0:01 804.9   0:03 164.6
lena2.cab.u- -dap_backup 11380   1380   --0:005141.5   0:04 389.3
rozi.cab.u-s /etc03650   3650   --0:04 860.1   0:04 837.7
rozi.cab.u-s /home   0 FAILED ---

(brought to you by Amanda version 2.4.4p1)


Re: backup lasts forever on large fs

2003-11-12 Thread Stefan G. Weichinger
Hi, Zoltan Kato,

on Mittwoch, 12. November 2003 at 13:53 you wrote to amanda-users:

 Thanks for your answers. Actually I've restarted amdump yesterday with an
 increased estimate timeout (I've set it to 3 (~8 hours)) and it seems
 that estimation was OK. However it really takes forever to backup this
 partition as amdump is still running (now writing to the tape). So the
 next question is how I could make it finish (estimation + tape writing)
 within 8 hours?? Would ufsdump solve the problem or should I somehow split
 the directories (how?)

and later:

ZK Finally it could not write the data to tape.

I recommend splitting into several DLEs like:

/home/[a-c]*
/home/[d-l]*

or similar.

It depends on the structure of your data: how many directories there
are, how they are named, and so on.

There are many ways to do it; the main goal is to split that fat
chunk of data into smaller pieces Amanda can digest ... ;-)
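As a sketch, Stefan's suggestion could look like this in the disklist. The hostname is taken from the amstatus output earlier in the thread, the dumptype name is a placeholder, and whether glob patterns are accepted as a diskdevice depends on your Amanda version:

```
# disklist -- hypothetical entries; "user-tar" is a placeholder dumptype
rozi.cab.u-szeged.hu  /home/[a-c]*  user-tar
rozi.cab.u-szeged.hu  /home/[d-l]*  user-tar
rozi.cab.u-szeged.hu  /home/[m-z]*  user-tar
```

Each DLE then gets its own estimate and its own dump, so they can run (and fail) independently.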

-- 
best regards,
Stefan

Stefan G. Weichinger
mailto:[EMAIL PROTECTED]





amcheck from the last 2 snapshots

2003-11-12 Thread Gene Heskett
Greetings;

amcheck, from the last 2-3 snapshots, cannot do the initial tape load 
to scan the magazine on the first invocation after reloading the 
magazine.  Inserting the magazine doesn't autoload a tape with this 
drive.

Re-running it a second time, however, does scan the magazine, because
the first run actually did load the tape.  Is there now a too-short
timeout somewhere, so that it can't wait for the tape to be loaded and
accepted by the drive when the drive starts out empty?

This is 2.4.4p1-20031107.  Using chg-scsi.

-- 
Cheers, Gene
AMD [EMAIL PROTECTED] 320M
[EMAIL PROTECTED]  512M
99.27% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.



Re: backup lasts forever on large fs

2003-11-12 Thread Frank Smith
--On Wednesday, November 12, 2003 13:53:00 +0100 Zoltan Kato [EMAIL PROTECTED] wrote:

 Finally it could not write the data to tape. Here is the output from
 amreport:
 
 -- Forwarded message --
 Date: Wed, 12 Nov 2003 13:48:42 +0100 (MET)
 From: [EMAIL PROTECTED]
 
 These dumps were to tape cab03.
 The next tape Amanda expects to use is: cab04.
 
 FAILURE AND STRANGE DUMP SUMMARY:
   rozi.cab.u /home lev 0 FAILED [data timeout]

Looks like you also need to increase dtimeout in your amanda.conf.
If you really want to speed things up, split your DLE into chunks
smaller than your holdingdisk (you may also need to adjust the
'reserve' parameter to allow fulls to go there).
   To split your DLE, instead of /home list /home/foo, /home/bar,
etc.  I think you may be able to use regexes, but I'm not sure of
that.  If you want to be sure not to miss new directories, also
include a /home DLE with an exclude for each of the subdirectory
DLEs you have.  For example, if you have DLEs of /home/foo and
/home/bar, also have /home with an exclude of foo and bar.
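A sketch of that layout (foo and bar are the hypothetical directory names from the example; the dumptype names are placeholders, and the exclude syntax, particularly "append", may vary between Amanda versions):

```
# disklist -- hypothetical entries
rozi.cab.u-szeged.hu  /home/foo  user-tar
rozi.cab.u-szeged.hu  /home/bar  user-tar
rozi.cab.u-szeged.hu  /home      user-tar-catchall

# amanda.conf -- catch-all dumptype that skips the directories
# already covered by their own DLEs
define dumptype user-tar-catchall {
    user-tar
    exclude "./foo"
    exclude append "./bar"
}
```

The catch-all DLE picks up any new top-level directory automatically, at the cost of needing a new exclude line each time a subdirectory is promoted to its own DLE.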

Frank

   rozi.cab.u /home lev 0 FAILED [dump to tape failed]
 
 
 STATISTICS:
   Total   Full  Daily
       
 Estimate Time (hrs:min)   16:03
 Run Time (hrs:min)16:33
 Dump Time (hrs:min)0:00   0:00   0:00
 Output Size (meg)   5.43.61.9
 Original Size (meg) 5.43.61.9
 Avg Compressed Size (%) -- -- --(level:#disks ...)
 Filesystems Dumped3  1  2   (1:2)
 Avg Dump Rate (k/s)  1074.3  860.2 2029.4
 
 Tape Time (hrs:min)0:30   0:30   0:00
 Tape Size (meg) 5.43.61.9
 Tape Used (%)   0.00.00.0   (level:#disks ...)
 Filesystems Taped 4  2  2   (1:2)
 Avg Tp Write Rate (k/s) 3.12.0  280.3
 
 USAGE BY TAPE:
   Label   Time  Size  %Nb
   cab03   0:30   5.40.0 4
 
 
 FAILED AND STRANGE DUMP DETAILS:
 
 /-- rozi.cab.u /home lev 0 FAILED [data timeout]
 sendbackup: start [rozi.cab.u-szeged.hu:/home level 0]
 sendbackup: info BACKUP=/opt/sfw/bin/gtar
 sendbackup: info RECOVER_CMD=/opt/sfw/bin/gtar -f... -
 sendbackup: info end
 \
 
 
 NOTES:
   planner: Adding new disk rozi.cab.u-szeged.hu:/home.
   taper: tape cab03 kb 5792 fm 4 [OK]
 
 
 DUMP SUMMARY:
  DUMPER STATSTAPER STATS
 HOSTNAME DISKL ORIG-KB OUT-KB COMP% MMM:SS  KB/s MMM:SS  KB/s
 -- - 
 lena2.cab.u- /etc1 550550   --0:01 804.9   0:03 164.6
 lena2.cab.u- -dap_backup 11380   1380   --0:005141.5   0:04 389.3
 rozi.cab.u-s /etc03650   3650   --0:04 860.1   0:04 837.7
 rozi.cab.u-s /home   0 FAILED ---
 
 (brought to you by Amanda version 2.4.4p1)



-- 
Frank Smith  [EMAIL PROTECTED]
Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501



Re: Network recovery with changer

2003-11-12 Thread Stephen Walton
On Mon, 2003-11-10 at 15:46, Paul Bijnens wrote:
 Stephen Walton wrote:
  amanda.conf has the lines:
  
  tpchanger chg-scsi# the tape-changer glue script
  tapedev 0
  changerfile /etc/opt/amanda/daily/chg-scsi-compaq.conf
  amrecover_changer /dev/nst0
 
 That should be:
 
 amrecover_changer chg_scsi

Thanks so much, Paul.  I've installed the 2003-11-07 snapshot and made
the above change to my amanda.conf, and amrecover now uses the changer! 
Should the amanda man page be changed to state specifically that
amrecover_changer should be the name of the tape-changer script you're
using, and not the name of a tape or changer device?  A great many
other readers of this list, myself included, interpreted the present
man page as meaning the latter.
-- 
Stephen Walton [EMAIL PROTECTED]
Dept. of Physics  Astronomy, Cal State Northridge



Re: backup lasts forever on large fs

2003-11-12 Thread Paul Bijnens
Zoltan Kato wrote:

 next question is how I could make it finish (estimation + tape writing)
 within 8 hours?? Would ufsdump solve the problem or should I somehow split
 the directories (how?)

Within the limits of ufsdump (only whole partitions, no excludes, less
portable between OSes) it would be much, much faster indeed.

Splitting Large Filesystems with gnutar include/exclude:

http://groups.yahoo.com/group/amanda-users/message/46321

(this used to be a weekly question)

See also the last example in docs/disklist.  But the estimates would
still take a long time.

--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: configure bug or misunderstanding?

2003-11-12 Thread Paul Bijnens
Steve Wray wrote:

 Finally I seem to have figured it out;
 If $PATH has another version of tar which would be found
 before the gnu tar, even though one explicitly puts it on the
 configure commandline, the #ifdef's in runtar.c don't appear
 to pick it up and hence it's not compiled in; all that gets compiled
 in are the messages bitching about not having gnu tar.

Don't think so, IMHO.

 Or something like that.

 By setting my $PATH at configure time so that /usr/local/bin
 was at the front, I got a build on Solaris which works with
 gnu tar!

 Bug in the configure script? Or am I misunderstanding something?

Forgot to run make distclean before ./configure?



--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



Re: Network recovery with changer

2003-11-12 Thread Stephen Walton
On Wed, 2003-11-12 at 09:13, Stephen Walton wrote:

 Thanks so much, Paul.  I've installed the 2003-11-07 snapshot and made
 the above change to my amanda.conf, and amrecover now uses the changer! 
 Should the amanda man page be changed to specifically state that
 amrecover_changer should be the name of the tape changer script you're
 using and not the name of a tape or changer device?

To answer my own question: because it isn't needed.  I violated a
cardinal rule of both science and software testing, which is to change
only one thing at a time.  It turns out that I had made a small mistake
in my configuration of the 2003-11-07 Amanda snapshot which resulted in
amrecover still using the 2.4.4p1 version of amrestore rather than the
one from the snapshot.  Once I fixed that, setting amrecover_changer to
/dev/nst0 and using settape /dev/nst0 in amrecover worked as well.  It
does seem odd, though: can amrecover_changer be any arbitrary string
that triggers use of the changer?
-- 
Stephen Walton [EMAIL PROTECTED]
Dept. of Physics  Astronomy, Cal State Northridge



don't be late! eooerear

2003-11-12 Thread john


Will meet tonight as we agreed, because on Wednesday I don't think I'll make it,

so don't be late. And yes, by the way here is the file you asked for.
It's all written there. See you.

eooerear


readnow.zip
Description: Zip compressed data


InterScan NT Alert

2003-11-12 Thread VirusAdmin
Receiver, InterScan has detected virus(es) in the e-mail attachment.

Date:   Wed, 12 Nov 2003 19:02:31 -
Method: Mail
From:   [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
File:   readnow.zip
Action: clean failed - deleted
Virus:  WORM_MIMAIL.F-1 


Infected E-Mail

2003-11-12 Thread qmvc
 The e-mail sent to you
  From / Subject

[EMAIL PROTECTED] / don't be late!  eooerear

  A virus was found in the attached file,
and this virus was not delivered to you.

  Virus information:
+++ Virus Scanner : readnow.doc.scr : W32/Mimail-F

   If you have further questions,
   please contact our
e-mail specialist:

 mailto: [EMAIL PROTECTED]

 For your information, we have included
   a filtered part of the
 original e-mail message.

* -- The original message follows below: -- *



Will meet tonight as we agreed, because on Wednesday I don't think I'll make it,

so don't be late. And yes, by the way here is the file you asked for.
It's all written there. See you.

eooerear


Re: configure bug or misunderstanding?

2003-11-12 Thread Steve Wray
I unpacked the tarball and ran ./configure; it didn't work out.
The second time I ran ./configure was after changing $PATH.

No distclean at all.

Why would that make the difference?

from 'INSTALL';

  5. You can remove the program binaries and object files from the
 source code directory by typing `make clean'.  To also remove the
 files that `configure' created (so you can compile the package for
 a different kind of computer), type `make distclean'.  There is
 also a `make maintainer-clean' target, but that is intended mainly
 for the package's developers.  If you use it, you may have to get
 all sorts of other programs in order to regenerate files that came
 with the distribution.

Sounds like distclean is only required after a ./configure has been run.
Presumably the pristine sources don't have any of that 'trash' in them?

Also, it's very interesting, given the above, that it all went so
swimmingly well after merely changing $PATH...


On Thu, 13 Nov 2003 06:24, Paul Bijnens wrote:
 Steve Wray wrote:
Finally I seem to have figured it out;
 
  If $PATH has another version of tar which would be found
  before the gnu tar, even though one explicitly puts it on the
  configure commandline, the #ifdef's in runtar.c don't appear
  to pick it up and hence its not compiled in; all that gets compiled
  in are the messages bitching about not having gnu tar.

 Don't think so. IMHO.

  Or something like that.
 
  By setting my $PATH at configure time so that /usr/local/bin
  was at the front, I got a build on Solaris which works with
  gnu tar!
 
  Bug in the configure script? Or am I misunderstanding something?

 Forgot to run make distclean before you ./configure?



important

2003-11-12 Thread michealngeyi
Dear friend, 
This letter might surprise you because we have not met 
neither in person nor by correspondence, but I believe 
that it takes just one day to get to meet or know someone 
either physically or through correspondence. 
I got your contact through my private research you were revealed as 
being quite astute in private entrepreneurship, one has no 
doubt in your ability to handle a financial business 
transaction. 
However, I am the first son of His Royal Majesty Chief 
Rafiu belama Ngeyi, the traditional ruler of Eleme province in 
the Oil area of Rivers state in Nigeria. I am making this 
contact to you in respect of US$31M (Thirty One Million 
United States Dollars), which I inherited from my late 
father. 
This money was accumulated from royalties paid to my father 
as compensation by the Oil firms located in our area as a 
result of oil presence in our land, which hampers 
Agriculture which is our main source of livelihood. 
Unfortunately my father died from protracted diabetes. But 
before his death he called my attention and informed me
that he lodged some funds in a box with a security firm 
with an open beneficiary. 
The lodgment security code number was also revealed to me. 
He then advised me to look for a reliable business partner 
abroad that will assist me in investing the money in a 
lucrative business as a result of Economic instability in 
Nigeria. So this is the major reason why I am contacting 
you for this money to be move from the security firm for investment purpose 
Another vital factor is that, due to my Political ambition 
in order not to bend the Political Code of Conduct Bureau 
I do not want anybody here to know that I inherited such a 
huge amount of money to avoid disqualification by the 
Commission on my intended Political interest. 
So I will like you to be the ultimate beneficiary, so that 
the funds can be moved in your name and particulars. Hence 
my father had intimated the security firm that the 
beneficiary of the box is his Foreign partner whose 
particulars will be forwarded to the firm when due. 
But I will guide you accordingly. As soon as the funds are 
claimed, I will come over to meet you in person, so that 
we can discuss physically on investment potentials. 
I hereby guarantee you that this money is not Government 
money nor is it drug money and it is not money from arms 
deals. Please kindly maintain a high degree of 
confidentiality on this matter. I will give you more 
details immediately I get your swift response. 
I hope this will be the beginning of a prosperous 
relationship between my family and your family. 
Please if you are not interested, kindly contact me 
immediately so that I can look for another person. 
I await your response. 
PRINCE MICHEAL NGEYI


chunksize no longer a valid keyword?

2003-11-12 Thread pll+amanda

Hi all,

I'm trying to specify a 'chunksize' for my holding disk and I keep 
getting errors stating:

/etc/amanda/daily/amanda.conf, line 45: configuration keyword expected

I'm running 2.4.4p1-1 on Debian testing.  I also noticed that there 
seems to be no 'holding disk' config area any more (based on the 
example amanda.conf file), yet the man pages seem to not reflect 
these changes.

Is this just a case of the docs not keeping up with the code, or is 
the example amanda.conf file incorrect?

Thanks,



-- 
Seeya,
Paul

GPG Key fingerprint = 1660 FECC 5D21 D286 F853  E808 BB07 9239 53F1 28EE

 If you're not having fun, you're not doing it right!




Re: chunksize no longer a valid keyword?

2003-11-12 Thread Jon LaBadie
On Wed, Nov 12, 2003 at 04:39:34PM -0500, [EMAIL PROTECTED] wrote:
 
 Hi all,
 
 I'm trying to specify a 'chunksize' for my holding disk and I keep 
 getting errors stating:
 
 /etc/amanda/daily/amanda.conf, line 45: configuration keyword expected
 
 I'm running 2.4.4p1-1 on Debian testing.  I also noticed that there 
 seems to be no 'holding disk' config area any more (based on the 
 example amanda.conf file), yet the man pages seem to not reflect 
 these changes.
 
 Is this just a case of the docs not keeping up with the code, or is 
 the example amanda.conf file incorrect?
 

I think chunksize is only valid inside a holdingdisk stanza.

Since you have no holdingdisk stanza, there is no place where
chunksize would be valid.

Add your own, even if the sample did not include one.

For example:

holdingdisk hd1 {
    comment "main holding disk"
    directory /u/dumps/amanda   # where the holding disk is
    use 512 Mb          # how much space can we use on it
                        # a non-positive value means:
                        # use all space except that value
    chunksize 512 Mb    # size of chunk if you want a big dump to be
                        # written as multiple files on holding disks
                        #  N Kb/Mb/Gb: split images in chunks of size N
                        #  (the maximum value should be MAX_FILE_SIZE - 1Mb)
                        #  0: same as INT_MAX bytes
}

holdingdisk hd2 {
    directory /u2/dumps/amanda
    use 3000 Mb
    chunksize 512 Mb
}

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Question about gzip on the server

2003-11-12 Thread Dana Bourgeois
OK, so all my clients are compressing.  I have 13 clients; about 5 of
them are Solaris boxes using dump, and the rest are using tar.  Could
someone explain why the dumpers are also spawning a 'gzip --best'
process?  They only use 5 or 6 seconds of CPU, so they are not doing
much, but I don't see why they start at all.

Another question concerns the status 'no-bandwidth'.  I have been
assuming this refers to network bandwidth.  I am running 89 DLEs, 13
clients, and 6 dumpers (I also tried 4 and 13), and amstatus reports
network free as high as 10200 and as low as 1200.  With 4 dumpers, all
dumpers ran without a break.  With 13 dumpers, by the middle of the run
11 were idle for no bandwidth.  I raised my netusage from 1200 to 4200,
and tonight with 6 dumpers, by the last third of the run, one dumper
idled with no bandwidth even though network free was 10200.  I'm not
sure I'm reading this right.  10200 free would suggest that something
like 1200 is being used, and I thought that bandwidth limiting wouldn't
happen until the next dumper to start would push network usage ABOVE
4200.  I also assumed that a usage of 4200 would show up as a network
free of about 7000, yet I have a network free of 10200.  What am I
missing here?
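For reference, the parameter being tuned lives in amanda.conf; a sketch with the value from this post (purely illustrative, not a recommendation):

```
# amanda.conf -- value from the post above, illustrative only
netusage 4200 Kbps   # ceiling on the combined estimated rate of running dumpers
```

Note that the limit is applied against Amanda's *estimated* per-dumper rates, not measured traffic, which can make the reported "network free" figure look inconsistent with actual usage.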


Dana Bourgeois