HP LTO Ultrium changer plus timeout problems

2002-01-28 Thread jason walton

Hi,
you couldn't by any chance send your conf file, could you?
We spent months trying to get the tape changer to work, but it
never happened. We just change tapes manually each day, waiting for the
day we purchase an expensive backup tool.
Also, is there a way to stop it from estimating? We run full backups
(130Gb) every night and the thing spends hours on its
compression estimates (even though compression is turned off), with the
result that it sometimes times out and doesn't back anything up
from large filesystems (50K+ directories). The simplest fix would
be to tell it to use "du -sk", but so far I haven't had the time.
Any help would be most welcome.
cheers

--- In [EMAIL PROTECTED], Gerhard den Hollander wrote:
>* Lylace Garcia-Blake (Wed, Feb 07, 2001 at 08:28:37AM -0800)
> > Happy Wednesday,
>
> > Does someone have a tapetype entry for the HP LTO Ultrium device
> > that they would be willing to post?
>
>no tapetype (at least not an official one)
>but the following works for me
>
>
>define tapetype LTO {
>comment "LTO ultrium"
>length 150 gbytes # conservative estimate
>filemark 1 byte # should work given above
>speed 30 mbytes # even more, but this isn't used in amanda yet
>lbl-templ "/volume/amanda/share/lto.ps"
>}
>
>
>Notes:
>if you use compression in amanda (instead of using drive compression) set
>this to 100G.
>
>With our data the LTO drive does a bit better than 1:1.5 compression
>(I assume 1.6 but I'd rather err on the side of caution)
>
>
>
>Gerhard, <@jasongeo.com> == The Acoustic Motorbiker ==
>--
>__O If your watch is wound, wound to run, it will
>=`\<, If your time is due, due to come, it will
>(=)/(=) Living this life, is like trying to learn latin
>in a chines






amanda in aix

2002-01-28 Thread Monserrat Seisdedos Nuñez



Hello:
I'm trying to compile amanda 2.4.2p2 on an AIX 4.3.3 system.
I installed gtar, gawk, gsed, perl and readline.
I ran the configure script, but when I run make I get the following error:

make[1]: Entering directory `/home/software/amanda-2.4.2p2/client-src'
/usr/bin/sh ../libtool --mode=link gcc -g -O2 -o amandad amandad.o
../common-src/libamanda.la  libamclient.la  ../common-src/libamanda.la
-lm -lreadline -lcurses -lnsl -lintl
gcc -g -O2 -o amandad amandad.o ../common-src/.libs/libamanda.a -lm
-lreadline -lcurses -lnsl -lintl .libs/libamclient.a -lm -lreadline -lcurses
-lnsl -lintl ../common-src/.libs/libamanda.a -lm -lreadline -lcurses -lnsl
-lintl -lm -lreadline -lcurses -lnsl -lintl
ld: 0711-317 ERROR: Undefined symbol: .__main
ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
information.
collect2: ld returned 8 exit status
make[1]: *** [amandad] Error 1
make[1]: Leaving directory `/home/software/amanda-2.4.2p2/client-src'
make: *** [all-recursive] Error 1



FAILURE AND STRANGE DUMP SUMMARY:

2002-01-28 Thread Sascha Wuestemann

Hi Helpers,

as you can see, I am currently backing up just one client using gnutar.
Now, one tapecycle in, 3 of 14 mountpoints on the same machine suddenly failed
for reasons I don't understand.

Today's mail, reporting on the night of the 25th-26th:
---cut-on---
...
machine /some/mountpoint_1 lev 1 FAILED [could not connect to machine]
machine /some/mountpoint_2 lev 1 FAILED [could not connect to machine]
machine /some/mountpoint_3 lev 1 FAILED [could not connect to machine]

STATISTICS:

...
DUMP SUMMARY:
                         DUMPER STATS                TAPER STATS
HOSTNAME DISK         L ORIG-KB OUT-KB COMP% MMM:SS   KB/s MMM:SS   KB/s
-------- ------------ - ------- ------ ----- ------ ------ ------ ------
...
machine -mountpoint_8 0  880720 352416  40.0   5:41 1033.4   7:03  833.0
machine -mountpoint_9 FAILED ---
machine -mountpoint_A FAILED ---
machine -mountpoint_B 1      10     32 320.0   0:00  589.5   0:02   36.8
machine -mountpoint_C FAILED ---
machine -mountpoint_D 1  679990  78272  11.5   1:06 1188.4   1:30  865.6
...
(brought to you by Amanda version 2.4.2p2)
---cut-off---

On the client machine, in /tmp/amanda, I found no information pointing to errors in
sendsize.20020126004755.debug, nor in runtar.20020126???.debug, but I did in
amandad.20020126004937.debug:
---cut-on---
...
got packet: 

Amanda 2.4 REQ HANDLE 000-90B80708 SEQ 1012002305
...
sending ack:

Amanda 2.4 ACK HANDLE 000-90B80708 SEQ 1012002305
...
amandad: got packet:

Amanda 2.4 REQ HANDLE 000-78BA0708 SEQ 1012002304

amandad: received other packet, NAKing it
  addr: peer 192.168.1.15 dup 192.168.1.15, port: peer 854 dup 855
sending nack:

Amanda 2.4 NAK HANDLE 000-78BA0708 SEQ 1012002304
ERROR amandad busy

... ... ...
---cut-off---

Why is amandad still busy? And why is the backup server re-requesting an earlier sequence number?

adTHANXvance
Sascha



Re: Who uses Amanda?

2002-01-28 Thread KEVIN ZEMBOWER

John, thank you so much for processing the archives to extract the
domain portions of folks' email addresses. I was hoping someone had the
ability to do that, and I appreciate the time and energy it took for you
to do it.

Thanks, again.

-Kevin Zembower

>>> [EMAIL PROTECTED] 01/25/02 07:07PM >>>
>(This might seem like a stupid question to this group, but) I'm being
>challenged by the folks who can't get my firewall setup to work with
>Amanda that I should adopt a more "industry-standard" backup product.
>Hogwash.  ...

Good answer :-).

>Anyone have any guesses how many institutions and individuals are
>using amanda?

There are currently 1187 addresses on the amanda-users mailing list,
and 561 on amanda-hackers.  Running that all through uniq (and some
other Perl magic), I came up with 1257 "domains" represented.
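
(For the curious, the shape of that kind of domain extraction can be
sketched in plain shell; the sample addresses and the pipeline itself are
made up here, not John's actual Perl:)

```shell
# Hypothetical subscriber list, one address per line.
printf '%s\n' alice@3com.com bob@ibm.com carol@IBM.com dave@3com.com > /tmp/subs.txt

# Keep only the domain part, normalize case, and list the unique domains.
cut -d@ -f2 /tmp/subs.txt | tr 'A-Z' 'a-z' | sort -u
```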

Some caveats about that number.  Not all sites running Amanda subscribe
to the mailing lists, and not everybody subscribed to the lists runs
Amanda.  Also, some of the addresses are clearly internal mailing lists,
so the number of people who actually get the E-mail is certainly higher.

Looking through the list, paying particular attention to .com's (since
they are so much more important than the rest of us peons :-), I see
several names I recognize right away:

  3com.com              (3com)
  adp.com               (ADP)
  attbi.com             (AT&T)
  bbn.com               (BBN)
  boeing.com            (Boeing)
  corning.com           (Corning)
  cypress.com           (Cypress)
  daimlerchrysler.com   (Chrysler)
  dell.com              (Dell)
  fedex.com             (Federal Express)
  ge.com                (General Electric)
  goodyear.com          (Goodyear)
  harris.com            (Harris)
  honeywell.com         (Honeywell)
  hp.com                (Hewlett-Packard)
  ibm.com               (IBM)
  informix.com          (Informix)
  kodak.com             (Kodak)
  mot.com               (Motorola)
  nokia.com             (Nokia)
  nsc.com               (National Semiconductor)
  oracle.com            (Oracle)
  philips.com           (Philips)
  redhat.com            (Red Hat)
  ricoh.com             (Ricoh)
  siemens.com           (Siemens)
  sun.com               (Sun)
  trw.com               (TRW)
  valinux.com           (VA Linux)
  xerox.com             (Xerox)

I'm sure there's a few billion dollars' worth floating around there,
and I only looked at the U.S. .com entries.  There are almost 500
international entries and another hundred .edu's (and if you don't
think universities are in it for the money ... :-).

>-Kevin Zembower

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Monthly-type backup

2002-01-28 Thread Joshua Baker-LePain

On 27 Jan 2002 at 12:37pm, Dan Smith wrote

> Is there any way to do this elegantly?  I was thinking I could also
> setup 3 jobs and spread the disks out among them, but that would be
> more work, and I'd like for amanda to organize the dumps so they best
> fill out each tape.
> 
dumpcycle=0, runtapes=3, chg-manual

Then you get to play at being a tape robot!  ;)
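
Spelled out as amanda.conf directives, that suggestion would look roughly
like this (a sketch only; check your own device and changer setup):

```
dumpcycle 0             # every run is a full dump
runtapes 3              # let one amdump run span up to 3 tapes
tpchanger "chg-manual"  # "changer" that prompts a human to swap tapes
```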

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: re-doing bad tapes

2002-01-28 Thread Chris Marble

Ben Elliston wrote:
> 
> On Friday, amdump wrote my dump images to a tape successfully, but a 
> subsequent amverify showed that the tape is defective.  Now, my dumps have 
> been removed from the holding disk, but I don't think the images on the 
> tape are usable.
> 
> Is there any way to re-do this backup onto a good tape or am I hosed?

Do an "amadmin <config> amrmtape <label>".
Mark the tape so you don't re-use it.
amlabel a new tape with the same <label>.
Rerun the backup and you should get about the same levels as before.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



file too large, linux 2.4.2

2002-01-28 Thread Matthew Boeckman

Hi there list.
	I'm running amanda 2.4.2 on a RH7.1 box with kernel 2.4.2-2. I recently read 
in the archives about making chunksizes 1Gb instead of 2, due to amanda 
tacking stuff on at the end and making those files too large. Too late, I'm 
afraid, as I'm trying to restore a set of files from a level 0 of a sun 
box. I was able to get the archive off of the tape medium; its size 
is 2468577280 bytes. restore (version 0.4b21) on the linux box in question 
fails with "File too large". I was also able to get the file to the sun box 
it was backed up from, but ufsrestore complains that "Volume is not in 
dump format", which I assume is because it is a file made by dump, not 
ufsdump.

So the question is: WHAT CAN I DO? Is there any way to get this 
directory extricated from this honking big 2 gig file? Second, I 
_thought_ that the 2.4 kernel was supposed to do away with the 2gb file 
size limitations. Am I misinformed?

-- 
Matthew Boeckman(816) 777-2160
Manager - Systems Integration   Saepio Technologies
== 
==
Public Notice as Required by Law: Any Use of This Product, in
Any Manner Whatsoever, Will Increase the Amount of Disorder in the Universe.
Although No Liability Is Implied Herein, the Consumer Is Warned That This
Process Will Ultimately Lead to the Heat Death of the Universe.




Re: amrecover cannot find index for host

2002-01-28 Thread Jeremy Wadsack


Kirk Strauser ([EMAIL PROTECTED]):


> At 2002-01-25T19:16:22Z, "Jeremy Wadsack" <[EMAIL PROTECTED]> writes:

>> We switch from tape- to hdd-based backup because hard drives are MUCH
>> cheaper than tapes these days. I need to figure out the 2GB limit on the
>> drives before I completely restart the system (separate problem.)

> Ehhh?  The good tape drives aren't particularly cheap, but the media is
> practically throw-away money, especially for any shop larger than one
> person.  I mean, DDS-3 tapes are about $15, which translates to $0.625 per
> GB.  Add the fact that tapes are, by their nature, hot-swappable, and I
> think you'd be hard-pressed to find a less-expensive and more-featureful HD
> setup.

Well, $0.625 / GB does not include the cost of the hardware. With a
ten-tape changer you *might* get 180GB of storage on "24GB" DDS-3
tapes. A 160GB hard drive is about $260. That's a much lower cost than
10 DDS-3 tapes plus the changer hardware. And it's significantly
faster. It became impossible for us to back up 120GB of data in the
off-hours of the server, and the gtar process was using far too many
resources on live web servers.


-- 

Jeremy Wadsack
Wadsack-Allen Digital Group




Re: nervous over amrecover

2002-01-28 Thread Chris Marble

Albert Hopkins wrote:
> 
> Testing a new amanda server install (RH Linux 7.2, amanda 2.4.2p2, DLT
> 7000). I first ran a successful backup and then attempted to recover one
> (small) file.  The first time I extract I get
> 
> amrecover: Can't read file header
> extract_list - child returned non-zero status: 1
> 
> But then, without exiting I "add" and "extract" the file again.  This
> time it works.  But this would have been pretty scary if this were a
> real-life situation.  I'm just wondering why this happened and what
> could be done to prevent it.  I had not ejected or rewound the tape
> between backup and recovery.  Could this have been the issue?

You need to rewind the tape before doing the recover.
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: amandad

2002-01-28 Thread Chris Marble

Davidson, Brian wrote:
> 
> I have Amanda 2.4.3b1 installed on an intel computer running BSDI 4.1.  I
> have amanda in the /etc/services file and in the /etc/inetd.conf file.
> amandad does not start when the computer is rebooted, but no error messages
> are reported in /var/log/messages. I can run amandad from the command line
> as user "amanda" and it will create the /tmp/amanda/debug.amanda directory
> and debug file before timing out. 

That sounds like the appropriate behavior.  Amandad doesn't run except
when a connection comes in on port 10080.  Running it by hand should do
what you describe.
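
For reference, the usual services/inetd entries look like the following --
the libexec path and the "amanda" user are assumptions that vary by install:

```
# /etc/services
amanda          10080/udp

# /etc/inetd.conf -- inetd starts amandad only when a UDP packet arrives
amanda dgram udp wait amanda /usr/local/libexec/amandad amandad
```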
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager
  My opinions are my own and probably don't represent anything anyway.



Re: file too large, linux 2.4.2

2002-01-28 Thread Joshua Baker-LePain

On Mon, 28 Jan 2002 at 10:38am, Matthew Boeckman wrote

>   I'm running amanda 2.4.2 on a RH7.1 box with 2.4.2-2. I recently read 
> in the archives about making chunksizes 1Gb instead of 2 due to amanda 
> tacking on stuff at the end, making those file too large. Too late, I'm 
> afraid, as I'm trying to restore a set of files from a level0 of a sun 
> box. I was able to get the archive off of the tape medium, and it's size 
> is:2468577280. restore on the linux box in question fails with "File too 
> large" (version 0.4b21). I was also able to get the file to the sun box 

Newer versions of dump/restore should be compiled with large file support 
-- try upgrading.  You could also compile the newest version yourself, 
adding in the appropriate flags.  The kernel/glibc combo on RH7.1 *can* 
handle large files -- you just have to make sure that the app can.

> it was backed up from, but ufsrestore complains that "Volume is not in 
> dump format", which I assume is because it is a file made by dump, not 
> ufsdump.

No.  For every filesystem, AMANDA runs the appropriate backup utility.  
Also, dumps are FS specific -- ext2 dump won't be able to dump a UFS 
filesystem, and vice versa for ufsdump.  Whether or not a 
particular "restore" can read another dump's format is often hit-or-miss.

So, if that file was indeed from a Sun filesystem using "dump", it should 
indeed be a ufsdump archive.  Perhaps it's getting truncated somewhere?

> So the question is: WHAT CAN I DO? Is there any way to get this 
> directory extricated from this honking big 2 gig file? Second, I 

Your options are:

1) Use amrecover from the Sun box.  This will automatically do everything 
in pipes and avoid the whole large file issue.

2) Upgrade dump/restore to a version compiled with large file support.

3) Pull the file off the tape with amrestore or dd, and pipe the output 
straight to restore, again avoiding a 2GB disk file.
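
For example (the device name, fsf count, and host/disk arguments below are
placeholders, not from the original post):

```shell
# On a real tape (not runnable here):
#   mt -f /dev/nst0 rewind && mt -f /dev/nst0 fsf 1
#   amrestore -p /dev/nst0 sunhost /somewhere | ufsrestore -ivf -
# The image is streamed through the pipe and never lands on disk.
# The same streaming pattern, demonstrated on an ordinary file with dd:
printf 'fake dump image' > /tmp/img
dd if=/tmp/img bs=32k 2>/dev/null | wc -c   # 15 bytes, no intermediate copy
```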

> _thought_ that the 2.4 kernel was supposed to do away with the 2gb file 
> size limitations. Am I misinformed?
> 
As mentioned above, it's a kernel/glibc thing, *and* the app must be 
configured appropriately.  IIRC, for whatever reason, the dump/restore 
shipped with RH7.1 wasn't.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Input/Output error

2002-01-28 Thread John R. Jackson

>Along those lines, here's what I see in dmesg after it finally fails:
>
>ide-tape: ht0: DSC timeout
>ide-tape: ht0: position_tape failed in discard_pipeline()
>ide-tape: ht0: DSC timeout
>hdc: DMA disabled
>hdc: ATAPI reset complete
>ide-tape: ht0: I/O error, pc = 10, key = 2, asc = 4 ascq =1
>ide-tape: Couldn't write a filemark

Well, those timeouts and resets can't be a good thing.

I don't have the manuals for your specific drive (in fact I couldn't
find it on the Seagate web site -- what kind of drive is it and what
kind of tapes are you using?), but assuming it is roughly like the ones
I have, the "key = 2, asc = 4 ascq =1" translates to "drive not ready,
calibration in progress".

Also, this confirms that some type of hardware error is happening.

>> What happens if you try to amlabel one of your already labelled tapes?
>
>I get:
>
>rewinding, reading label DailySet101, tape is active
>rewinding
>tape not labeled
>
>so I guess it refused to do it.  Is this what is supposed to happen?

Sorry.  I should have told you to use "-f" on amlabel to coerce it into
rewriting the label even though the tape looks active.

Now that I think about it some more, though, this can lead you down a
path of other problems and is probably not a good idea to pursue.

>> What happens if you run amcheck with the -w option ...
>After switching to the tape Amanda wants to see next, I get:
>
>Amanda Tape Server Host Check
>-
>Tape DailySet100 is writable
>Tape DailySet100 label ok
>Server check took 4.565 seconds
>
>(brought to you by Amanda 2.4.2p2)
>
>Seems okay, is this what you expect?

Yes, that looks normal.

>I did this, and it seemed to work fine.  No I/O errors, just nice little
>messages from dd confirming reads and writes of so many blocks.  I had 30 or
>so writes in the script, and they all seemed to complete okay.

Sigh.  Oh, well.  It's usually too much to ask that hardware
misbehave during easy tests :-).

>...  Switched to the other IDE controller, still get the same behavior.

That leaves the cable, the drive itself, or tremendously bad luck
picking several tapes that all fail (my money is on the drive, or maybe
the cable).

>Steve Stanners

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: nervous over amrecover

2002-01-28 Thread John R. Jackson

>> ...  The first time I extract I get
>> 
>> amrecover: Can't read file header
>> extract_list - child returned non-zero status: 1
>> 
>> But then, without exiting I "add" and "extract" the file again.  This
>> time it works.  ...
>
>You need to rewind the tape before doing the recover.

Chris is right.  Here's a little more of what's going on behind the
scenes:

  * When amdump (taper) completes, it leaves the tape where it left off
(at the end).  In theory you could use this to tack on some non-Amanda
dump information to the tape, although that can be tricky.

  * Amidxtaped (called by amrecover) does **not** do any tape positioning
before trying to read the image.  This is so you can do the motion
yourself on devices that fsf much faster than read/scan.

The combination of these first two items probably gave you your
initial error.

  * Amidxtaped *does* rewind the tape when it completes (even if there
was an error), the theory being that if you need to do another restore
from the same tape it will be in a known position to start from.

This is probably why it worked the second time for you.

Personally, I run a wrapper shell script around amdump, and one of the
things it does is unload any tape amdump writes to.  So whenever I'm
doing a restore, I have to mount the requested tape, and therefore it's
at a known position.

I also always position the tape myself to do a restore (rewind followed
by an appropriate fsf) because it's much faster on the types of drives
I have.

>>  --a
>  [EMAIL PROTECTED]

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: amrecover cannot find index for host

2002-01-28 Thread John R. Jackson

>... A 160GB hard drive is about $260. That's much lower cost than
>10 DDS-2 tapes and the changer hardware.  ...

Agreed, but there is a fundamental difference here and that's the
critical failure path.  If one of those 10 tapes fails, you should still
be able to recover the majority of your data.  If that single disk fails,
you're screwed.

I'm not disagreeing with the idea of using disk for backup.  Just saying
it takes some thought to make it a safe setup (tape does too, of course).

>Jeremy Wadsack

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Monthly-type backup

2002-01-28 Thread Stephen Carville

On 27 Jan 2002, Dan Smith wrote:

- I want to (once a month) do a backup of all disks.  My tapes are not
- large enough to fit the whole thing, so I need to spread it across 3
- tapes or so.  I was thinking of setting up a 1-day dumpcycle with 3
- runspercycle and just do all three dumps on a saturday.  I don't think
- this is the best way to do it, since anything that has changed between
- dumps will be backed up incremental to the 2nd or third run.
-
- Is there any way to do this elegantly?  I was thinking I could also
- setup 3 jobs and spread the disks out among them, but that would be
- more work, and I'd like for amanda to organize the dumps so they best
- fill out each tape.
-
- Does anyone have any advice?

In amanda.conf define:

define dumptype always-full {
global
compress none
priority high
dumpcycle 0
}

Then in disklist put something like:

server /volume {
always-full
}

-- 
-- Stephen Carville
UNIX and Network Administrator
Ace Flood USA
310-342-3602
[EMAIL PROTECTED]




Re: FAILURE AND STRANGE DUMP SUMMARY:

2002-01-28 Thread John R. Jackson

>... suddenly 3 of 14 mountpoints on the same machine failed
>for reasons I don't get.
>...
>machine /some/mountpoint_1 lev 1 FAILED [could not connect to machine]

Some things to check:

  * See if there are left-over amandad (or dumper) processes lying
around, still running from a previous failure.

  * Verify that the amandad service in inetd.conf/xinetd is set to
"wait" (the default is "nowait").

  * Make sure you don't have multiple cron jobs running at the same time.
 
>Sascha

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



invalid password on windows host

2002-01-28 Thread Jonathan F. Swaby

When I run amcheck on the config for one of my windows machines I get the following:
ERROR: lotus: [PC SHARE //vegas/avr access error: host down or invalid password?]

I checked amandad.debug and found:
running /usr/bin/smbclient vegas\\avr  -E -U backup -W VPSA -c quit
ERROR [PC SHARE //vegas/avr access error: host down or invalid password?]

This would indicate to me that it is not picking up the userid and password information 
stored in /etc/amandapass. The amandapass file has the correct permissions, and I think I 
have the format correct. It looks like this:

//vegas/avr userid%password NT_Domain

Does anyone have any ideas?

Thanks



Re: Who uses Amanda?

2002-01-28 Thread Stephen Carville

On Fri, 25 Jan 2002, KEVIN ZEMBOWER wrote:

- (This might seem like a stupid question to this group, but) I'm being
- challenged by the folks who can't get my firewall setup to work with
- Amanda that I should adopt a more "industry-standard" backup product.
- Hogwash. But, I would like to at least offer an answer.
-
- Anyone have any guesses how many institutions and individuals are using
- amanda?
-
- Anyone know, or want to self-disclose, some "noteworthy" institutions
- using amanda? If you think this would clog up the list too badly, email
- me privately at [EMAIL PROTECTED], and after a week or so, I'll
- compile a list and post it to the email list.

I have no idea if Ace USA is "noteworthy", but we use Amanda to back up
Oracle servers, Web servers, Samba servers and several development
machines.  Total data written to tape in a cycle is about 200 Gig.  I
originally picked it because the difference between the price of
Amanda (free) and BackupExec licenses ($$$) paid for the 15-tape
changer with a barcode reader.  Now that I've been using it for a
few months, I've discovered it is faster and more efficient in tape
usage than BackupExec.  The windows admin is actually jealous :-)

What firewall problems are you having?

-- 
-- Stephen Carville
UNIX and Network Administrator
Ace Flood USA
310-342-3602
[EMAIL PROTECTED]




bootstrapping

2002-01-28 Thread Scaglione Ermanno

I did a few tests with amanda, trying both gnu tar and dump. I have 6
servers to dump, 3 Solaris and 3 Linux RedHat, some 50Gb in all. I have now
found what I think is the right configuration, bypassed the firewall, and
started dumping the servers. And now I have hit the first problem I cannot
easily solve. I am using a DDS4 tape; here is the def produced by tapetype:

define tapetype DDS4-DAT {
comment "just produced by tapetype program"
length 16382 mbytes
filemark 10 kbytes
speed 1884 kps
}

and I'd like to use just one tape for each run. I've set up an 11Gb partition
as holding disk and am making dumps with dump:

define dumptype sno-dump {
global
program "DUMP"
comment "standard webserver dumped with dump"
compress client best
comprate 40,0
index yes
priority medium
holdingdisk yes
record yes
}

The problem is that amanda (amanda-2.4.3b2-20020126) simply ignores the
comprate parameter. I did a test run, and I know that gzip really does a good
job on those partitions containing lots of html. I know that after the
first run amanda will use the last run to obtain the estimates; what I need
is to do the initial full backups, but it insists on keeping the full size of
the partition as the estimate. In the first run I could dump just 3 partitions
out of 22:

DUMP SUMMARY:
                                       DUMPER STATS               TAPER STATS
HOSTNAME DISK         L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS   KB/s
-------- ------------ - -------- -------- ----- ------- ------ ------- ------
somehost /somewhere/  0  1507440   618080  41.0  194:16   53.0    5:17 1948.1
somehost /somewhere/  0  2372200   940672  39.7  270:57   57.9    8:02 1950.7
somehost /somewhere/  0 10377660  7475360  72.0  111:26 1118.0   63:48 1952.8

So it looks like I used just 9Gb out of 16, and all the other partitions got
better compression rates in the test phase than the first 2 that succeeded.
I have 2 questions. First, how can I convince amanda that a certain partition
will be well compressed by gzip?
Second, I need to do the initial full backups. I am using:
dumpcycle 9 # the number of days in the normal dump cycle
runspercycle -1 # the number of amdump runs in dumpcycle days
# (4 weeks * 5 amdump runs per week -- just weekdays)
tapecycle 10 tapes  # the number of tapes in rotation

strategy standard

May I run amdump 2 or 3 times the same day, so that all partitions get a
level 0 dump, or can this cause trouble with the dump cycle?

Thanks in advance for the answer.




Re: bootstrapping

2002-01-28 Thread John R. Jackson

>define dumptype sno-dump {
>...
>comprate 40,0
>...
>}
>
>The problem is that amanda (amanda-2.4.3b2-20020126) simply ignore the
>comprate parameter ...

I'll have to look into the code to see if it's really ignoring your
values, but what you entered is not doing what you think.

I assume what you're trying to say is that a compressed full dump will be
40% of the size of the original.  If you look at the amanda(8) man page,
you'll see the values for comprate are floating point and the defaults
are 0.50 and 0.50, which means 50%.  So you probably want:

  comprate 0.40,0.40

Your value of 40 is telling Amanda the resulting compressed image will
be 40 times larger than the original.  No wonder it's ignoring you :-).

Since this is only for getting started (as you noted, Amanda will use
its own history after the initial run), you might just skip setting
comprate, since 50% is not all that different from 40%.

>may I run amdump 2 or 3 times the same day, so that all partitions get a
>level 0 dump ...

Yes.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]