Re: bare metal restore, basically

2007-02-28 Thread Ross Vandegrift
On Wed, Feb 28, 2007 at 11:07:04AM +0100, Paul Bijnens wrote:
 On 2007-02-27 22:34, Byarlay, Wayne A. wrote:
  Is there a method I can use to boot from, say, Ubuntu, or knoppix, or
  some other CD-based OS, run the AMANDA client, connect to the server
  where the latest backups reside, and basically rebuild this machine with
  only one /dev/sda?
 
 Install a new hard disk.
 Boot with your bootable CD, partition it, and make the
 filesystems (mke2fs etc.)

Having done these restores numerous times, I can tell you that you will
save yourself a lot of consternation if you download the latest Knoppix
and use it to do the restore.  Before you start, run aptitude update;
aptitude install dump amanda-client.

The reason you want to do this: a recent change in dump's file format
breaks restores done with older copies of restore.  If you use the latest
Knoppix, you'll avoid waiting through a long round of package upgrades,
and you'll still be able to pull down the latest amanda and dump.
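
For reference, the whole sequence from a Knoppix shell ends up looking
roughly like this (the config name, server name, and device names below
are only placeholders, adjust for your setup):

# aptitude update && aptitude install dump amanda-client
# fdisk /dev/sda
... (recreate the partition table)
# mke2fs -j /dev/sda1
# mount /dev/sda1 /mnt && cd /mnt
# amrecover -C DailySet1 -s backup.example.com -t backup.example.com
... (add the files/directories you need and extract)

Note that amrecover has to run as root and the Knoppix box has to be
allowed in the server's .amandahosts, of course.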

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Feature suggestion : test restore

2007-01-31 Thread Ross Vandegrift
On Wed, Jan 31, 2007 at 06:02:58PM -0700, Glenn English wrote:
 Stefan G. Weichinger wrote:
  Jon LaBadie schrieb:
   - assuming the restore is done to the server;
 the tools used to create the dump may not exist on
 the server or be in different locations
 
  An amanda-admin has to know if he wants to do these tests, and after
  that he has to provide resources (disk space ...) and software (binaries
  like tar, dump, gzip ...) on the machine he wants to perform these tests.
 
 It'd never occurred to me there might be such a thing as a write-only
 backup server before. And I always restore to the server anyway -- it's
 in the same room as the tape drive...

All of the backup servers that I used to run were write-only.

It becomes a major issue with recent changes in RHEL4.  Some of our
Amanda servers were RHEL3.  A new version of dump was integrated into
RHEL4 which broke binary compatibility with the old version.  This
split the versions, and restore would give really scary errors about
being unable to find the start of the tape if you attempted a bad
combination.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Slow performance with dump + LVM

2007-01-29 Thread Ross Vandegrift
Hi everyone,

I'm working on setting up some Amanda backups at my house and have run
into a serious performance problem.  Both the client and the server
machine are using LVM2 for their disks.  Though there's 320GiB of data
to backup, this is a bit ridiculous:

sendbackup: time 8008.486:  87:  normal(|):   DUMP: 8.07% done at 3472 kB/s, 
finished in 24:40

I've done tons of dumps to LVM volumes on a server with no speed
problems, so I'm investigating the client.  dumping on the client to
/dev/null gives nearly the same performance, so I'm confident that
it's an issue with dump + LVM on the client. [1]

The client is all but idle, not doing compression/encryption, and only
has two PVs in the VG, both of which are fast SATA disks.  If I
increase the blocksize of dump to 64kiB (the largest the manpage says
is smart), performance gets better, in the range of 15-20M/s.
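
For example, the comparison I'm describing is just dumping to /dev/null
on the client with the default record size versus -b 64 (the LV path
here is only a placeholder):

# dump -0 -f /dev/null /dev/vg0/home
# dump -0 -b 64 -f /dev/null /dev/vg0/home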

Has anyone seen performance this bad from dump before?  Is there
some tunable to get things running a little faster?  24 hours for a
full backup seems just terrible.


[1] - So yes, technically this is the wrong place to ask, but the LVM
mailing lists seem to be all but completely dead.  On the other hand,
lots of people know stuff about dump here!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Slow performance with dump + LVM

2007-01-23 Thread Ross Vandegrift
On Tue, Jan 23, 2007 at 01:01:38AM -0500, Gene Heskett wrote:
 Well, since dump works at the partition level, it may be that dump and LVM 
 aren't compatible.  Switch to tar, which is file oriented, and see what
 happens.

Looks like I'm hitting nearly the same speed situation.  I backed up
7932MiB in 2460 seconds (I got impatient...) which comes out to about
3-4MiB/s.

Since tar behaves the same way, I think I'm going to play around with
various record sizes when reading from LVM.  Something in the block
layer is shooting down my transfer rates.
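
For example, raw sequential reads from the LV at a couple of block
sizes should show whether the slowdown is below dump/tar entirely (the
LV path is a placeholder):

# dd if=/dev/vg0/home of=/dev/null bs=4k count=262144
# dd if=/dev/vg0/home of=/dev/null bs=64k count=16384
... (both read 1GiB; compare the reported transfer rates)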

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: amandas group membership in FC6?

2006-11-26 Thread Ross Vandegrift
On Sat, Nov 25, 2006 at 11:22:38PM -0500, Gene Heskett wrote:
 See what that number maps to in /etc/group.  I'm betting it 
 goes to an 'amanda' group and not the 'disk' group.
 
 There was not, and still is not, a group named amanda, just the amanda 
 entry in the disk line.

Is there any chance you're using ldap/nis/winbind/etc for groups in
nsswitch.conf?  The amanda group must be coming from somewhere.  If
it's not listed in /etc/group, I'm wondering if there's another source
of groups on your system.
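
An easy way to check is to ask NSS directly, e.g.:

# getent group amanda
# grep '^group:' /etc/nsswitch.conf

If getent prints a line but /etc/group has no amanda entry, the group
is coming from one of the other sources listed in nsswitch.conf.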

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: confused and concerned about virtual tape sizes

2006-11-15 Thread Ross Vandegrift
On Fri, Nov 10, 2006 at 02:43:13PM -0500, René Kanters wrote:
 I backup a small directory (/home/rene at about 2.1MB) and when I run  
 amdump I get messages with (excerpts):
[snip]
 i.e., the full size of my (compressed) /home/rene

You're running up against Amanda being too smart!

Since it seems that your data set is very small, it is doing a full
backup every time.  Why bother to make an incremental backup of
2.1MiB?

By default, Amanda must get a 10MiB savings before it will bump to the
next incremental level.  You can control the bumping with bumpsize,
bumppercent, bumpmult, and bumpdays.
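
In amanda.conf those knobs look something like this (the values here
are only illustrative, check your own config and the man page for the
real defaults):

bumpsize 10 Mb     # savings required before bumping from level 1 to 2
bumppercent 0      # or express the threshold as a percentage of the dump
bumpdays 2         # days at each level before a bump is considered
bumpmult 1.5       # multiplier applied to bumpsize for each further level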

The short of it, though: throw some real data at it and you'll see some
real performance!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Looooooooong backup

2006-10-18 Thread Ross Vandegrift
On Wed, Oct 18, 2006 at 01:14:57PM -0400, Steven Settlemyre wrote:
 I have a monthly (full) backup running for about 22 hrs now. Do you 
 think there is a problem, or is it possible it's just taking a long 
 time? about 150G of data.

Is it closer to 150G in one file or 150G in one million files?

As the number of files increases, the time it takes to do
dumps/restores becomes dominated by the amount of time it takes to
deal with individual files.

Millions of files add up to extremely long dumps...

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: bizarre bug fixed but not explained

2006-09-30 Thread Ross Vandegrift
On Fri, Sep 29, 2006 at 06:47:42PM -0400, Jon LaBadie wrote:
 On Fri, Sep 29, 2006 at 06:01:46PM -0400, Steve Newcomb wrote:
  chown root.disk /home/amanda/libexec/runtar
 
 All the chown's I've used require a colon (':'),
 not a period ('.') between the user and group. 

It's a BSD-ism that's been inherited into the GNU tools.  POSIX
technically disallows it because a username is allowed to contain
dots. 

Check the documentation for fileutils for more references.  It might not
be a bad idea to convert that to a colon, since Amanda can run on many
platforms that are both non-BSD and non-GNU.
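
In other words, the portable spelling of that command would be:

# chown root:disk /home/amanda/libexec/runtar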

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: restoring from DVDs

2006-08-06 Thread Ross Vandegrift
On Sun, Aug 06, 2006 at 09:41:39AM +0200, Geert Uytterhoeven wrote:
  I've had quite good luck doing poor man's data recovery.  Boot the
  machine into Knoppix or like ilk and use dd_rescue to copy the disk to
  an image file or another disk.  dd_rescue is smart about skipping
  areas of the disk it cannot read instead of giving up.  It can take a
  long time, but I've recovered quite a bit of data with that sucker.
 
 But this simple method won't work, as the disk used to be part of a RAID0
 setup, and thus contains only half of the data. Then it depends on the stripe
 size: the larger it is, the more likely you can find useful pieces of data
 (e.g. a complete password or credit card number).

Well, no reason you couldn't apply this principle to the RAID0 data.
Suppose your disks are /dev/sda1 and /dev/sdb1, sdb has failed:

# dd_rescue /dev/sdb1 some_file_or_device
... (wait a long time)
... (if you copied to a file: # losetup /dev/loop0 some_file_or_device)
# mdadm --assemble /dev/md0 /dev/sda1 /dev/loop0   (or whatever device you copied to)

If the superblock of the second disk is intact, mdadm should be able
to assemble the array with no problems.  Of course you'll run into
files with corrupt data, since half of the stripes are damaged.
If you're unlucky, the filesystem metadata was damaged and you'll have
either lost everything or will need to run fsck...


-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: restoring from DVDs

2006-08-06 Thread Ross Vandegrift
On Sun, Aug 06, 2006 at 03:13:39PM +0100, Laurence Darby wrote:
 This is beside the point, but that won't work, and I'm not sure if you
 completely understand the situation. I *DON'T* want data to be
 recovered from the drive, and because I'm returning it under warranty, I
 want the data on it to be destroyed.

Aha!  I did misunderstand.

You shouldn't worry about this - the manufacturers are bound by
process requirements that prevent them from recovering data on RMAed
drives.  My employer is going through the process of becoming
certified for various government/banking/etc security processes.  Part
of it is verifying that sensitive data is not recoverable.

According to our compliance manager, sending a drive to RMA places
data security liability on the RMA facility as soon as they accept
the package.  You'd be able to sue for damages if you did have
plaintext passwords that they leaked.

This still leaves a hole while the disk is in transit, but you can
only do so much...

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: restoring from DVDs

2006-08-05 Thread Ross Vandegrift
On Sat, Aug 05, 2006 at 12:29:43AM +0100, Laurence Darby wrote:
 BTW, it was actually one disk of nice and fast RAID 0 (so I'm
 restoring to the one good disk). Does anybody know if data recovery from
 it would be possible?  I hope *not*, since I'm sending it back under
 warranty, and I can't erase it 'cos it's dead, although it sounded like
 the platter might be all scratched up...

As always, it depends on what you want to pay.  If you have the money
to burn, just about anything short of physical platter
destruction/degaussing can be recovered.  I read an article not that
long ago about recovering a hard disk that had been burned in a fire.

I've had quite good luck doing poor man's data recovery.  Boot the
machine into Knoppix or like ilk and use dd_rescue to copy the disk to
an image file or another disk.  dd_rescue is smart about skipping
areas of the disk it cannot read instead of giving up.  It can take a
long time, but I've recovered quite a bit of data with that sucker.

Good luck!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Recovering files from DVD-recorded backup

2006-07-31 Thread Ross Vandegrift
On Mon, Jul 31, 2006 at 06:56:22PM +0100, Anne Wilson wrote:
 It seems to me that once vtapes are free for overwriting the corresponding 
 index will be lost too.  Can either amrestore or amrecover actually access 
 these files?

Barring defects in the file/media, amrestore can *always* be used to
recover an amanda backup.  The procedure isn't quite as automated as
amrecover.  Here's an outline of what you might need to do:

1) Determine what tape you actually need.  You might not know this for
real the first time!  Trial and error may be required.

2) Mount the DVD, say at /mnt.

3) To figure out what backups are on the tape, run something like
this:

# amrestore -f 0 -p file:/mnt/tapeX invalid-host > /dev/null

This says to start at file 0 of the tape file:/mnt/tapeX (the path
depends on how you named/burned the tapes), search for the host
invalid-host, and throw the data to /dev/null.

This command will give you a printout of the contents of the tape like
this:

amrestore:   0: skipping start of tape: date 20050128 label DailySet117
amrestore:   1: skipping blah.sda1.20050128.2
...

This output is formatted like hostname.device.date.dumplevel.  Find
the dump that you are looking for.  Let's assume we want to restore
from blah.sda1.20050128.2.

4) Now we run a valid amrestore command to get data:

# amrestore -f 1 -p file:/mnt/tapeX blah sda1 | restore -if -

This pulls the data for blah:sda1 from file:/mnt/tapeX starting at file 1.
You'll get a shell very much like amrecover's, in which you can navigate the
filesystem dump, add files, and extract them.


If you use tar, you'll want to do something like:
# amrestore -f 1 -p file:/mnt/tapeX blah sda1 | tar xpvf -

to untar the tarball into your current directory.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: What is the highest incremental/differential level performed by amanda?

2006-07-27 Thread Ross Vandegrift
On Tue, Jul 25, 2006 at 03:19:31PM -0400, Jon LaBadie wrote:
  Is there such a limit in Amanda anywhere?
 
 In planner.c there is code deciding upon level that basically
 says if I'm thinking of doing level 9, don't even consider anything
 higher, return level 9 as the decision.
 
 And in lots of other code, like reports, cleanup, amadmin, ...
 there are things like for (level = 0; level <= 9; level++).

Makes sense - Amanda runs on plenty of platforms where dump levels
higher than 9 probably freak out the software.

 My take from your questions is that you want or expect to get high
 incremental levels.  Your needs may differ of course, but I'm happy
 that my incrementals seldom even get to level 3.  It reduces the
 number of tapes needed for a recovery and changed data often appears
 in multiple incrementals at the same level for redundancy.

With vtapes, there's not much of a downside to having higher levels.
No physical tape switching means that I just have to repoint to a
different directory for the next tape.

In that sense, having dumps hit levels 5 and 6 is quite a good thing.
The space savings from going higher than that are diminishing returns
for the servers I run, but I can imagine a highly volatile load where
it would have real benefits.

 Perhaps very important for
 your need might be bumpdays which defaults to 2 days.  This is the
 minimum number of dumps at each incremental level before amanda will
 bump to the next level.  Dropping it to 1 is a good first step.

Dropping bumpdays to 1 made a major difference in terms of space usage
for me.  The other parameter that helped a lot was bumpmult.  It had
been originally configured as 4 (not sure if that's a default or not)
and I dropped it to 2.  This helps Amanda spend less time on the larger
level 0 and 1 dumps and gets us more quickly to the more space-efficient
levels.
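
In amanda.conf terms, that change was simply:

bumpdays 1
bumpmult 2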



-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: What is the highest incremental/differential level performed by amanda?

2006-07-25 Thread Ross Vandegrift
On Tue, Jul 25, 2006 at 11:42:12AM -0400, Joshua Baker-LePain wrote:
 Look at the various bump* options in amanda.conf -- they control when 
 amanda bumps an incremental up a level.  Theoretically it can go to 9 
 (just like the backup tools), but there's an increasingly high requirement 
 for backup image size savings to do the bumps.

Is there such a limit in Amanda anywhere?

For example, dump from Linux's e2fstools supports arbitrary integers as
dump levels.  From the manpage:

A level number above 0, incremental backup, tells dump to copy all files
new or modified since the last dump of a lower level. The default level
is 9. Historically only levels 0 to 9 were usable in dump, this
version is able to understand any integer as a dump level.

My Amandas don't ever get above a 6, and my predecessor's bump
settings limited runs to level 3, so I'm not saying level 897125
backups are a good idea :-)

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: backing up to DVD-Rs

2006-07-13 Thread Ross Vandegrift
On Wed, Jul 12, 2006 at 06:31:52PM +0100, Anne Wilson wrote:
 On Wednesday 12 July 2006 18:17, Ross Vandegrift wrote:
  Make a holding area, and use dump with a tape size of just slightly
  less than the DVD-R size.  Tell dump to generate manifests and quick
  file access data.
 
  Then, all you have to do is burn each dump file to a DVD-R in
  pre-prepared mode (check man growisofs for details on this) and
  burn/store the index/QFA data somewhere.
 
 Would you like to amplify this, please?

Sure.  Assume that you've got /dev/sda1 that you'd like to backup and
you'll store the images in /holding.  Then it goes a little something
like this:

# cd /holding
# dump -0uy -A MANIFEST -B 4403200 -Q QFA -f disc1,disc2,disc3,... /dev/sda1
... (wait for dump to dump the filesystem)
# growisofs -dvd-compat -Z /dev/dvd=disc1
... (burn all the images)

The dump command generates files named disc1, disc2, etc (you may
need more than three depending on your filesystem), as well as a
restore index called MANIFEST and quick file access info named QFA.
You can do an incremental by changing the -0 to some higher integer.
You may need to twiddle with -B, the size param.  I don't remember
exactly how large it can be.
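
If you have a whole stack of images to burn, a small shell loop saves
some typing (this assumes the burner is /dev/dvd and that you swap in
a blank disc when prompted):

# for img in disc*; do growisofs -dvd-compat -Z /dev/dvd=$img; echo "next blank disc, then Enter"; read; done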


Then, restoring from DVD looks like this:

# cd /destination
# restore -A MANIFEST -Q QFA -i -f /dev/dvd
... (restore as usual, inserting discX when asked for volume X)


That's all there is to my personal backup system.  It has one *major*
flaw: backups are not automated the way they are with Amanda.  I guess
if you're burning to DVD, you have to manually interact anyhow.  But
as a result, I have only ever done two complete backups of my
system...

On the other hand, I did use such a backup to recover from a
catastrophic disk crash and it worked very well.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: backing up to DVD-Rs

2006-07-12 Thread Ross Vandegrift
On Wed, Jul 12, 2006 at 10:29:14AM +0100, Laurence Darby wrote:
 Basically I have a 30GB directory (/home) and 50 4.7GB DVD-Rs.  Is there
 any support in Amanda for backing up to DVD-Rs?   Actually, the only
 thing I'm looking for is the capability to divide the 30GB into 4.7GB
 chunks, or slightly smaller to fit on a 4.7GB ext2 loopback file which
 can be burned to DVD.

Is this the only disklist item you'd have?  Also, how frequently will
you be running backups?

Basically, I found that Amanda with DVD-Rs was too much trouble for
backing up my home system.  Rather, I run dump manually.

Make a holding area, and use dump with a tape size of just slightly
less than the DVD-R size.  Tell dump to generate manifests and quick
file access data.

Then, all you have to do is burn each dump file to a DVD-R in
pre-prepared mode (check man growisofs for details on this) and
burn/store the index/QFA data somewhere.


Once you have that burnt, restores are easy.  You just tell restore to
restore from your DVD device.  When it asks for tape number X, you
insert DVD X and it does its thing.

It works quite well and prevents having to juggle all the information
that Amanda is going to keep about your filesystems.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: LVM snapshots

2006-07-07 Thread Ross Vandegrift
On Fri, Jul 07, 2006 at 02:35:23AM -0700, Joe Donner (sent by Nabble.com) wrote:
 I'm just wondering what happens during the freeze - how freezing all
 activity to and from the filesystem to reduce the risk of problems affects
 the system?  One would imagine that disk writes are somehow queued up and
 complete when the file system is unfrozen again?

Yes - when you configure an LVM snapshot, you provide another device
to which changes are written.  LVM creates a new device node that
points to the snapshot of the original device.

The rest of the system basically sees everything as normal.  LVM
handles all the changes between the disks until the snapshot is
removed.

I don't run LVM snapshots with Amanda, but be aware that if you wanted
to do this, there'd be a few hoops to jump through.  You'd need to
specify the snapshot device name in the DLE, not the normal one.  As a
result, you'll need to somehow ensure that the snapshot was
successfully created before running amdump.
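
A minimal sketch of what that might look like, assuming a volume group
vg0 with a logical volume named home and enough free extents for the
copy-on-write area:

# lvcreate -s -L 2G -n home-snap /dev/vg0/home
... (point the DLE at /dev/vg0/home-snap and run amdump)
# lvremove -f /dev/vg0/home-snap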

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Amanda on any of the Linux live CDs?

2006-06-24 Thread Ross Vandegrift
On Fri, Jun 23, 2006 at 01:45:10PM -0500, Frank Smith wrote:
  I will start my own research this weekend but just in case anyone has done
  it, I will ask.  Do any of the Linux live CDs have amanda installed?
 
 It's on the Knoppix live DVD, but not the CD version.

This is correct, but remember that Knoppix has a working apt-get
configuration.  All you need to do is run apt-get update && apt-get
install amanda-client amanda-server and you've got it.  Makes for a
great recovery environment.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Disagreements between amplot, amstatus

2006-05-31 Thread Ross Vandegrift
On Tue, May 30, 2006 at 01:50:21PM -0700, Pavel Pragin wrote:
 But amplot's Bandwidth Allocated graph shows the line mostly pegged at 0%
 the entire time.  From glancing through the amplot scripts, I'm guessing
 this should be labeled Bandwidth Free?
 
 If you run amstatus again what is the value you have in the field below:
 network free kps:  ?
 also what is your inparallel and maxdumps set to in amanda.conf?

network free kps:189380
inparallel 15
maxdumps 2

I don't think maxdumps will affect much - my DLEs happen to almost
always be single-spindle boxes.  As a result, disks on the same host
don't get dumped simultaneously.  That's okay with me though.

How is network free kps calculated?  I played a little bit yesterday
with figuring out how 80 Mbps is related to 189380 but couldn't really
come up with anything...

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Disagreements between amplot, amstatus

2006-05-30 Thread Ross Vandegrift
Hello everyone,

I'm working on getting one of my Amanda servers to finish dumping a
little bit faster.  Right now, all but one of my hosts are taking
around four hours to finish.

Anyhow, I noticed that amstatus says this after the run:

 0 dumpers busy :  0:33:39  (  3.64%)not-idle:  0:33:39 (100.00%)
 1 dumper busy  :  9:38:18  ( 62.47%)not-idle:  9:37:29  ( 99.86%)
 no-bandwidth:  0:00:49  ( 0.14%)
 2 dumpers busy :  0:28:11  (  3.05%)not-idle:  0:27:57  ( 99.18%)
 no-bandwidth:  0:00:13  ( 0.82%)
 3 dumpers busy :  0:26:59  (  2.92%)no-bandwidth:  0:22:59  ( 85.20%)
 not-idle:  0:03:59  ( 14.80%)
 4 dumpers busy :  2:13:20  ( 14.40%)no-bandwidth:  1:56:04  ( 87.06%)
 not-idle:  0:17:15  ( 12.94%)
 5 dumpers busy :  1:12:40  (  7.85%)no-bandwidth:  1:12:40 (100.00%)
 6 dumpers busy :  0:32:23  (  3.50%)no-bandwidth:  0:32:23 (100.00%)
 7 dumpers busy :  0:03:03  (  0.33%)no-bandwidth:  0:03:03 (100.00%)
 8 dumpers busy :  0:00:27  (  0.05%)no-bandwidth:  0:00:27 (100.00%)
 9 dumpers busy :  0:00:48  (  0.09%)no-bandwidth:  0:00:48 (100.00%)
10 dumpers busy :  0:00:09  (  0.02%)no-bandwidth:  0:00:09 (100.00%)

Looks like I have too many dumpers for my bandwidth, no?  FWIW,
netusage is set to 80 Mbps.


But amplot's Bandwidth Allocated graph shows the line mostly pegged at 0% the
entire time.  From glancing through the amplot scripts, I'm guessing
this should be labeled Bandwidth Free?

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Intermittent client failures reading /var/backups/.amandahosts

2006-05-29 Thread Ross Vandegrift
On Mon, May 29, 2006 at 11:07:29AM +0200, Paul Bijnens wrote:
 Strange.
 Is /var/backup a local filesystem?  Or mounted via NFS?
 
 Is the user backup defined in the local password file or via yp/ldap?

Both are local, filesystem and user.

 Is the DNS consistently giving back the same hostname? 
 (linuxbackup2.smarterlinux.com, always FQDN too?)
 
 Has one of the hosts two interface cards, and are they reverse resolved 
 identically?  (if not, add lines to .amandahosts)

Yep - we have /etc/hosts entries that ensure they are resolved
properly, to the correct interface, and have appropriate reverse
mappings.  That's what's weird about it - it's a pretty simple
setup...
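
For what it's worth, the shape of the entries I mean (the address here
is made up):

/etc/hosts on the client:
10.1.2.21   linuxbackup2.smarterlinux.com   linuxbackup2

~backup/.amandahosts on the client:
linuxbackup2.smarterlinux.com   backup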

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Intermittent client failures reading /var/backups/.amandahosts

2006-05-29 Thread Ross Vandegrift
On Mon, May 29, 2006 at 01:26:08PM +0100, Paul Haldane wrote:
 As the error reported is a failure to open .amandahosts rather than a problem 
 with the contents of the file (which would result in an amandahostsauth
 failed error), a likely cause (given that the problem is intermittent) is
 running out of system resources (file descriptors/memory/?).

Ahhh, this is much more likely the cause.  These servers are very busy
and generally do not get much of a break.

 You should find more information about the exact cause of the failure in the 
 amanda debug file on the client - that should show the same error along with 
 the system error message.

Dang - Debian woody puts those logs in /tmp/amanda, which gets cleaned
automatically.  Is there any way I can change the log directory on the
client programs at runtime?

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Intermittent client failures reading /var/backups/.amandahosts

2006-05-28 Thread Ross Vandegrift
Hello everyone,

Approximately once a month, some of my servers fail to make backups
with this error message:

FAILURE AND STRANGE DUMP SUMMARY:
  capote sda8 lev 1 FAILED [ [access as backup not allowed from [EMAIL 
PROTECTED] open of /var/backups/.amandahosts failed]

But, there's simply no way this error message is correct.  I have two
reasons:

1) This host has three other DLEs that worked fine:
capote   sda1 1   4640547  11.8   0:07 81.6   0:00  
  93138.1
capote   sda7 1 117750   8472   7.2   0:27 313.9   0:00 
  190309.3
capote   sda8 1 FAILED 
--
capote   sda9 31633400 340849  20.9  13:11
431.2   0:1131918.6

2) The permissions on /, /var, /var/backups/.amandahosts, /etc,
/etc/amandahosts are all correct and none have been updated in at
least a week.

I've seen this before on other servers, and the major common fact
between them: they are all Debian woody servers running amanda-client
2.4.2p2-4.  I'd love to upgrade this, but upgrading woody's
amanda-client without upgrading glibc has proven very, very
challenging.

Is there any chance this is a known issue with the old client
software?  Maybe there's something else that could be failing?


-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Parallel dumps of a single file system?

2006-05-23 Thread Ross Vandegrift
On Tue, May 23, 2006 at 11:04:44AM -0400, Jon LaBadie wrote:
 Recently I had a look at amplot results for my new vtape setup.
 One thing it showed was that for 2/3 of the time, only one of the
 default four dumpers was active.

This is a good point.  amplot is awesome for checking out what kind of
stuff is slowing down your backups!  Also check the output at the end
of amstatus when a run is finished.  It'll give you a summary of the
same information.  But there's nothing like a cool graph!

As far as the original poster's question: I think you should try it
out.  Whether it's a performance win or loss is going to depend
heavily on how the data has ended up across those disks.

Your RAID5 performance is always dominated by the time it takes to
seek for data.  If all n disks can just stream for a while, you
get full streaming performance from the disks.  But if even one of
them needs to seek to find its blocks, you're going to have to wait
until that disk finishes.

This makes me think that in most cases, dumping a big RAID5 in
parallel would hurt performance.  However, if your array is old, it
may be highly fragmented.  The extra I/O requests might be smoothed
over by an elevator algorithm somewhere, and you might fit more data
into the same time...

I'd say it calls for an experiment.

--
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: vtape, end of tape waste

2006-05-23 Thread Ross Vandegrift
On Tue, May 23, 2006 at 04:28:31PM -0400, Jon LaBadie wrote:
 But running out of disk space caused me to look more
 closely at the situation and I realized that the failed
 taping is left on the disk.  This of course mimics what
 happens on physical tape.  However, with the file: driver,
 if this failed and useless tape file were deleted,
 it would free up space for other data.

Our setups avoid this situation by having a dedicated chunk of holding
space.  I cut out 150-200GiB on each Amanda server just for holding
space.  That way, no dump ever fails because the data in holding was
using space in the vtape filesystem.
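
In amanda.conf that is just a holdingdisk block pointing somewhere
outside the vtape filesystem, something like:

holdingdisk hd1 {
    directory "/holding"
    use 150 Gb
}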

I know throwing more hardware/disk space at the problem isn't a
particularly interesting or clever solution, but I can vouch for the
fact that it works!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: windows vshadow

2006-05-16 Thread Ross Vandegrift
On Tue, May 16, 2006 at 10:05:22AM -0400, Jon LaBadie wrote:
 The vshadow.exe is only available for WinXP and 2003 server.
 So it is not a general solution, but maybe a start?

Is this related to their Volume Management Services?  IIRC, this
interacts with a background service like LVM or Veritas.  Lets you
create virtual volumes and the like.  Does it enable you to take
snapshots of existing filesystems without rebuilding them into managed
virtual volumes (or whatever the MS verbiage for these things is)?

For what it's worth, if it is the same thing I'm thinking of,
I've heard it works OK.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Would like to get list's impression on (amanda 4TB backup)

2006-05-16 Thread Ross Vandegrift
On Tue, May 16, 2006 at 11:17:34AM -0400, Jon LaBadie wrote:
 On Tue, May 16, 2006 at 09:53:13AM -0500, Gordon J. Mills III wrote:
   here are some of my comments about TSM
 
 incremental forever is a feature I've heard of in some other
 backup systems.  I wouldn't expect that to fit the mold of amanda
 as recovery would then truly demand amanda software and indexes
 to be present.

I've been thinking about a new vtape changer that might make it easy
to do this kind of incremental forever backup.

One issue that I run into: most of my systems have 18-20 virtual tapes.
runtapes is 2, but I really try hard to keep things at one tape.  We
need to keep two weeks of backups.

Suppose some event happens and someone needs to make an emergency
backup of what's on a server, right now.  They run amdump, it clears
the oldest tape.  If this happens too many times in a dumpcycle, oops,
we don't have our two weeks of backups.


I've been thinking about a tape changer that uses a timestamp for the
label.  No tape is ever reused.  When amdump runs, something creates a
new tape, labels it, and loads it.  The backup is done to this tape.

You then have a cronjob that runs nightly to cull old backups.  Give
it a timeframe, it deletes any tapes that are older than the
timeframe.

If you had a model like this, where tapes could stick around forever
if nothing deleted them, you'd just need a flag that a given timestamp
is a full backup and should never be thrown out.
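
As a very rough sketch of the cull step (assuming one directory per
timestamp-labeled vtape under /vtapes and a two-week window):

# find /vtapes -maxdepth 1 -type d -name 'vtape-2*' -mtime +14 -exec rm -rf {} \;
... (skipping anything flagged as a keep-forever full, and updating the
tapelist to match)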


 With many people doing amanda backups
 to vtapes, it would be nice to be able to archive desired
 parts to offsite ptapes.

Shouldn't this be easy if you plan to make your vtapes the same size
as your ptapes?

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Would like to get list's impression on (amanda 4TB backup)

2006-05-16 Thread Ross Vandegrift
On Tue, May 16, 2006 at 12:31:57PM -0400, Jon LaBadie wrote:
 On Tue, May 16, 2006 at 11:58:55AM -0400, Ross Vandegrift wrote:
  I've been thinking about a tape changer that uses a timestamp for the
  label.  No tape is ever reused.  When amdump runs, something creates a
  new tape, labels it, and loads it.  The backup is done to this tape.
 
 Couldn't this be done now with the autolabel feature and a V.large
 tapecycle that always called for a new tape?

Ooo!  That's exciting, I didn't know that there was such a thing as
autolabel.  Ironically, the first Google link when searching for
amanda autolabel is about TSM.  I checked the stuff on
www.amanda.org but I didn't find that option.  Could I get some more
info?

  Shouldn't this be easy if you plan to make your vtapes the same size
  as your ptapes?
 
 The logs+indexes might be a problem if it was going to a different config.
 And I was thinking of maybe selecting what I wanted, like all level 0's of
 a particular host:disk.  Or as of a specific date, one level 0 plus
 incrementals up to that date.

Oh that would be a cool tool.  It would solve a lot of problems for
smaller installations too.  Say I want to back up my workstation and
have the files accessible most of the time for easy restore.  However,
I do want occasional archival copies to tape/DVD/whatever.  

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Subset dumps?

2006-05-09 Thread Ross Vandegrift
Hello everyone,

Been a bad week for my backups.  Today, we had a mostly catastrophic
loss of a vtape filesystem, lost 14 of 18 vtapes on our busiest Amanda
server.  I'm just finishing picking up the pieces now and getting
things back on track.

With amdump I can:
1) Run dumps for an entire config: amdump master
2) Run dumps for a particular host: amdump hostname
3) Run dumps for a specific DLE: amdump hostname partition

Is there any way to run dumps for a subset of disklist entries?  For
example, is there some way I can do amdump [a-h]* ?

We want to make a lot of full backups to make up for what we've lost,
and so we can get back into incrementals ASAP.  Forcing everything to
full greatly outpaces our holding space.

It would have almost been easier to do full dumps that would fill our
holding disk until we finished all the hosts, but I couldn't figure out a
good way to accomplish that...


-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Subset dumps?

2006-05-09 Thread Ross Vandegrift
On Tue, May 09, 2006 at 11:35:45AM +0200, Paul Bijnens wrote:
 On 2006-05-09 09:34, Ross Vandegrift wrote:
 Been a bad week for my backups.  Today, we had a mostly catastrophic
 loss of a vtape filesystem, lost 14 of 18 vtapes on our busiest Amanda
 server.  I'm just finishing picking up the pieces now and getting
 things back on track.
[snip]
 For the future:  If the vtapes are really important, make it a
 RAID system, and/or use RAIT to mirror the Amanda backups to
 tape + disk or disk + external disk.

Heh.  The filesystem was a 2.0 TB LUN on an EMC CX300 SAN.  RAID, redundant
connections, the whole nine yards.  Looks like we either hit a bug in
the HBA driver or tripped some kind of Linux 2.4.5-esque filesystem
damage.  It won't even fsck.

 Note that the host and disk arguments are expressions
 and that you may specify many of them.

Somehow, after you stare at man pages long enough, you start to think
you know what's there.  For me, at least, I only *think* I know.

I've often found that asking questions on a mailing list is a great
way to set this straight!  Thanks for the tip Paul.  Funny how it was
staring me in the face this whole time!
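
So the subset run I was originally after is just something like (quote
the pattern so the shell doesn't expand it):

# amdump master '[a-h]*'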

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: Virutal server host

2006-05-07 Thread Ross Vandegrift
On Sun, May 07, 2006 at 12:07:10PM -0400, Jon LaBadie wrote:
 If all you are doing is adding a additional name, like amanda_server
 to an existing hostname/ip_address pairing I can see name confusion
 easily arising.
 
 But would that be true if you ran a second NIC in the server?  It should
 respond with the hostname associated with that IP.

The most important thing, as far as Amanda is concerned, is that
forward and reverse lookups are sensibly matched.  So your hostname
should resolve to an address whose reverse lookup gives a valid hostname
for the client/server.

All of our servers provide backups for two physically disparate
networks.  We manage this currently by creating entries in hosts files
for servers that need backups (uggg...), though I want to move it all
to DNS.

If you ever get confused about what entries should exist, the easiest
way to figure it out is to run amcheck and see what it complains
about!  Usually you'll get something descriptive like "10.0.0.2: No
such host", and then you'll know you need to configure a name.

 No second NIC, then what about additional IP addresses on the same NIC?
 Those should be associated with their own hostnames also.  Solaris, linux,
 and other unices I'm familiar with all support multiple IP addresses on
 a single NIC.

If you're concerned about portability, this is probably the easiest
way.  Bind a second IP to your server, set up name lookups for that
second IP, and then float it around to whatever box you need to be
migrating to at the time.
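
On Linux, for example, floating the extra address is a one-liner (the
address and interface are made up):

# ifconfig eth0:0 10.0.0.50 netmask 255.255.255.0 up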

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Heads up: RHELv4 U3 dump format issues

2006-05-07 Thread Ross Vandegrift
Hello everyone,

Just recovering from a panic attack, where I was unable to perform a
restore from dumps done on an RHEL v4 client.  Since others may see
this as well, I thought I'd drop a heads up.

You'll know you've seen this when amrecover complains that root is
missing from tape.  If you save the dump file from amrecover and look
at it with restore, you will get nothing useful.  You won't be able to
rebuild QFA data, you won't be able to get things off the tape.

If you have amrestore look through the tape (something like: amrestore
-f 0 -p file:/var/backups/tape01 doesnotexist > /dev/null), you'll
notice that the tape listing stops before it actually lists everything
there (easy for me to see on vtapes, heh).

What happened:

Some new version of dump must've gone into RHEL v4U3 which produces
dumps that make older versions of restore very unhappy.  I was able to
solve the issue by upgrading the server and the client to the newest
dump/restore.

In particular, Knoppix v4.0.2 (which I use for bare-metal restores) is
affected by this issue.  I'm not sure if this is a Red Hat-specific
bug, or if it affects all newer versions of dump for ext2/3.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: how to tell amanda, that it can use the full bandwith? - some general questions

2006-05-06 Thread Ross Vandegrift
On Sat, May 06, 2006 at 11:42:57PM +0200, Hans-Christian Armingeon wrote:
  Look into aspects other than netusage for performance problems.
 What could be a good place to look at? I have no link problems.

Check the output of amstatus when the run is completed.  At the
bottom, it includes a breakdown of what Amanda was waiting on when it
was waiting.  If you're seeing slow transfers of individual dumps, the
problem is more than likely related to:

1) Too many dumps at once are fighting for bandwidth
2) The client in question is too loaded/slow
3) The server is too slow

In general, I've seen dumping take longer than transferring.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: starttime question

2006-05-03 Thread Ross Vandegrift
On Wed, May 03, 2006 at 11:34:38AM -0400, Jon LaBadie wrote:
   Are the estimates for delayed DLEs also delayed?
   Or are they done at the beginning with all other DLEs?

I just took a look at the amdump logs for a server with some delayed
DLEs.  Estimates are performed along with everything else, and an
initial schedule is generated.

Interestingly, I noticed from reading the logs that the estimates take
long enough on this server that the estimate phase alone runs past the
starttime.  So in my case, it looks like the dumps aren't actually
delayed (heh, in fact, the delayed dump is third in the schedule of
~200 DLEs).


-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


DLEs with large numbers of files

2006-05-02 Thread Ross Vandegrift
Hello everyone,

I recognize that this isn't really related to Amanda, but I thought
I'd see if anyone has a good trick...

A number of DLEs in my Amanda configuration have a huge number of
small files (sometimes hardlinks and symlinks, sometimes just copies)
- oftentimes in the millions.  Of course this is a classic corner case, and
these DLEs can take a very long time to back up/restore.

Currently, they are mostly using dump (which will usually report
1-3MiB/s throughput).  Is there a possible performance advantage to using
tar instead?

On some of our installations I have bumped up the data timeouts.  I've
got one as high as 5400 seconds.  I suspect a reasonable maximum is
very installation dependent, but if anyone has thoughts, I'd love to
hear them.
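
For reference, the knob I mean is dtimeout in the server's amanda.conf,
e.g.:

dtimeout 5400    # seconds to wait for data from a client dumper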

Thanks for any ideas!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: amcheck error : can not execute ...

2006-04-18 Thread Ross Vandegrift
On Tue, Apr 18, 2006 at 06:39:46PM +0200, Thomas Ginestet wrote:
 The problem is that I ran apt from this client in order to install 
 amanda.  Do you think I can reconfigure Amanda ? (dpkg-reconfigure with 
 special options ?)

All you should have to do is apt-get install dump.  Debian doesn't
install dump by default.

Ross


Re: Defining a sensible backup routine

2006-04-17 Thread Ross Vandegrift
On Mon, Apr 17, 2006 at 12:15:07PM +0100, Anne Wilson wrote:
 The DLEs are set to a 4-day rotation, but in practice I 
 am seeing level 0 backups almost every other day, for all of them.
 
 It would seem reasonable, to me, based on the small amount of changes,
 to have a level0 weekly, for all of them, I think.

Hi Anne,

If you want a level 0 weekly, why not set your dumpcycle to 7 days?  This
basically tells Amanda to try to do a level 0 every week, with
incrementals in between.

Another thing you can do that helped me out a lot - change the bump
parameters!  I work with a fairly large Amanda setup where disk space
is usually at a premium.  I was able to save a lot of space by tuning
the bump parameters to do more higher-level incremental backups.

While the system previously wasted a lot of diskspace in the second
half of the cycle doing level 1 and 2 backups, now it usually bumps
every day, hitting level 5 or 6 by the end of the week.

Of course you take a hit on restore time, but restores are pretty
decent from virtual tape as it is, so I'm not too concerned about
there being five or six steps instead of two or three.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Changing vtape parameters in a live system?

2006-04-17 Thread Ross Vandegrift
Hello everyone,

Is it safe to change the length of a vtape definition on a
live system?  One of our systems was originally defined with 100GiB
vtapes, and I'd like to bump that size up to 200+GiB.

I guess if the answer is no I can always add a new tapetype
and manually rotate the old vtapes out, but I'm wondering if I can
just extend the length to make my life easier.
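
For context, the value in question is just the tapetype length, along
the lines of (the tapetype name here is only an example):

define tapetype HARD-DISK {
    length 200 gbytes
}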

Thanks!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: asking for real-life examples of Amanda implementations for upcoming book

2006-04-13 Thread Ross Vandegrift
Hi there, missed the original post, hope this is interesting:

 ---
 Name of your organization (optional, but highly desirable). If you
  can't tell the name of your employer, maybe you can tell what kind of
  organization that is, e.g. university, manufacturing company,
  financial services, government, etc

I'm not going to disclose my employer without permission, but we are
in the managed hosting industry.

 Any interesting Amanda stories you can tell.

 For how long have you been using Amanda? (I especially look for
  examples where people used Amanda for more than a year in production)

Amanda has been in use here for a number of years.  It was well
entrenched over a year ago when I arrived.

 What is your current version of Amanda?

Depends on the machine.  Debian woody is currently on 2.4.2p2, RHEL is
shipping 2.4.5 with RHEL4 (2.4.4 with RHEL3?).  We mostly keep to the
vendor-distributed versions on clients, though I am exploring the feasibility
of upgrading Debian woody machines.

On the servers, I build the software from the latest patch release.
We're running the latest 2.4.5, but looking to upgrade to 2.5.0 soon.

 What OS do you use for Amanda server?

RHEL 3 and 4.

 How many Amanda servers do you have?

We have four servers.  One uses a fibre-channel attached EMC SAN, the
other three use fibre-channel attached Apple XRaids.  All backups are
done to virtual tapes on disk.

 How many Amanda clients per server do you have? (I especially look for
 examples with 10+ clients)

It varies somewhat; the server on the EMC box manages dumps for 198
disks on about 75 hosts.  The Apple Xraids can handle around 100 disks
on 50 hosts.

 What operating systems do you backup?

RHEL, Debian, Suse, FreeBSD.

 What is total amount of data you backup?

Around 5TB, give or take some.

 Do you use dump or tar and why?

Both, depending on host.  Whenever possible we use dump as it's almost
always better at making filesystem images that can be restored under
any circumstances.

However, we have seen a number of host/filesystem combinations that
break dump.  Fortunately, on all of these, tar is a happy camper!

 How often do you do full backups?

Our dumpcycle is 7 days.

 I am very interested in examples of tape-free implementations of
  Amanda, where backups are done only to disk.

Are there any specific details you are interested in?  We have FC
attached storage for backup/holding space.  We create two weeks' worth
of virtual tapes and use chg-multi for the tape changer.

I considered moving to chg-disk after reading the latest
documentation, but I seem to recall that restores became more tedious
and so I stuck with chg-multi, which works great.

 How large is your holding disk?

We use 150GB.

 How often do you restore files? (on average and what are thy typical
 scenarios)

Rarely does a week go by, and very seldom a month, without a restore.

 Do you use compression and where (hardware, client or server)?

We use fast client compression to get more hosts done in less time.

 Do you use encryption and which one?

Not currently, but the upgrade to 2.5.0 is motivated exclusively by
encryption support.

 How do you backup Windows? (if you have any)

Not Amanda.  UltraBac is the most common setup.  Some clients are using
their own setup.

The inability to back up/restore security labels rules smbclient
backups out.

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


amoverview error codes

2006-03-05 Thread Ross Vandegrift
Hello everyone,

I'm a bit foggy on one case of error codes in the amoverview output.

The documentation says that E indicates an error, and that E followed
by a number indicates an error during a flush that was later corrected.  Cool.

I sometimes see errors that are of the form 1E and I can't figure
out what this would mean.  A level 1 dump was flushed, and then the
second time there was an error?  Doesn't make much sense to flush
twice, and I don't see this case in the documentation.

Any hints?


-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


Re: amrecover doesn't prompt for tape change?

2006-02-20 Thread Ross Vandegrift
On Mon, Feb 20, 2006 at 12:42:09PM +0100, Paul Bijnens wrote:
 In that case, you have already set the tape device, and you just have
 to physically insert the tape in that device before pressing Y.
 Or what is exactly the problem here?

We use virtual tapes on a RAID array - there's not really any way I can
physically change the tapes.

I figured I'd probably need to upgrade the client tools on those boxes
- they're running Debian woody, so, yea, prehistoric!

Ross


amrecover doesn't prompt for tape change?

2006-02-19 Thread Ross Vandegrift
Hello everyone,

When I do a local restore on a backup server with amrecover, I get
prompted to change tapes when it is required:

Extracting files using tape drive /dev/null on host linuxbackup1.
Load tape DailySet102 now
Continue [?/Y/n/t]?

I can then press t and set the tape correctly.


However, when I do a remote restore with amrecover, I get a different
prompt:

Load tape DailySet113 now
Continue? [Y/n]: 

This is fine for a restore that requires a single tape, since I can
use settape and then extract a second time.  However, restores
requiring multiple tapes will always fail, since I can't change tapes
in the middle of the restore.

I've verified that the remote server is correctly configured in
.amandahosts for access.


The only difference I can see between the two installations is that the
backup server runs 2.4.5, while the remote server runs 2.4.2p2.  Was
this feature added in between these versions?

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37