concerns

2002-11-04 Thread Galen Johnson
After further study of the restore issues I've had with amanda (mostly 
tar related), I have just today run into what I feel is another problem 
(which I suspect is a combination of tar and amanda in this case).  I am 
concerned with the "incremental" backups.  Amanda seems to treat 
incrementals as differentials in the way it labels them for later 
recovery when used in conjunction with tar (I have no idea how it works 
with dump).

The way I understand incrementals is that an incremental contains just the 
files changed since the last incremental or full.  A differential contains 
the files changed since the last full backup.  From what I've seen, the 
latter is how amanda treats its "incrementals": it only increments the 
"level" by 1 if certain criteria are met (which I'm fairly certain are 
definable in amanda.conf).

I hope I've made clear what I'm trying to point out.  The "incremental" 
as amanda deals with it is really being treated as a differential with 
regard to a restore.  Can anyone suggest a way to make this behavior a 
bit more in line with the accepted norms?  I was thinking of lowering 
the default amanda.conf bumpsize from 20 Mb to 1 Kb to see if that makes 
amanda produce truer incrementals.
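
Concretely, I was thinking of something along these lines in amanda.conf 
(bumpsize, bumpdays and bumpmult are the standard bump parameters; the 
values here are only a guess on my part):

  bumpsize 1 Kb    # bump to the next level as soon as it would save anything at all
  bumpdays 1       # stay only one day at a level before a bump is considered
  bumpmult 1       # don't require bigger savings for each further level

That should make the planner move from level 1 to 2, 3, ... almost every 
run instead of sitting at level 1 until the 20 Mb threshold is reached.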

Any thoughts on this matter are greatly appreciated.

=G=



Re: Performance degrading over time?

2002-11-04 Thread Anthony A. D. Talltree
>or adding 5 internal 36 GB drives in a RAID 0 configuration as a holding
>disk.

I see that IBM has SCSI disks available up to at least 146G.  I'd
recommend plunking in two of those instead so you have room to grow.




Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hi!

> > > nothing goes from holding disk to tape unless the dump to holding disk
> > > has finished.  Then it goes like cat'ting or dd'ing to tape.  Small
> > > likelihood of too slow for tape.
> > 
> > Agreed - cat/dd >/dev/tape will surely be fast enough.
> > But I need a holding disk at least as large as my largest FS to dump?
> > So if I have one 170 GB RAID I need one 170 GB holding disk?
> 
> Repeating,
> 
> > > nothing goes from holding disk to tape unless the dump to holding disk
> > > has finished.

OK. Understood - finally.

> > The customer won't like that ;-)
> 
> Doesn't have to be high performance drives.
> Cheap IDE drives are way fast enough.

How do you fit cheap IDE drives into a Sun Enterprise 3500?

Configuration of the machine:

1 internal 9 GB (system) drive
1 external Sun Storedge A1000 RAID enclosure - 170 GB net storage
1 external LTO drive

Oooops :-)))


Well, you can't argue against the facts - I'll suggest either
getting rid of the small "trace" files on a nightly basis
or disabling that "feature" altogether - or adding 5 internal
36 GB drives in a RAID 0 configuration as a holding disk.

Hmmm ... possibly some cheap PC-style external SCSI-to-IDE RAID
will do the trick just as well. Sun won't officially support it,
but they don't support the HP LTO, either.


Thanks to all, end of thread (hopefully)

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Adding new clients

2002-11-04 Thread Owain Pritchard
Hi, I am relatively new to Amanda but I have taken over 
responsibility for doing the backups.

I have just created a new configuration to run monthly backups; its 
folder contains copies of the files, such as amanda.conf, from the 
other config folder used for the daily backups.

I have added a new client to the disklist and its access password 
to the amandapass file.

When I run amcheck on the new configuration, the following error 
appears:

"ERROR: info file 
/var/lib/amanda/BackupMonth/curinfo/Neli/__nlserver_d$/info: not 
readable"

This only happens when the new client is in the disklist.

Have I forgotten some other step in setting up new clients?

Thank you



Re: Performance degrading over time?

2002-11-04 Thread Niall O Broin
On Mon, Nov 04, 2002 at 03:55:13PM +0100, Patrick M. Hausen wrote:

> But I need a holding disk at least as large as my largest FS to dump?
> So if I have one 170 GB RAID I need one 170 GB holding disk?

No - you need a holding disk preferably as big as your TWO largest disklist
entries, which may be <= your largest FS. From "Using Amanda":


] Ideally, there should be enough holding disk space for the two largest
] backup images simultaneously, so one image can be coming into the holding
] disk while the other is being written to tape. If that is not practical, any
] amount that holds at least a few of the smaller images helps. 

> The customer won't like that ;-)

Customers rarely like anything which costs them money :-)

> Or is this what the chunksize parameter is for - taper will start when
> the first chunk is completely written to the holding disk?

No - chunksize merely specifies how big the chunks are that the backup image
is split into on the holding disk. It's really only needed on older OSes which
have a relatively small limit (often 2 GB) on the size of a single file.
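
For reference, the holding disk itself is declared in amanda.conf with a
block roughly like the following (the path and sizes are placeholders,
obviously):

  holdingdisk hd1 {
      directory "/dumps/amanda"   # where the images are spooled
      use 160 Gb                  # how much of that disk amanda may use
      chunksize 1 Gb              # split each image into 1 GB files on the holding disk
  }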

> In this case I'm sure a holding disk will speed up things quite a
> bit even in my "pathological" case of only one big FS.

No - a holding disk will only help you at all if it's at least as big as
your two smallest disklist entries.




Kindest regards,



Niall  O Broin




Re: Performance degrading over time?

2002-11-04 Thread Frank Smith
--On Monday, November 04, 2002 15:04:27 +0100 "Patrick M. Hausen" <[EMAIL PROTECTED]> wrote:


Paul Bijnens wrote:


Patrick M. Hausen wrote:
>
> Seems like Oracle likes to create a lot of small ".trc" files over
> time. The filesystem in question is littered with thousands of them.
>
> Once we archived and deleted them, backup performance was back to normal.
>
> Would a separate holding disk (we don't use one at all at the moment)
> help in a configuration like this? Additionally I'd suggest deleting

Probably, then your tape drive can keep streaming (could halve your
backup time!), and amanda can do much more in parallel than it can
without a holding disk (another doubling or more if you have many
clients).


Precisely. In all multi-client installations I run in my own network
I have holding-disks, so dumps can be run in parallel and output
buffered. But the machine in question is a one-server-client
installation that only backs up itself. And in addition it mainly backs
up one single file system.

So my question is: does a holding-disk speed up this process? I mean,
Amanda will start a dumper on the filesystem that starts filling
the holding-disk. At the same time (?) a taper will start writing
the holding-disk's contents to tape. Now imagine the dumper getting
too slow for the tape ... the holding-disk won't be filled quickly
enough either. Then there are a lot of seeks on the holding-disk
itself, if it's read and written at the same time.

Or is the mental picture I have about Amanda's operation incorrect?


What should happen is that the dumper will write the entire dump to the
holding disk (either in one piece or in N chunksize pieces if you
specify chunksize) and then, when it is done, stream the entire image
to tape.
If your holding disk is smaller than your dump then it will be
bypassed and you will be dumping directly to tape, with the pauses
and reseeks you are seeing now.

Frank



Thanks,

Patrick M. Hausen
Technical Director
--
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de




--
Frank Smith[EMAIL PROTECTED]
Systems Administrator Voice: 512-374-4673
Hoover's Online Fax: 512-374-4501



Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hello!

Jon H. LaBadie wrote:

> On Mon, Nov 04, 2002 at 03:04:27PM +0100, Patrick M. Hausen wrote:
> > 
> > So my question is: does a holding-disk speed up this process? I mean,
> > Amanda will start a dumper on the filesystem that starts filling
> > the holding-disk. At the same time (?) a taper will start writing
> > the holding-disk's contents to tape. Now imagine the dumper getting
> > too slow for the tape ... the holding-disk won't be filled quickly
> > enough either. Then there are a lot of seeks on the holding-disk
> > itself, if it's read and written at the same time.
> > 
> > Or is the mental picture I have about Amanda's operation incorrect?
> 
> yes, incorrect.
> 
> nothing goes from holding disk to tape unless the dump to holding disk
> has finished.  Then it goes like cat'ting or dd'ing to tape.  Small
> > > likelihood of too slow for tape.

Agreed - cat/dd >/dev/tape will surely be fast enough.
But I need a holding disk at least as large as my largest FS to dump?
So if I have one 170 GB RAID I need one 170 GB holding disk?

The customer won't like that ;-)

Or is this what the chunksize parameter is for - taper will start when
the first chunk is completely written to the holding disk?

In this case I'm sure a holding disk will speed up things quite a
bit even in my "pathological" case of only one big FS.
A little bit of tape stop-and-go between chunks won't hurt as much
as the current configuration does.


Thanks,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Restore Problem

2002-11-04 Thread Joshua Baker-LePain
On Sun, 3 Nov 2002 at 3:11pm, Bill Hults wrote:

> The tape server's name is bs1 which is where I want to restore the files to.
> I want to restore 2 partitions on fs2.
> The set is DailySet1
> All info for bs1 -
> There is a listing in /var/lib/amanda/DailySet1/curinfo for fs2 but not in
> '../index'. There are listings in ../index for all the other machines that
> are backed up.

Is indexing turned on for fs2?  If not, you'll need to use amrestore, not 
amrecover.

> When I run 'amrecover DailySet1 -t /dev/st0' I get a 'No index records for
> host bs1'. It also trys the FQDN.

'sethost fs2'

but that won't work if there are no index records.

> I can run 'amadmin DailySet1 find fs2' & get a listing for both partitions
> and the level 0 & level 1 tapes I would need.

Then use amrestore to get those off the tapes and restore 'em by hand.
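
Roughly like this, where the non-rewinding device name and the tape file
number are only examples (take the real file number from the amadmin output):

  mt -f /dev/nst0 rewind
  mt -f /dev/nst0 fsf 3                        # skip to the file holding the fs2 image
  amrestore -p /dev/nst0 fs2 | restore -ivf -  # pipe the image straight into restore

If the image was made with GNU tar rather than dump, pipe it into
'tar -xpvf -' instead of restore.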

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Paul Bijnens wrote:

> Patrick M. Hausen wrote:
> > 
> > Seems like Oracle likes to create a lot of small ".trc" files over
> > time. The filesystem in question is littered with thousands of them.
> > 
> > Once we archived and deleted them, backup performance was back to normal.
> > 
> > Would a separate holding disk (we don't use one at all at the moment)
> > help in a configuration like this? Additionally I'd suggest deleting
> 
> Probably, then your tape drive can keep streaming (could halve your
> backup time!), and amanda can do much more in parallel than it can
> without a holding disk (another doubling or more if you have many
> clients).

Precisely. In all multi-client installations I run in my own network
I have holding-disks, so dumps can be run in parallel and output
buffered. But the machine in question is a one-server-client
installation that only backs up itself. And in addition it mainly backs
up one single file system.

So my question is: does a holding-disk speed up this process? I mean,
Amanda will start a dumper on the filesystem that starts filling
the holding-disk. At the same time (?) a taper will start writing
the holding-disk's contents to tape. Now imagine the dumper getting
too slow for the tape ... the holding-disk won't be filled quickly
enough either. Then there are a lot of seeks on the holding-disk
itself, if it's read and written at the same time.

Or is the mental picture I have about Amanda's operation incorrect?


Thanks,

Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Performance degrading over time?

2002-11-04 Thread Patrick M. Hausen
Hi all!

> Since your dumper and taper times are always nearly identical you probably
> aren't using a holding disk and are dumping directly to tape.  And since
> the rates for one filesystem have remained constant while the other one has
> dropped I would look into possible recent changes on the larger (slower)
> filesystem. Try doing a dump to /dev/null and see how fast (or slow) that is.
> If data isn't fed to the tape fast enough the tape drive has to stop and
> reposition itself every time it runs out of data, which will slow it down
> considerably.
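
(For the record, a read-only test along these lines shows what the
filesystem by itself can deliver, without the tape in the picture -
/u01 is just a stand-in for the Oracle filesystem:)

  time ufsdump 0f - /u01 > /dev/null       # Solaris; plain 'dump' on other systems
  # for a GNU tar disklist entry, pipe through cat so tar really reads the data:
  time tar cf - /u01 | cat > /dev/null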

We finally found the culprit, but still have to decide what to do about it.

Seems like Oracle likes to create a lot of small ".trc" files over
time. The filesystem in question is littered with thousands of them.

Once we archived and deleted them, backup performance was back to normal.

Would a separate holding disk (we don't use one at all at the moment)
help in a configuration like this? Additionally I'd suggest deleting
the trace (?) files in a nightly run - seems like nobody needs
them, anyway.
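
Something like this from root's crontab would probably do it - the path
and the seven-day cutoff are pure guesses for this setup:

  # nightly at 02:30: remove Oracle trace files older than a week
  30 2 * * * find /u01/app/oracle/admin -name '*.trc' -mtime +7 -exec rm -f {} \;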

Thanks for your help,
Patrick M. Hausen
Technical Director
-- 
punkt.de GmbH Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a Tel. 0721 9109 -0 Fax: -100
76135 Karlsruhe   http://punkt.de



Re: Strange problems with HP Colorado 20GB

2002-11-04 Thread Per Lundberg
Christoph Scheeder wrote:

Hey Christoph (& list),


In my experience the original HP Colorado drives do not work reliably 
under the newer Linux kernels. My drives stopped working in the early 2.2.x tree.

:-(


They seemed to back up fine using the scsi-emulation, but restore was 
impossible.

That seems to take away a bit of the idea of making backups... :-)


Try whether you can put valid data on it using tar or dd, and whether you 
can get it back.

I can't.


If not, I would suggest getting a drive other than an HP Colorado.


Thanks, I think that'll be the best long-term solution. I'll try putting 
the Colorado device in a Windows machine instead and see if I can get it 
to work.
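
For the record, the kind of round-trip test meant here looks roughly like
this (the /dev/nst0 device name is whatever the scsi-emulation exposes on
your system):

  dd if=/dev/urandom of=/tmp/testdata bs=1k count=10240   # 10 MB of test data
  dd if=/tmp/testdata of=/dev/nst0 bs=32k                 # write it to the tape
  mt -f /dev/nst0 rewind
  dd if=/dev/nst0 of=/tmp/testdata.back bs=32k            # read it back
  cmp /tmp/testdata /tmp/testdata.back                    # silence means the data survived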

--
Best regards,

Per Lundberg / CAPIO AB
Phone: +46-18-4186040
Fax: +46-18-4186049
Web: http://www.capio.com



