Re: Cannot restore XFS partitions with Amanda 2.4.5

2006-05-23 Thread Jean-Francois Malouin
* James Lamanna [EMAIL PROTECTED] [20060522 21:52]:
 Hi.
 I have a couple of XFS + TAR partitions that I'm backing up onto tape.
 There are no errors thrown when backing up, however whenever I try to
 amrestore one of the XFS partition dump files I get the following:
 
 # amrestore /dev/nst0
 amrestore:   1: restoring fs0.fs0-users.20060522.0
 amrestore: read error: Input/output error
 gzip: stdin: unexpected end of file
 
 now if I skip that partition (with mt -f /dev/nst0 fsf 1), I can
 amrestore the TAR partitions fine (until I get to the other XFS
 partition).
 
 This is with Amanda 2.4.5.
 dumptype is
 index
 compress client best
 
 Any ideas what might be happening here?

Do the basic tests with the basic tools, dd, mt and xfsrestore:
position the tape head, and see what xfsrestore tells you.

mt -f /dev/nst0 fsf xxx
dd if=/dev/nst0 count=1 bs=32k

and if you're at the right place, reposition the tape head and do an
interactive xfsrestore (I don't know how you compressed, so just squeeze a
gzip pipe in between dd and xfsrestore below; the exact command should be
given by the header you read in the dd step above):

mt -f /dev/nst0 status
mt -f /dev/nst0 fsf xxx
dd if=/dev/nst0 bs=32k skip=1 | xfsrestore -i - .
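
For example, assuming the header read back by dd shows the dump was
client-compressed with gzip (which the "compress client best" dumptype
suggests), the full pipeline might look like this, the fsf count still
being a placeholder:

mt -f /dev/nst0 fsf xxx
dd if=/dev/nst0 bs=32k skip=1 | gzip -dc | xfsrestore -i - .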

If xfsrestore is happy, try to list the archive content and see if you
can restore some files/dirs.

The debug output from your amrestore attempt should give you a clue as
to what's going wrong.

hth
jf

 
 Thanks.
 
 -- James

-- 
Commitment to quality is our number one gaol.


Re: Amverifyrun not working in 2.5.0

2006-05-23 Thread Jean-Louis Martineau

Gordon J. Mills III wrote:

It did work in 2.4.5. I have 2 other servers that are running 2.4.5
successfully (as this one was before yesterday). I used this one as a test
since it is not in production yet, but will be soon.

I compared the versions of the script and they are identical. It is the
amdump.1 file that appears to have changed and now doesn't contain the lines
that the script is grepping for.
  

Gordon,

Yes, it's the line from amdump.1 that changed.

Jean-Louis


Parallel dumps of a single file system?

2006-05-23 Thread Paul Lussier
Hi all,

I have a 1 TB RAID5 array which I inherited.  The previous admin configured
it to be a single file system as well.  The disklist I have set up currently
splits this file system up into multiple DLEs for backup purposes and dumps
them using gtar.

In the past, on systems with multiple partitions, I would configure all file
systems on different physical drives to be backed up in parallel.  Since this
system has but 1 file system, I've been backing them up one at a time.
But since this is a RAID array, it really wouldn't matter whether this were
many file systems or 1, since everything is RAIDed out across all disks,
would it?

So, my question is this: Am I doing the right thing by dumping these DLEs
serially, or can I dump them in parallel?

For example, I have my user directories split out like this in the disklist
file:

space-monster:/u1/user
space-monster:/u1/user/ad
space-monster:/u1/user/eh
space-monster:/u1/user/il
space-monster:/u1/user/mp
space-monster:/u1/user/qt
space-monster:/u1/user/uz

So, does it matter whether I have a RAID array with 1 or 23 file systems on
it?  Am I going about this the correct way, or can I use some parallelism?

Thanks,
--Paul


Re: Parallel dumps of a single file system?

2006-05-23 Thread Andreas Hallmann

Paul Lussier wrote:


Hi all,

I have a 1 TB RAID5 array which I inherited.  The previous admin 
configured it to be a single file system as well.  The disklist I have 
set up currently splits this file system up into multiple DLEs for 
backup purposes and dumps them using gtar.


In the past, on systems with multiple partitions, I would configure 
all file systems on different physical drives to be backed up in 
parallel.  Since this system has but 1 file system, I've been backing 
them up one at a time.
But since this is a RAID array, it really wouldn't matter whether this 
were many file systems or 1, since everything is RAIDed out across all 
disks, would it?


There is nothing to RAID out.  Avoiding spindle movement is the key to both
longer disk lifetime and better performance.
Since in the RAID the blocks are spread sequentially (w.r.t. the file) among
most of the available platters (RAID5), it will behave more like a single
spindle with more layers.


So, my question is this: Am I doing the right thing by dumping these 
DLEs serially, or can I dump them in parallel?


Dumping these DLEs sequentially is your only option to keep spindle
movement low, so you're doing it the way I would do it.

Anything else should reduce your throughput.

Andreas


Re: Parallel dumps of a single file system?

2006-05-23 Thread Paul Lussier
On 5/23/06, Andreas Hallmann [EMAIL PROTECTED] wrote:
 Since in the RAID the blocks are spread sequentially (w.r.t. the file) among
 most of the available platters (RAID5), it will behave more like a single
 spindle with more layers.

  So, my question is this: Am I doing the right thing by dumping these
  DLEs serially, or can I dump them in parallel?

 Dumping these DLEs sequentially is your only option to keep spindle
 movement low, so you're doing it the way I would do it.
 Anything else should reduce your throughput.

Does that imply that if this RAID set were split into multiple file systems,
I'd still be better off dumping them one at a time?

I'm looking for ways to speed up my daily incremental backups.  We may well be
purchasing a new RAID array in the near future, which may allow me to migrate
the existing data to it and split it up into multiple file systems, then go
back and re-format the old one.

Thanks,
--Paul


Re: Cannot restore XFS partitions with Amanda 2.4.5

2006-05-23 Thread James Lamanna

On 5/23/06, Jean-Francois Malouin
[EMAIL PROTECTED] wrote:

* James Lamanna [EMAIL PROTECTED] [20060522 21:52]:
 Hi.
 I have a couple of XFS + TAR partitions that I'm backing up onto tape.
 There are no errors thrown when backing up, however whenever I try to
 amrestore one of the XFS partition dump files I get the following:

 # amrestore /dev/nst0
 amrestore:   1: restoring fs0.fs0-users.20060522.0
 amrestore: read error: Input/output error
 gzip: stdin: unexpected end of file

 now if I skip that partition (with mt -f /dev/nst0 fsf 1), I can
 amrestore the TAR partitions fine (until I get to the other XFS
 partition).

 This is with Amanda 2.4.5.
 dumptype is
 index
 compress client best

 Any ideas what might be happening here?

Do the basic tests with the basic tools, dd, mt and xfsrestore:
position the tape head, and see what xfsrestore tells you.

mt -f /dev/nst0 fsf xxx
dd if=/dev/nst0 count=1 bs=32k

and if you're at the right place, reposition the tape head and do an
interactive xfsrestore (I don't know how you compressed, so just squeeze a
gzip pipe in between dd and xfsrestore below; the exact command should be
given by the header you read in the dd step above):

mt -f /dev/nst0 status
mt -f /dev/nst0 fsf xxx
dd if=/dev/nst0 bs=32k skip=1 | xfsrestore -i - .

If xfsrestore is happy, try to list the archive content and see if you
can restore some files/dirs.

The debug output from your amrestore attempt should give you a clue as
to what's going wrong.



Using dd gives the same input/output errors.
xfsrestore complains of a premature EOF (since dd/amrestore aren't
reading the whole backup for some reason).
Unfortunately nothing is in dmesg or any other log file as to why I
get input/output errors _only_ on the XFS partition dump files.

-- James




hth
jf


 Thanks.

 -- James

--
Commitment to quality is our number one gaol.





Re: Parallel dumps of a single file system?

2006-05-23 Thread Jon LaBadie
On Tue, May 23, 2006 at 10:39:09AM -0400, Paul Lussier wrote:
 On 5/23/06, Andreas Hallmann [EMAIL PROTECTED] wrote:
 
 Since in the raid blocks are spreed sequentially (w.r.t the file) among
 most (raid5) of the avail platters, it will behave more like a single
 spindle with more layers.
 
  So, my question is this: Am I doing the right thing by dumping these
  DLEs serially, or can I dump them in parallel?
 
 Dumping this DLEs sequentially is your only option to keep spindle
 movements low. So your doing it the way I would do it.
 Anything else should reduce your throughput.
 
 
 I'm looking for ways to speed up my daily incremental backups.  We may well
 be purchasing a new RAID array in the near future.  Which may allow me to
 migrate the existing data to it and split it up into multiple file systems,
 then go back and re-format the old one.

This is something you could easily try in stages.  For example,
suppose you have 8 DLEs defined on that raid, all as spindle 1.
Redefine 2-4 of them as spindle 2.  Also make sure your client
is not dumper limited.  If you see significant improvement,
define a few more DLEs as spindle 3 etc.
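
As a concrete sketch of that first stage, using a few of Paul's DLE names and
an assumed dumptype called user-tar (disklist format: host, disk, dumptype,
spindle):

space-monster  /u1/user/ad  user-tar  1
space-monster  /u1/user/eh  user-tar  1
space-monster  /u1/user/il  user-tar  2
space-monster  /u1/user/mp  user-tar  2

DLEs that share a spindle number on a host are dumped one at a time; DLEs on
different spindles may be dumped in parallel, up to the client's maxdumps.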

Recently I had a look at amplot results for my new vtape setup.
One thing it showed was that for 2/3 of the time, only one of the
default four dumpers was active.  I changed the default number of
dumpers to six and changed the number of simultaneous dumps per
client from one to two.  My total backup time dropped from over
four hours to one and a half hours.
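
In amanda.conf terms that change amounts to something like this sketch
(inparallel and maxdumps are the standard options; the values are the ones
from the run described above):

inparallel 6   # total number of dumpers the server may run at once
maxdumps 2     # simultaneous dumps allowed per client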

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Parallel dumps of a single file system?

2006-05-23 Thread Ross Vandegrift
On Tue, May 23, 2006 at 11:04:44AM -0400, Jon LaBadie wrote:
 Recently I had a look at amplot results for my new vtape setup.
 One thing it showed was that for 2/3 of the time, only one of the
 default four dumpers was active.

This is a good point.  amplot is awesome for checking out what kind of
stuff is slowing down your backups!  Also check the output at the end
of amstatus when a run is finished.  It'll give you a summary of the
same information.  But there's nothing like a cool graph!

As far as the original poster's question: I think you should try it
out.  Whether it's a performance win or loss is going to depend
heavily on how the data has ended up across those disks.

Your RAID5 performance is always dominated by the time it takes to
seek for data.  If all n disks can just stream for a while, you
get full streaming performance from the disks.  But if even one of
them needs to seek to find its blocks, you're going to have to wait
until that disk finishes.

This makes me think that in most cases, dumping a big RAID5 in
parallel would hurt performance.  However, if your array is old, it
may be highly fragmented.  The extra I/O requests might be smoothed
over by an elevator algorithm somewhere, and you might fit more data
into the same time...

I'd say it calls for an experiment.

--
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


holding disk problems

2006-05-23 Thread Karsten Fuhrmann

Hello list,
I installed a new holding disk in my system and copied all the files
from the old holding disks to the new one.
When I start an amflush now to flush them to tape, Amanda gives me
the following results:


---- snip ----
HOSTNAME   DISK                        L  ORIG-MB  OUT-MB  COMP%  MMM:SS    KB/s  MMM:SS     KB/s
--------------------------------------------------------------------------------------------------
darkstar.c /boot                       1       41      41     --     N/A     N/A    0:03  13502.0
darkstar.c /disk1/Ink_and_Paint        1       19      19     --     N/A     N/A    0:03   5867.4
darkstar.c /disk1/LAURA_XMAS_SPECIAL   1     1265    1265     --     N/A     N/A    1:06  19586.2
darkstar.c /disk1/admin                NO FILE TO FLUSH ------------------------------------------
darkstar.c /disk1/olli                 NO FILE TO FLUSH ------------------------------------------
---- snap ----

All these 'NO FILE TO FLUSH' lines are holding files from the former  
holding disks.


In the amflush.1 file I found the following error messages:

size_holding_files: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directory
stat /raid/15/2006051300/stern.cartoon-film.de._usr.4.1: No such file or directory
build_diskspace: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directory
size_holding_files: open of /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1 failed: No such file or directory
stat /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1: No such file or directory
build_diskspace: open of /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1 failed: No such file or directory


The filenames in this logfile are wrong: Amanda still uses the old
holding disk paths.


Is there a way to tell Amanda that I moved the holding files
somewhere else?  I checked the curinfo files but didn't find any hint
of where this information is stored!  At least the holding disk path isn't
there in cleartext!


Greetings,
Karsten Fuhrmann

Cartoon-Film Thilo Rothkirch
System Administration
phone:  +49 30 698084-109
fax: +49 30 698084-29
email: [EMAIL PROTECTED]





Re: holding disk problems

2006-05-23 Thread Guy Dallaire
2006/5/23, Karsten Fuhrmann [EMAIL PROTECTED]:
Hello list,i installed a new holding disk in my system and copied all the filesof the old holding disks to the new one.Did you mount the new holding disk at the same mount point that the previous one ?
All these 'NO FILE TO FLUSH' lines are holding files from the formerholding disks.
In the amflush.1 file i found the following error messages :size_holding_files: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directorystat /raid/15/2006051300/stern.cartoon-
film.de._usr.4.1: No suchfile or directorybuild_diskspace: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directorysize_holding_files: open of /raid/15/2006051300/stern.cartoon-
film.de._raid_13_LINETEST.0.1 failed: No such file or directorystat /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1: No such file or directorybuild_diskspace: open of /raid/15/2006051300/stern.cartoon-
film.de._raid_13_LINETEST.0.1 failed: No such file or directoryThe filename in this logfile are wrong, but amanda still uses the oldholding disk names.Is there a way to tell amanda that i moved the holding files
somewhere else ? I checked the curinfo files but didnt found any hintwhere this information is stored! At least the holding disk path isntthere in cleartext!I would make a symbolic link from the old holding disk location to the new one and try to re-run amdump. 
Ex: ln -s /you_new/holding_disk_directory /your_old_holding_disk_directoryAfter running amdump, the new amanda runs should use the new holding disk 


Re: holding disk problems

2006-05-23 Thread Jon LaBadie
On Tue, May 23, 2006 at 12:29:13PM -0400, Guy Dallaire wrote:
 2006/5/23, Karsten Fuhrmann [EMAIL PROTECTED]:
 
 Hello list,
 i installed a new holding disk in my system and copied all the files
 of the old holding disks to the new one.
 
 
 Did you mount the new holding disk at the same mount point that the previous
 one ?
 
 All these 'NO FILE TO FLUSH' lines are holding files from the former
 holding disks.
 
 In the amflush.1 file i found the following error messages :
 
 size_holding_files: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directory
 stat /raid/15/2006051300/stern.cartoon-film.de._usr.4.1: No such file or directory
 build_diskspace: open of /raid/15/2006051300/stern.cartoon-film.de._usr.4.1 failed: No such file or directory
 size_holding_files: open of /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1 failed: No such file or directory
 stat /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1: No such file or directory
 build_diskspace: open of /raid/15/2006051300/stern.cartoon-film.de._raid_13_LINETEST.0.1 failed: No such file or directory
 
 The filename in this logfile are wrong, but amanda still uses the old
 holding disk names.
 
 Is there a way to tell amanda that i moved the holding files
 somewhere else ? I checked the curinfo files but didnt found any hint
 where this information is stored! At least the holding disk path isnt
 there in cleartext!
 
 
 I would make a symbolic link from the old holding disk location to the new
 one and try to re-run amdump.
 
 Ex: ln -s /you_new/holding_disk_directory /your_old_holding_disk_directory
 
 After running amdump, the new amanda runs should use the new holding disk

My suggestion was going to be similar to Guy's.

I'd consider adding your old holding disk location to amanda.conf
as a second holding disk.  Then make same-name symbolic links to
the current location of the files and run amflush.

If that is successful, remove the second holding disk definition.

I'm guessing the original location of the files is in the dump logs.
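
A sketch of what that might look like; the old path /raid/15 comes from the
log excerpt above, while the new holding location and the holdingdisk name
are only placeholders:

# amanda.conf: temporarily declare the old location as a second holding disk
holdingdisk old {
    comment "only until the stranded files are flushed"
    directory "/raid/15"
}

# then recreate the per-run directory at the old path as a symlink to where
# the files actually live now, and run amflush
mkdir -p /raid/15
ln -s /new/holding/2006051300 /raid/15/2006051300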

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


tapetape (not in faq-o-matic)

2006-05-23 Thread Brian Cuttler

Does anyone have the tape type for the LTO3 (Quantum) ?

Are there any other parameters I should tweak to get better
performance/utilization ?

This is still a reasonable default ?

tapebufs 20

I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
under Solaris 9 with 4 gig of memory.

thank you,

Brian
---
   Brian R Cuttler [EMAIL PROTECTED]
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773



Re: tapetape (not in faq-o-matic)

2006-05-23 Thread Pavel Pragin

Brian Cuttler wrote:


Does anyone have the tape type for the LTO3 (Quantum) ?

Are there any other parameters I should tweak to get better
performance/utilization ?

This is still a reasonable default ?

tapebufs 20

I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
under Solaris 9 with 4 gig of memory.

thank you,

Brian
---
  Brian R Cuttler [EMAIL PROTECTED]
  Computer Systems Support(v) 518 486-1697
  Wadsworth Center(f) 518 473-6384
  NYS Department of HealthHelp Desk 518 473-0773

 


Try using this command to determine the tapetype:
amtapetype -f /dev/nst0   (the device name will be different on Solaris, I think)



vtape, end of tape waste

2006-05-23 Thread Jon LaBadie
I'm running vtapes on a new server.  The vtapes
are split across two external disk drives.

Realizing that some tapes would not fill completely,
I decided that rather than define the tape size to be
exactly disk/N, I would add a fudge factor to the size.
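
For example (the numbers and the tapetype name are invented for illustration):
with ten vtapes on a 400 GB disk, rather than length 40 gbytes each, one might
define something like

define tapetype HARD-DISK {
    comment "vtape sized at disk/N plus roughly 10% fudge"
    length 44 gbytes
}

so the slack left by vtapes that don't fill isn't simply wasted.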

Things worked exactly as anticipated.  The install has
now reached beyond the last tape, which ran out of disk
space just before it ran out of that tape's length.
And as usual, the taping restarted on the next vtape
on the disk with remaining space.

But running out of disk space caused me to look more
closely at the situation and I realized that the failed
taping is left on the disk.  This of course mimics what
happens on physical tape.  However, with the file: driver,
if this failed and now-useless tape file were deleted,
it would free up space for other data.

Curious, I added up all the sizes of the failed, partially
taped dumps.  They totalled 46+ GB.  That is substantially
more than I dump daily.

Has anyone addressed this situation?

Before you ask, no I've not gone to tape-spanning (yet).

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: tapetape (not in faq-o-matic)

2006-05-23 Thread Joshua Baker-LePain

On Tue, 23 May 2006 at 3:55pm, Brian Cuttler wrote


Does anyone have the tape type for the LTO3 (Quantum) ?


This is what I use:

define tapetype LTO3comp {
# All values guesswork :)  jlb, 8/31/05
# except blocksize ;) jlb, 9/15/05
length 42 mbytes
blocksize 2048
filemark 5577 kbytes
speed 6 kps
lbl-templ 3hole.ps
}

I leave hardware compression on (why not, with LTO), but most of my data 
isn't all that compressible.  I get much better speed with the blocksize 
above (2MiB) rather than amanda's default of 64K.  I determined that by 
testing raw write speed to the tape with tar and various blocksizes.
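
Something like the following timed tar writes straight to the drive would
reproduce that kind of test (device, test directory, and blocking factors are
only examples; GNU tar's -b is in 512-byte records, so 128 is 64 KiB and 4096
is 2 MiB):

mt -f /dev/nst0 rewind
time tar -cf /dev/nst0 -b 128 /some/test/data    # 64 KiB blocks
mt -f /dev/nst0 rewind
time tar -cf /dev/nst0 -b 4096 /some/test/data   # 2 MiB blocks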



Are there any other parameters I should tweak to get better
performance/utilization ?

This is still a reasonable default ?

tapebufs 20


On Linux at least, with the blocksize above I had to dial back tapebufs to 
15 or I got this warning in my nightly emails:


  taper: attach_buffers: (20 tapebufs: 41947616 bytes) Invalid argument
  taper: attach_buffers: (19 tapebufs: 39850440 bytes) Invalid argument
  taper: attach_buffers: (18 tapebufs: 37753264 bytes) Invalid argument
  taper: attach_buffers: (17 tapebufs: 35656088 bytes) Invalid argument
  taper: attach_buffers: (16 tapebufs: 33558912 bytes) Invalid argument


I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
under Solaris 9 with 4 gig of memory.


My usual warning with LTO3 is to make sure that your disks can keep up 
with your tape.  Yes, you read that right.  Especially with amanda dumping 
to holding disk while trying to write to tape, it's tough to feed LTO3 as 
fast as it wants to be fed (80MB/s native write speed).  LTO3 can throttle 
back to half that without shoeshining the drive, but you don't want to see 
your write speeds below that.


--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Re: tapetape (not in faq-o-matic)

2006-05-23 Thread Jon LaBadie
On Tue, May 23, 2006 at 01:22:02PM -0700, Pavel Pragin wrote:
 Brian Cuttler wrote:
 
 Does anyone have the tape type for the LTO3 (Quantum) ?
 
 Are there any other parameters I should tweak to get better
 performance/utilization ?
 
 This is still a reasonable default ?
 
 tapebufs 20
 
 I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
 under Solaris 9 with 4 gig of memory.
 
 
 Try using this command to determine tapetype:
 amtapetype -f /dev/nst0   (/dev/nst0) will be diff for solaris i think
 

A tapetype run on an lto-3 drive without a good estimate option
might take about 14 days to complete :(
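
For anyone who does want to run it against LTO-3, a hedged example with an
estimate supplied so the run stays manageable (the -e size, the typename, and
the Solaris device name are all assumptions):

amtapetype -e 400g -t LTO3 -f /dev/rmt/0cbn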

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: tapetape (not in faq-o-matic)

2006-05-23 Thread Pavel Pragin

Jon LaBadie wrote:

 On Tue, May 23, 2006 at 01:22:02PM -0700, Pavel Pragin wrote:

  Brian Cuttler wrote:

   Does anyone have the tape type for the LTO3 (Quantum) ?

   Are there any other parameters I should tweak to get better
   performance/utilization ?

   This is still a reasonable default ?

   tapebufs 20

   I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
   under Solaris 9 with 4 gig of memory.

  Try using this command to determine tapetype:
  amtapetype -f /dev/nst0   (/dev/nst0) will be diff for solaris i think

 A tapetype run on an lto-3 drive without a good estimate option
 might take about 14 days to complete :(

at least it will be accurate ;-)



Re: vtape, end of tape waste

2006-05-23 Thread Ian Turner
Jon,

There is no good short-term solution to this problem. Sorry. :-( Tape spanning 
helps, but is not a panacea.

This is one of the limitations of the vtape API that I was talking about -- it 
tries to reimplement tape semantics on a filesystem, even when that doesn't 
make sense.

When the Device API is done, this problem will go away through the combination
of two new features.  Partial device recycling means that you can delete only
certain files from a virtual tape: this would include failed dumps, but might
also include old dumps.  The second new feature is appending to volumes, which
I hope is self-explanatory.

With the combination of these two features, the future of disk-based storage 
is that you will have only one virtual tape (a VFS Volume), to which you 
constantly append data at the end and recycle data from the beginning. This 
makes a lot more sense in terms of provisioning space, as well as making use 
of provisioned space. Also, with only one volume, you no longer need the 
chg-disk changer, which simplifies setup quite a bit.

Besides the Device API bits, which are mostly only interesting to developers, 
there will need to be discussion in the community about the right way to 
do the user side of these features: In particular, partial volume recycling 
will require rethinking Amanda's retention policy scheme. One possibility 
(but not the only one) might be to talk about keeping a particular dumptype 
for a particular number of days. Now might be too soon, but at some point I'd 
like to discuss this with you in greater detail.

Cheers,

--Ian

On Tuesday 23 May 2006 16:28, Jon LaBadie wrote:
 I'm running vtapes on a new server.  The vtapes
 are split across two external disk drives.

 Realizing that some tapes would not fill completely,
 I decided that rather than define the tape size to be
 exactly disk/N, I would add a fudge factor to the size.

 Things worked exactly as anticipated.  The install has
 now reached beyond the last tape which ran out of disk
 space just before it ran our of the last tapes' length.
 And as usual, the taping restarted on the next vtape
 on the disk with remaining space.

 But running out of disk space caused me to look more
 closely at the situation and I realized that the failed
 taping is left on the disk.  This of course mimics what
 happens on physical tape.  However with the file:driver
 if this failed, and useless tape file were deleted,
 it would free up space for other data.

 Curious, I added up all the sizes of the failed, partially
 taped dumps.  They totalled 46+ GB.  That is substantially
 more than I dump daily.

 Has anyone addressed this situation?

 Before you ask, no I've not gone to tape-spanning (yet).

-- 
Forums for Amanda discussion: http://forums.zmanda.com/


Re: vtape, end of tape waste

2006-05-23 Thread Ross Vandegrift
On Tue, May 23, 2006 at 04:28:31PM -0400, Jon LaBadie wrote:
 But running out of disk space caused me to look more
 closely at the situation and I realized that the failed
 taping is left on the disk.  This of course mimics what
 happens on physical tape.  However with the file:driver
 if this failed, and useless tape file were deleted,
 it would free up space for other data.

Our setups avoid this situation by having a dedicated chunk of holding
space.  I cut out 150-200GiB on each Amanda server just for holding
space.  That way, no dump ever fails because the data in holding was
using space in the vtape filesystem.

I know throwing more hardware/disk space at the problem isn't a
particularly interesting or clever solution, but I can vouch for the
fact that it works!

-- 
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37