Re: file: drive change question

2009-02-11 Thread Michael Loftis
I don't see why not.  I don't use MC anymore so I don't know if it keeps 
permissions, but if your amanda run went cleanly that's the only thing 
that'd be sensitive, and that can be fixed by chmod.


As long as you're confident the data is there.

--On February 11, 2009 2:59:38 PM -0500 Gene Heskett 
gene.hesk...@verizon.net wrote:



Greetings;

I have added a 1.0 TB Maxtor drive to the system, and copied, using mc,
everything that was on the 400GB deathstar I had been using for my
virtual  tapes to this new drive.

Then I edited /etc/fstab to reflect the LABEL name of the new disk in the
line that mounts /amandatapes in that file.

Then I did 'umount /amandatapes', checked with df to see that it was
gone, and  issued a 'mount /amandatapes', checked with df and the new
drive was there  as /amandatapes.

And as the user amanda, 'amcheck Daily' is happy as a clam.

I have not rebooted yet, and don't see why I'd need to, but can anyone come
up with a good reason why, after checking tonight's run of amanda, I
can't repartition that 400GB deathstar (which is still breathing just
fine) and install a ubuntu-9.04 preview just for grins?  Reason?  /me
tired of being Fedora's lab rat.  /me tired of not being able to
play the videos my kids send me.  Yes, I like to bleed, and play the
canary-in-a-coal-mine bit, but when I break something, I want it to
be something _I_ did that broke it.

Comments?

--
Cheers, Gene
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
OK, so you're a Ph.D.  Just don't touch anything.




--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: barcodes?

2008-12-03 Thread Michael Loftis



--On December 3, 2008 4:14:12 PM -0500 Chris Hoogendyk 
[EMAIL PROTECTED] wrote:




Is this behavior the same across all Amanda code? Or might amrecover
behave differently from amdump? I'm not sure why I would have thought it
cycled through the tapes looking at the labels unless amrecover were
perhaps different. I've developed the habit of using the front panel on
the library to load the tape that amrecover will be expecting before
starting it up.


For restores we use amtape label BLAH; for backups I wrote a script that 
sorts out the next N tapes that amanda will need from the library.  This is 
because when I started using amanda, amdump and especially amcheck did not 
really do anything with the barcodes; they'd just call amtape next until 
they read the right label off the tape.  With 50+ tapes in a library that 
takes all night. :)





Re: Backing up virtual machines

2008-11-20 Thread Michael Loftis



--On November 20, 2008 2:51:24 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:


Just dipping my toe into virtual machines and
wondering how others are doing backups of them.

I'm using VirtualBox with a host of Fedora 9.
My guest is WinXP Pro.  The objective is to
no longer need to dual-boot this machine.

The virtual guest does appear on my network as
a separate host and static IP when it is up.

I can see two ways to back it up:

 - backup the VBox files (*.vdi and others) on
   the host Fedora file system

 - consider the virtual machine a separate host
   and amanda client

How have others dealt with this situation?
Or maybe both forms are used simultaneously?


Both have problems.  Backing up the .vdi's requires the machine be down. 
Backing up inside each VM requires you to be careful not to overload the VM 
host during backup windows.  For me, backing up inside each VM (each VM as 
an amanda client) was/is the only way to go, despite the pitfall.




jl
--
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)






Re: Backing up virtual machines

2008-11-20 Thread Michael Loftis



--On November 20, 2008 4:40:28 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:


On Thu, Nov 20, 2008 at 01:04:33PM -0700, Michael Loftis wrote:



--On November 20, 2008 2:51:24 PM -0500 Jon LaBadie [EMAIL PROTECTED]
wrote:

 Just dipping my toe into virtual machines and
 wondering how others are doing backups of them.

 I'm using VirtualBox with a host of Fedora 9.
 My guest is WinXP Pro.  The objective is to
 no longer need to dual-boot this machine.

 The virtual guest does appear on my network as
 a separate host and static IP when it is up.

 I can see two ways to back it up:

 - backup the VBox files (*.vdi and others) on
   the host Fedora file system

 - consider the virtual machine a separate host
   and amanda client

 How have others dealt with this situation?
 Or maybe both forms are used simultaneously?

Both have problems.
Backing up the vdi's requires the machine be down.


Why is this a requirement Michael?  Is it because the
VDI is changing while being backed up?  Currently the
VDI is not on a FS that can be snapshot'ted.  If it was,
for example an LVM file system on Linux, would the VDI
of a running virtual machine be suitable for backup if
taken from a snapshot (snapshot from the FS, not VBox).

To my thinking, another major problem is the lack of
access to individual files within the VDI file.


Yeah: live filesystem, live device, live everything.  Restores would be 
complicated, to say the least.  I've also had problems with VMWare disk 
images: if you back them up or copy them live, they end up completely 
corrupted and unusable on restore, with the VMWare tool unable to repair 
the disk image.  I'd imagine there's the same problem with vbox.
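If the image does sit on LVM, as Jon suggests, a crash-consistent copy can be taken from a snapshot instead of the live file. This is only a sketch: the VG/LV names and mountpoint are hypothetical, the `${RUN:-}` prefix exists so the sequence can be previewed with `RUN=echo`, and even a clean snapshot only gives the equivalent of a hard power-off, not a quiesced guest.

```shell
# Sketch: back up a VM disk image from an LVM snapshot rather than the
# live file.  VG/LV names and the mountpoint are made up; set RUN=echo
# to print the commands instead of executing them.
snapshot_backup() {
    vg=$1; lv=$2
    ${RUN:-} lvcreate --snapshot --size 2G --name "${lv}-snap" "/dev/$vg/$lv"
    ${RUN:-} mount -o ro "/dev/$vg/${lv}-snap" /mnt/vm-snap
    # ... point the Amanda DLE (or tar) at /mnt/vm-snap here ...
    ${RUN:-} umount /mnt/vm-snap
    ${RUN:-} lvremove -f "/dev/$vg/${lv}-snap"
}
```

On restore you would still be running the guest's own filesystem recovery, which is why backing up inside the VM remains the safer route.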





Backing up inside each VM requires you be careful not to overload the VM
host during backup windows.  For me, backing up inside each VM (each VM
as  an amanda client) was/is the only way to go, despite the pitfall.



That is what I expect to do.  I just hate windows backups with
all the hassle over can't backup system and any in-use files.


Yeah, I know.  Backing up windows is worse than herding cats.  You've got 
files, then you've got the hidden registry component to back up too... and 
even with all that you might not be able to restore a windows box.




jl






Re: Fullbackup on saturdays.

2008-11-14 Thread Michael Loftis

No.  It does it better.  There's a FAQ on this too.
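For reference, the conventional way to express roughly that schedule looks like the fragment below. This is a sketch, not a tested config: the values are assumptions, and Amanda's planner will spread full dumps across the cycle for balance rather than pinning them all to Saturday.

```text
dumpcycle 7 days      # every DLE gets a full dump at least once a week
runspercycle 6        # amdump runs Monday through Saturday
tapecycle 8 tapes     # must exceed runspercycle; with only 2 tapes Amanda
                      # has no room to keep a restorable set while writing
```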

--On November 14, 2008 2:02:20 PM + Prashant Ramhit 
[EMAIL PROTECTED] wrote:



Hi All,
How is it possible to configure amanda for the following set up

Full backup on Saturdays and
Incremental on Monday till Friday.
On 2 Tapes only.

dumpcycle 1
tapecycle 2

Kind Regards,
Prashant









Re: Tape Autoloader

2008-10-20 Thread Michael Loftis



--On October 20, 2008 11:20:39 AM +0100 Prashant Ramhit 
[EMAIL PROTECTED] wrote:



Hi All,

I have been using a LT04 Tape for backup. And now I received a new LT04
Quantum Superloader.

I am trying to get it working but it is not.


Several things: your tapes are labelled as if they're cleaning tapes; if 
they're not, change the barcodes on them (it will confuse things later). 
Also, you're giving amanda the wrong changerdev.  Your mtx command there 
lists sg2, so why would you give amanda a different changerdev?  Along the 
same line, are you sure the SCSI device for the drive you want is nst0?


amanda-users-request is NOT for reaching the list, don't CC it.  It's ONLY 
to reach the list administrators.




My config is as follows,

amanda.conf

runtapes 2
tpchanger chg-zd-mtx
changerfile /var/lib/amanda/changer.conf
changerdev /dev/sg0
tapetype LTO-4
tapedev /dev/nst0
labelstr ^CLN14[1-2]L2*$
amrecover_changer chg-zd-mtx


changer.conf

eject  1 # Tapedrives need an eject command
sleep 10 # Seconds to wait until the tape gets ready
cleanmax 10
havebarcode 1
havereader=1
offline_before_unload=1
offlinestatus=1
OFFLINE_BEFORE_UNLOAD=1


[EMAIL PROTECTED]:/etc/amanda/fullback# mtx -f /dev/sg2 status
  Storage Changer /dev/sg2:1 Drives, 16 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
  Storage Element 1:Full :VolumeTag=CLN141L2
  Storage Element 2:Full :VolumeTag=CLN142L2
  Storage Element 3:Empty
  Storage Element 4:Empty
  Storage Element 5:Empty
  Storage Element 6:Empty
  Storage Element 7:Empty
  Storage Element 8:Empty
  Storage Element 9:Empty
  Storage Element 10:Empty
  Storage Element 11:Empty
  Storage Element 12:Empty
  Storage Element 13:Empty
  Storage Element 14:Empty
  Storage Element 15:Empty
  Storage Element 16:Empty


Anyone has an idea, what is wrong.

Thanks
Prashant







Re: dumpcycle

2008-09-23 Thread Michael Loftis



--On September 23, 2008 12:12:47 PM -0700 aminukapon [EMAIL PROTECTED] 
wrote:



hello all,

I need a brief clarification on what the dumpcycle is  and how it
involves incremental backup and full backups . Most of what I have read
online have been confusing thus far.


dumpcycle is the maximum number of runs between full dumps of a given DLE 
(e.g., the maximum number of days between full dumps, since backup is 
typically run daily).  With dumpcycle 7 and daily runs, every DLE gets a 
full dump at least every 7 days, with incrementals on the days between.




Thanks
Amin









Re: IO Errors backing up to new LTO3

2008-06-16 Thread Michael Loftis



--On June 17, 2008 12:49:19 PM +1000 Andrew Best [EMAIL PROTECTED] wrote:


2008/6/16 Andrew Best [EMAIL PROTECTED]:



Im interested in hearing what people can suggest to try and resolve this.


Better quality cables and (*ACTIVE*) terminators; get them as short as 
possible too.




Re: [Fwd: UB DRD AMANDA VERIFY REPORT FOR DRD021]

2007-12-14 Thread Michael Loftis



--On December 14, 2007 2:15:29 PM -0500 Lawrence McMahon 
[EMAIL PROTECTED] wrote:



Hi;
   Our daily report of this tape came out ok, but when I did an amverify,
I got the attached.
Does this seem like a tape problem, or tape drive problem?


There should be a section detailing what was STRANGE about the verify, and 
you seem to have failed to include that...  If that's indeed the whole 
output then I'm at a loss.




These dumps were to tape DRD021.
The next tape Amanda expects to use is: DRD022.

FAILURE AND STRANGE DUMP SUMMARY:
  ubcard.car /opt lev 1 STRANGE
  igor.cc.bu /usr lev 1 STRANGE


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:11
Run Time (hrs:min)         4:51
Dump Time (hrs:min)        9:12       4:07       5:05
Output Size (meg)       63639.9    39047.4    24592.4
Original Size (meg)     63639.9    39047.4    24592.4
Avg Compressed Size (%)    --         --         --    (level:#disks ...)
Filesystems Dumped          279         26        253   (1:192 2:40 3:18 4:3)
Avg Dump Rate (k/s)      1968.5     2700.1     1376.3

Tape Time (hrs:min)        3:48       2:14       1:34
Tape Size (meg)         62769.4    38177.0    24592.4
Tape Used (%)              93.7       57.0       36.7   (level:#disks ...)
Filesystems Taped           279         26        253   (1:192 2:40 3:18 4:3)
Avg Tp Write Rate (k/s)  4708.1     4873.7     4472.3

USAGE BY TAPE:
  LabelTime  Size  %Nb
  DRD021   3:48   62769.4   93.7   279








Re: [Fwd: UB DRD AMANDA VERIFY REPORT FOR DRD021]

2007-12-14 Thread Michael Loftis
Dec 14 13:50:03 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] Sense Key: Media Error
Dec 14 13:50:03 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0x0
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1/[EMAIL PROTECTED],0 (st12):
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu Error for Command: read    Error Level: Fatal
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] Requested Block: 12495 Error Block: 12495
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] Vendor: QUANTUM    Serial Number: U
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] Sense Key: Media Error
Dec 14 13:51:29 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.notice] ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0x0
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 365881 kern.info] /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (glm1):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu Cmd (0x8fba10) dump for Target 5 Lun 0:
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 365881 kern.info] /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (glm1):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu cdb=[ 0x8 0x0 0x0 0x80 0x0 0x0 ]
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 365881 kern.info] /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (glm1):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu pkt_flags=0x0 pkt_statistics=0x61 pkt_state=0x7
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 365881 kern.info] /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (glm1):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu pkt_scbp=0x0 cmd_flags=0x8e1
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (glm1):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu Disconnected command timeout for Target 5.0
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu genunix: [ID 408822 kern.info] NOTICE: glm1: fault detected in device; service still available
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu genunix: [ID 611667 kern.info] NOTICE: glm1: Disconnected command timeout for Target 5.0
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu glm: [ID 160360 kern.warning] WARNING: ID[SUNWpd.glm.cmd_timeout.6016]
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED],1/[EMAIL PROTECTED],0 (st12):
Dec 14 14:27:51 apocalypse.acsu.buffalo.edu SCSI transport failed: reason 'timeout': giving up
apocalypse:~(58)




Michael Loftis wrote:



--On December 14, 2007 2:15:29 PM -0500 Lawrence McMahon
[EMAIL PROTECTED] wrote:


Hi;
   Our daily report of this tape came out ok, but when I did an
amverify,
I got the attached.
Does this seem like a tape problem, or tape drive problem?


There should be a section detailing what was STRANGE about the verify,
and you seem to have failed to include that...  If that's indeed the
whole output then I'm at a loss.



These dumps were to tape DRD021.
The next tape Amanda expects to use is: DRD022.

FAILURE AND STRANGE DUMP SUMMARY:
  ubcard.car /opt lev 1 STRANGE
  igor.cc.bu /usr lev 1 STRANGE


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:11
Run Time (hrs:min)         4:51
Dump Time (hrs:min)        9:12       4:07       5:05
Output Size (meg)       63639.9    39047.4    24592.4
Original Size (meg)     63639.9    39047.4    24592.4
Avg Compressed Size (%)    --         --         --    (level:#disks ...)
Filesystems Dumped          279         26        253   (1:192 2:40 3:18 4:3)
Avg Dump Rate (k/s)      1968.5     2700.1     1376.3

Tape Time (hrs:min)        3:48       2:14       1:34
Tape Size (meg)         62769.4    38177.0    24592.4
Tape Used (%)              93.7       57.0       36.7   (level:#disks ...)
Filesystems Taped           279         26        253   (1:192 2:40 3:18 4:3)
Avg Tp Write Rate (k/s)  4708.1     4873.7     4472.3

USAGE BY TAPE:
  LabelTime  Size  %Nb
  DRD021   3:48   62769.4   93.7   279


















Re: Does anybody have a LTO4 tapetype ?

2007-12-05 Thread Michael Loftis

You can use the tapetype program to generate your own.

--On December 6, 2007 12:33:31 AM +0100 [EMAIL PROTECTED] wrote:


Hello,

I get a brand new LTO4 tape and I wonder about the tape definition
to use.

Regards

JPP








RE: Encryption, compression

2007-10-30 Thread Michael Loftis
Good crypto will produce essentially random output data, which won't 
compress.  If you're storing data encrypted, compressing prior to 
encrypting is typically a must.
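A quick way to see this for yourself (a sketch: /dev/urandom stands in for ciphertext here, since well-encrypted output is statistically indistinguishable from random bytes, and the filenames are arbitrary):

```shell
# Redundant data compresses; random (i.e. encrypted-looking) data does not.
cd "$(mktemp -d)"
head -c 100000 /dev/zero    > plain.bin    # highly redundant "plaintext"
head -c 100000 /dev/urandom > cipher.bin   # stand-in for encrypted output
gzip -c plain.bin  > plain.bin.gz
gzip -c cipher.bin > cipher.bin.gz
wc -c plain.bin.gz cipher.bin.gz   # plain.bin.gz is tiny; cipher.bin.gz is not
```

Compressing first keeps that redundancy available to the compressor; compressing after encryption buys essentially nothing.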


--On October 30, 2007 6:06:09 PM -0500 [EMAIL PROTECTED] wrote:


In my (admittedly limited) experience with encryption and compression,
the rule  of thumb has always been to compress first (removing
exploitable redundancy and  pattern repetitions) and then encrypt.  It
also has the advantage that you are encrypting less volume and reducing
the exploitable surface area of the data.

Of course, your mileage may vary, but that is the experience I have and
advice  I've been given.

Don Ritchey
IT ED RTS Tech Services, Senior IT Analyst (UNIX)


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Hoogendyk Sent:
Tuesday, October 30, 2007 4:38 PM
To: AMANDA users
Subject: Re: Encryption, compression




Brian Cuttler wrote:

Amanda users,

I may have missed it in the mailing list... I know that
encryption came available in 2.5.0, either server side
or client side, or the channel (though I think encrypting
on the client provides an encrypted channel by default, true ?)

Anyway, I was wondering and haven't seen... how to encryption
and compression play against one another. Some data compresses
very well, some doesn't, If you are encrypting, doesn't that
tend to cause the data to be less compressable ?

We are looking an encryption on the tape for one of our amanda
servers, just want to sort of know what to expect when I upgrade
the client and server and turn on encryption, compression is
already enabled.



hmm, I just saw something on this. Don't remember where, and I deleted it.

It's interesting that when you google compressing encrypted data, you
get on the first page:

 - A Wikipedia entry claiming you cannot compress encrypted data

 - A StorageMojo guru saying that it is a mathematical faux pas to say
   that encrypted data can be compressed

 - An EECS Berkeley and IEEE publication detailing the mathematics of
   compressing encrypted data (it works)
   (7 of the 10 links on the first page were to copies of this paper)


I think I recall that the item I saw earlier indicated significant
compression of encrypted data.

I'm going to make the wild speculation that particular results will
depend on your encryption keys and your compression methods as well as
your original data. That said, the bottom line is always real world
tests. Therefore, if no one comes up with detailed examples and data, I
would suggest just doing it and recording the results. Choose your
methods and your data and then make a results table with the size of the
original data, the size compressed, the size compressed and then
encrypted, the size encrypted, and the size encrypted and then
compressed. Send it back to the list with the algorithms, methodology
and results.



---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology  Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

[EMAIL PROTECTED]

---

Erdös 4












Re: missing function in amanda?

2007-03-19 Thread Michael Loftis



--On March 18, 2007 11:16:49 PM -0400 Gene Heskett 
[EMAIL PROTECTED] wrote:



On Sunday 18 March 2007, [EMAIL PROTECTED] wrote:

On Sun, Mar 18, 2007 at 08:15:30PM -0400, Gene Heskett wrote:

If we had such a tool, I could run an estimate phase in advance of the
regular run and be in a position to reboot to the older kernel ahead
of time instead of totally screwing with the amanda database.


I'm not sure that is the most compelling use-case! ;)


Well, given enough time I expect I could come up with more excuses. :)


That said, it's always useful to be able to decompose operations for
debugging purposes.  At this point, the system is not factored in such a
way as to make that break a clean one.  I can think of ways to
accomplish something *like* what you're asking, but they're all hacks
that would be harder than

 if [ "$(ssh mybox uname -r)" = "2.6.foo.bar" ]
 then
   echo "Please reboot mybox" | page_gene
 fi


Well, I know that 2.6.21-rc1-rc2-rc3 are making tar look broken.  So if
it screws up tonight, I go get the identically versioned but hand-built
tar-1.15-1 from my old FC2 install, nearly a megabyte in size, and move
the tar-1.15-1 from the FC6 rpm install out of the way.  That one is only
about 240k.  Mine is obviously statically linked as it's somewhat over
830k in size.  If that won't work, and all the other file inspection
tools all report sane dates and such, but tar still insists on backing up
most of the 45GB I have here in one fell swoop when told to do a level 3
or 4, then tar is indeed broken and the bugzilla entry I made against it
3 days ago will get reinforced with more data.  Considering that a vtape
here is sized at about 11GB, there is no reason for amanda to tell it to
back up 3x the data the tape will hold.

In this case, an amestimate utility would be handier than that famous
button on the equally famous door.  I could time a run and see what it
says, change something and repeat, and do it several times a day without
screwing up amanda's database all that badly.


You can make a run with no-record set on the DLEs.
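In amanda.conf terms that means a test dumptype with `record no` (a sketch; the dumptype names here are made up):

```text
define dumptype user-tar-norecord {
    user-tar          # whatever dumptype the DLE normally uses
    record no         # run estimates and dumps without updating
                      # the curinfo database
}
```

Point the DLEs you are experimenting with at that dumptype, run amdump as often as you like, and Amanda's dump history stays untouched.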



Thanks Dustin.

--
Cheers, Gene
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
What hath Bob wrought?







Re: Detecting shoe shining with modern libraries?

2007-03-10 Thread Michael Loftis



--On March 10, 2007 10:45:02 AM -0800 Skylar Thompson 
[EMAIL PROTECTED] wrote:



Michael Loftis wrote:

Buried in the library.  Can't view the drives at all with any of the
Spectra libraries.  The T50 if you open it, and remove a storage
cartridge you can kinda see the drives but they're pretty obscured by
the loader mechanism.


Our Spectra T950 has a drive performance monitor. For LTO drives I think
anything below the slowest stream rate (40MB/s for LTO3) is going to be
shoe-shining to some extent. If your library doesn't have a performance
monitor, you might be able to estimate the data transfer on the server
with something like iostat.


iostat doesn't work for tape devices under Linux.  And the T50 doesn't have 
anything in the UI, so I have to rely on after-the-fact reports from AMANDA. 
Oh well.  You'd think there'd be a SCSI INQUIRY command that could be sent 
or something.




--
-- Skylar Thompson ([EMAIL PROTECTED])
-- http://www.cs.earlham.edu/~skylar/








Re: SCSI card recommendations?

2007-03-08 Thread Michael Loftis



--On March 8, 2007 12:43:59 PM -0500 Greg Troxel [EMAIL PROTECTED] wrote:


I am about to buy a couple of LTO-2 drives to replace my DDS3 drives
which are no longer big enough.  I'm looking at HP and IBM in
particular, but it seems they are all Ultra160 or Ultra320.  My
understanding is that these are all both wide and LVD, and use either
the HD 68-pin or the VHDCI connector, and that I can cable either of
these to any ultra160/ultra320 controller (and probably ultra2 wide
lvd).

Is it likely that an Adaptec 2940-U2W would work with such a drive?
It's said to be LVD, so the only issue should be topping out at 80
MB/s. The drive would be the only thing on the bus.

Can anyone recommend a SCSI card to use with LTO-2 drives that will
  fit in a normal PCI slot
  work with NetBSD (netbsd-4 branch, preferably)

(I am not super cost sensitive; a $300 host adaptor that causes me zero
grief beats $100 and a few hours of trouble.)


SCSI is backwards compatible: a U320 LVD device will work on a U80 LVD 
controller, just at U80 LVD speeds.  I'd suggest just getting a U160 or 
U320 controller, and make sure to get *good* SCSI cables; that's partly 
what bit me in the butt this last time.  I ended up with an Adaptec 2944, 
which has been pretty solid.



If anyone has comments to get or stay away from any particular LTO-2
drive, I'd like to hear them.  Hardware comments would be helpful on
the zmanda wiki, but I only saw tapetypes.

Also, comments about media reliability would be appreciated.  It seems
the LTO concept is that everything just 100% works, but I'd be
interested to hear stay away from Brand X tapes; they are flaky
comments.

Thanks,
Greg








Detecting shoe shining with modern libraries?

2007-03-08 Thread Michael Loftis
New drives are really quiet.  Not to mention some new libraries (like our 
Spectra T50) bury the drives in the middle of the library so you can't 
really hear the mechanism working like you can older libraries with louder 
drives.  Desktop units you can still pretty readily listen to and tell when 
they're shoe-shining, but as far as libraries go, does anyone do anything 
other than just do their best to listen?  That's what I've done 
for...well...ever but I'm beginning to wonder if there's a better way.






Re: Detecting shoe shining with modern libraries?

2007-03-08 Thread Michael Loftis



--On March 8, 2007 4:57:51 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:


On Thu, Mar 08, 2007 at 12:35:54PM -0700, Michael Loftis wrote:

New drives are really quiet.  Not to mention some new libraries (like
our  Spectra T50) bury the drives in the middle of the library so you
can't  really hear the mechanism working like you can older libraries
with louder  drives.  Desktop units you can still pretty readily listen
to and tell when  they're shoe-shining, but as far as libraries go, does
anyone do anything  other than just do their best to listen?  That's
what I've done  for...well...ever but I'm beginning to wonder if there's
a better way.



Does your drive have an activity light.  When I first put in my LTO-1,
the card was really old and gave low rates.  A newer card more than
doubled it.  Another difference was the activity light.  It is on
nearly solid now, it was on/off/on/off on the old slower scsi card.


Buried in the library.  Can't view the drives at all with any of the 
Spectra libraries.  The T50 if you open it, and remove a storage cartridge 
you can kinda see the drives but they're pretty obscured by the loader 
mechanism.


Re: SDLT-4 compareded to LTO-3

2007-03-02 Thread Michael Loftis



--On March 2, 2007 10:31:33 AM + Anthony   Worrall 
[EMAIL PROTECTED] wrote:



Hi

This is not strictly an amanda question but I thought I would see if any
one has any views on SDLT-4 compared to LTO-3.

We are currently looking at replacing our tape devices an are looking at
SDLT-4 which seems to be about the same price as LTO-3 but offer twice
the capacity. Has anyone got any experience of these drives. I am told
by our supplier that they are selling many more LTO-3 than SDLT-4. Is it
just that SDLT-4 is newer is there some reason?


SDLT-4 (DLT-S4) may cost less per GB, but it still costs more per tape, so 
unless you're actually using that much tape it may not be as attractive as 
it seems.  Several other issues with DLT-S4: the relatively slow rated 
speed of 60MB/sec (LTO-3 is rated at 80, and I routinely see 60 in 
production; I'd see more if I had faster hosts).  Also, I don't know for 
sure, but last I checked DLT drives still had to be streamed at their one 
fixed rate.


Though the biggest reasons LTO is outselling DLT/SDLT are that SDLT is 
viewed as being end-of-line first, and secondly that Quantum is the ONLY 
supplier of DLT drives.  LTO is available from HP/Compaq, IBM, and, yes, 
Quantum.  LTO-3 also has at least one advantage for (capable) libraries: 
each cartridge has a contactless (read: RFID-like) memory that can report 
the tape's last known condition, as well as user data.  In theory at least, 
an LTO drive or library has only to read this tag to decide whether or not 
it can read the tape, and if it even should.  DLT and SDLT have the problem 
that if you load an older-generation cartridge than your drive supports you 
may destroy the heads.


LTO has a slightly smaller cartridge too.  For the 'desktop' user this 
isn't a big deal, but if you've got a tape library, it makes a pretty big 
difference in the number of tapes you can put in your library.




Cheers

Anthony Worrall









Re: Hardware Compatibility Question...

2007-02-09 Thread Michael Loftis



--On February 8, 2007 3:26:40 PM -0800 A R [EMAIL PROTECTED] wrote:




Hi All,

I have been tasked with updating my company's aging backup equipment. I
am planning on purchasing a new server to run amanda along with a new
robotic tape library with two LTO-3 drives.  I just want to make sure
that the hardware I have selected will work with amanda and the Linux
distribution that I have selected for this project.  I have about 10 TB
of data to back up weekly.  Here is what I have in mind:

Server:
OS: Debian Linux
2U - 6 SATA HD slots
Pentium dual core processor (anyone have trouble with dual core?)
2GB RAM (too much? to little?)
2TB of HD spooling space, RAID0 w/four 500GB drives
120GB of operating system space on RAID1 with two 120GB drives
LSI Logic LSI22320 Ultra320 SCSI Dual Channel PCIx card

Loader:
Overland ARCvault24 w/ two LTO-3 tape drives



I can't recommend LSI Logic SCSI cards.  The drivers in at least Linux 2.6 
are pretty ugly.  They lack error recovery, requiring you to power down the 
machine if the card encounters a serious error (no, rmmod/modprobe is not 
enough, I found out).


After having lots of issues with an LSI card replaced it with an Adaptec, 
very happy since.  I can't recommend Adaptec's RAID cards though. 
Especially not the bastardizations they made out of the ICP Vortex cards 
(now ICP* models, not the older GDT* models).


AMANDA can use almost any changer, using chg-zd-mtx or chg-scsi.  I use 
chg-zd-mtx, which is basically a set of wrappers around mt and mtx -- mtx 
speaks the normal, standard SCSI changer protocol, and I've actually yet to 
find a SCSI changer on which you can't at least do basic load, unload, and 
transfer operations with mtx.  This includes weird beasts like SCSI CD-ROM 
changers.
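As a minimal sketch of those basic operations (the /dev/sg1 changer path and 
the slot number are assumptions; read your own geometry out of the status 
output first):

```shell
#!/bin/sh
# Hypothetical changer device node; on Linux, mtx talks to /dev/sgN.
CHANGER=${CHANGER:-/dev/sg1}

# Show the inventory, then move the tape in a given slot into drive 0
# and back again: the load/unload/transfer basics mtx provides.
cycle_slot() {
    slot=$1
    mtx -f "$CHANGER" status || return 1          # drives, slots, barcodes
    mtx -f "$CHANGER" load "$slot" 0 || return 1  # slot N -> drive 0
    mtx -f "$CHANGER" unload "$slot" 0            # drive 0 -> slot N
}

if [ -e "$CHANGER" ] && command -v mtx >/dev/null 2>&1; then
    cycle_slot 3 || echo "mtx operation failed"
else
    echo "no changer at $CHANGER, skipping"
fi
```

chg-zd-mtx wraps essentially these calls plus mt positioning; the slot and 
drive numbering comes straight from the status listing.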


And I don't believe that with those SATA drives you'll be able to run the 
library at full speed, even with one tape drive.  LTO-3 peaks out at 
80 MByte/sec; I see 60 MByte/sec routinely in production, and currently my 
tape host can't keep up with that.  SATA drives stream pretty well, but 
AMANDA's spool-area access resembles random I/O rather than streaming I/O, 
and it's really hard to keep tape drives fed at that rate.  10K RPM 
spindles might be able to, even on SATA.
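A rough way to sanity-check whether a spool area can feed an LTO-3 (the 
path is an assumption; note that a sequential dd flatters the spool, since 
AMANDA's access pattern is closer to random I/O):

```shell
#!/bin/sh
# Hypothetical holding-disk path; writes and removes a 1 GiB test file.
SPOOL=${SPOOL:-/amanda/holding}

spool_write_test() {
    # conv=fsync makes dd include the flush in its reported rate, so the
    # number is closer to sustained throughput than to cache speed.
    dd if=/dev/zero of="$SPOOL/ddtest" bs=1M count=1024 conv=fsync 2>&1
    rm -f "$SPOOL/ddtest"
}

if [ -d "$SPOOL" ] && [ -w "$SPOOL" ]; then
    spool_write_test
else
    echo "no writable spool at $SPOOL, skipping"
fi
```

If the reported rate is already below the drive's 60-80 MByte/sec under 
this best-case sequential load, the random-ish spool traffic during a real 
run will be far worse.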




I guess the main questions I'm trying to figure out are... Is this server
appropriate for the task at hand and is the tape equipment that I have
selected compatible with amanda?  I tried calling Overland, but
apparently they have never tested Amanda against the ARCvault, but it
clearly works with the Powerloader without trouble.  I see no mention of
the ARCvault anywhere on this group, so I figured I'd ask and see if
anyone out there has had any experience with that particular loader.

Thank you very much for your time.

~Andy


__
Looking for earth-friendly autos?
Browse Top Cars by Green Rating at Yahoo! Autos' Green Center.




--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Hardware Compatibility Question...

2007-02-09 Thread Michael Loftis



--On February 9, 2007 10:11:23 AM -0500 Joshua Baker-LePain 
[EMAIL PROTECTED] wrote:



I have both an AIT3 Powerloader and a LTO3 Neo2K working quite well with
amanda.  I see no reason why the ARCvault shouldn't work as well
(although, admittedly, I haven't looked too hard at it)...  After looking
at the manual, I see that the robotics are on the same SCSI ID as the
first drive, but a separate LUN.  RH derived distros require an option in
/etc/modprobe.conf to get that working.  I'm not sure about Debian.


Debian probes LUNs by default.



One note -- a quick look at the ARCvault24 docs doesn't show details on
the cabling for a 2 drive setup.  To run both LTO3 drives at the same
time, they *need* to be on their own SCSI channels.  If you can't do that
with the 24 and you need that capability, you may need to look at the
Neo2K.

Good luck.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: strange failure with an lto-3

2007-02-07 Thread Michael Loftis



--On February 7, 2007 5:35:14 PM +0100 Julien Brulé 
[EMAIL PROTECTED] wrote:



hi all,

 i am trying to backup big filesystem and i get this errors


Have you tried doing a simple tar (on a new tape) of some of the tape 
host's filesystem (at least a few gigs) and then testing that tar out?  
I.e. tar cvf /dev/nstN /var ; mt -f /dev/nstN rewind ; tar tvf /dev/nstN.  
It really sounds like you've got a SCSI cabling problem.  My LTO3 library 
was/is really sensitive to the quality of the SCSI cables.  If the simple 
tar test fails, then it's very likely you need better cables.  I ended up 
with cables made by Black Box after a set of Tripp Lite's turned out to be 
totally junk.
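Spelled out as a script (a sketch only: the drive node and the source 
directory are assumptions, and it overwrites whatever tape is loaded):

```shell
#!/bin/sh
# Hypothetical drive; use the non-rewinding device so position is kept.
TAPE=${TAPE:-/dev/nst0}

tar_readback_test() {
    # Write a few gigs straight from local disk, rewind, read it all back.
    # Read errors here point at cabling/termination, not at AMANDA.
    tar cf "$TAPE" /var   || return 1
    mt -f "$TAPE" rewind  || return 1
    tar tf "$TAPE" > /dev/null
}

if [ -c "$TAPE" ]; then
    tar_readback_test && echo "readback OK" || echo "readback FAILED"
else
    echo "no tape drive at $TAPE, skipping"
fi
```

Because this path bypasses AMANDA entirely, a failure isolates the problem 
to the drive, cabling, or termination.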




Re: Amanda should fail dumps of directories

2007-01-31 Thread Michael Loftis



--On January 31, 2007 10:52:54 AM -0800 James Brown [EMAIL PROTECTED] 
wrote:



Is it possible for amanda to automatically fail backup
jobs of directories specified in the disklist with
dump dumptypes (as opposed to gtar)?  A dump of a
directory completes successfully even though the dump
utilitity is meant for filesystems.  For me at least,
we can not restore from such backups.


This is totally dependent upon the client OS behavior.  AMANDA actually 
doesn't know much about the individual dump, tar, gtar, smbtar, etc. program 
that it invokes to get its work done.  If your dump binary misbehaves and 
exits cleanly when given a directory, AMANDA can't tell that something went 
askew.


Dump failing is the expected behavior on a directory; exactly what it does 
when it fails depends on the client OS and the particular dump utility 
involved.


It might be that your dump is failing and indicating as much, but that 
AMANDA isn't understanding it correctly on your client OS.  It'd be very 
helpful to have all of those details: specific versions of AMANDA (client 
and server), OS (and distro if Linux), and version of dump.




Thanks,
JB




_
___ Expecting? Get great news right away with email Auto-Check.
Try the Yahoo! Mail Beta.
http://advision.webevents.yahoo.com/mailbeta/newmail_tools.html





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: DDS4 drive doesn't like DDS4 tapes

2007-01-28 Thread Michael Loftis



--On January 28, 2007 2:03:26 PM -0600 Kirk Strauser [EMAIL PROTECTED] 
wrote:



On Sunday 28 January 2007 12:55, you wrote:


If its already had 2 years use, its possible the heads may be worn enough
to fail on the higher density tape, Kirk.


Great.  For curiosity's sake, would it likely have worked (and still be
working) if I'd switched to all brand-new DDS4 tapes from the start?


Try a cleaning tape first if you haven't.  All drives wear, though.  I 
don't know the particulars of DDS, but at least with DLT drives, 
reading/writing the previous generation's tapes will sometimes wear your 
heads excessively, though they do document that fact well and warn you 
about it.


In general, previous-generation compatibility in tape drives is meant as a 
short stopgap so you can migrate to current-generation tape, or read a 
previous-generation tape in an emergency.




vtape is interesting, but I still haven't convinced myself that spindles
are  the equal of tapes.
--
Kirk Strauser




--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: tape order

2006-10-17 Thread Michael Loftis
Forget about tape order.  It's not important.  AMANDA bases the order on 
what is expiring/expired.  So just forget about tape order, it's going to 
get out of whack.  Especially as you replace tapes due to wear.


--On October 17, 2006 11:44:53 AM -0400 Steven Settlemyre 
[EMAIL PROTECTED] wrote:



I have 24 tapes and a 8 tape changer. For some reason, it is going
13-16-15-14-17. How can I fix this? Can I just force it to take 14
after 13 by only having 14 in there when it's expecting 16? My tapecycle
is 10.

Steve





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Mac OSX dump

2006-10-09 Thread Michael Loftis



--On October 9, 2006 2:32:36 PM -0400 McGraw, Robert P. 
[EMAIL PROTECTED] wrote:




I am trying to back up a Mac OS X machine using /sbin/dump. I am getting a
sizecheck problem.

Question number 1, is it possible to use /sbin/dump with amanda? If
someone is doing this can you guide me to what I am missing?



When using dump you must give it the device name, not the filesystem.
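For illustration (hostname, device path, and dumptype name are all 
hypothetical; whether OS X's dump can handle the filesystem at all is a 
separate question), a dump-based disklist entry should name the raw device, 
as in the first line, not the mount point, as in the second:

```
# disklist -- dump-based dumptypes want a device dump can read a superblock from
macbox   /dev/rdisk0s3   comp-root
# macbox /               comp-root   <- produces "bad sblock magic number" as in the log below
```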



Thanks

Robert


Below is my sendcheck...debug output.


sendsize: debug 1 pid 7738 ruid 30002 euid 30002: start at Mon Oct  9
14:17:04 2006
sendsize: version 2.5.1p1
Could not open conf file
/pkgs/amanda-2.5.0p1/etc/amanda/amanda-client.conf: No such file or
directory
Could not open conf file
/pkgs/amanda-2.5.0p1/etc/amanda/daily/amanda-client.conf: No such file
or directory
sendsize: debug 1 pid 7738 ruid 30002 euid 30002: rename at Mon Oct  9
14:17:04 2006
sendsize[7738]: time 0.006: waiting for any estimate child: 1 running
sendsize[7740]: time 0.006: calculating for amname /, dirname /, spindle
-1 sendsize[7740]: time 0.007: getting size via dump for / level 0
sendsize[7740]: time 0.009: calculating for device / with
sendsize[7740]: time 0.009: running /sbin/dump 0sf 1048576 - /
sendsize[7740]: time 0.011: running /pkgs/amanda-2.5.0p1/libexec/killpgrp
sendsize[7740]: time 0.018:   DUMP: Date of this level 0 dump: Mon Oct  9
14:17:04 2006
sendsize[7740]: time 0.021:   DUMP: Date of last level 0 dump: the epoch
sendsize[7740]: time 0.022:   DUMP: Dumping / to standard output
sendsize[7740]: time 0.073:   DUMP: bad sblock magic number
sendsize[7740]: time 0.074:   DUMP: The ENTIRE dump is aborted.
sendsize[7740]: time 0.075: .
sendsize[7740]: estimate time for / level 0: 0.065
sendsize[7740]: no size line match in /sbin/dump output for /
sendsize[7740]: .
sendsize[7740]: Run /sbin/dump manually to check for errors
sendsize[7740]: time 0.075: asking killpgrp to terminate
dump_calc_estimates: warning - seek failed: Illegal seek
sendsize[7740]: time 1.075: getting size via dump for / level 1
sendsize[7740]: time 1.077: calculating for device / with
sendsize[7740]: time 1.077: running /sbin/dump 1sf 1048576 - /
sendsize[7740]: time 1.079: running /pkgs/amanda-2.5.0p1/libexec/killpgrp
sendsize[7740]: time 1.087:   DUMP: Date of this level 1 dump: Mon Oct  9
14:17:05 2006
sendsize[7740]: time 1.088:   DUMP: Date of last level 0 dump: the epoch
sendsize[7740]: time 1.089:   DUMP: Dumping / to standard output
sendsize[7740]: time 1.139:   DUMP: bad sblock magic number
sendsize[7740]: time 1.140:   DUMP: The ENTIRE dump is aborted.
sendsize[7740]: time 1.141: .
sendsize[7740]: estimate time for / level 1: 0.064
sendsize[7740]: no size line match in /sbin/dump output for /
sendsize[7740]: .
sendsize[7740]: Run /sbin/dump manually to check for errors
sendsize[7740]: time 1.141: asking killpgrp to terminate
dump_calc_estimates: warning - seek failed: Illegal seek
sendsize[7740]: time 2.141: getting size via dump for / level 2
sendsize[7740]: time 2.143: calculating for device / with
sendsize[7740]: time 2.143: running /sbin/dump 2sf 1048576 - /
sendsize[7740]: time 2.145: running /pkgs/amanda-2.5.0p1/libexec/killpgrp
sendsize[7740]: time 2.151:   DUMP: Date of this level 2 dump: Mon Oct  9
14:17:06 2006
sendsize[7740]: time 2.153:   DUMP: Date of last level 0 dump: the epoch
sendsize[7740]: time 2.154:   DUMP: Dumping / to standard output
sendsize[7740]: time 2.206:   DUMP: bad sblock magic number
sendsize[7740]: time 2.207:   DUMP: The ENTIRE dump is aborted.
sendsize[7740]: time 2.208: .
sendsize[7740]: estimate time for / level 2: 0.064
sendsize[7740]: no size line match in /sbin/dump output for /
sendsize[7740]: .
sendsize[7740]: Run /sbin/dump manually to check for errors
sendsize[7740]: time 2.208: asking killpgrp to terminate
dump_calc_estimates: warning - seek failed: Illegal seek
sendsize[7740]: time 3.208: done with amname / dirname / spindle -1
sendsize[7738]: time 3.209: child 7740 terminated normally
sendsize: time 3.209: pid 7738 finish time Mon Oct  9 14:17:07 2006

_
Robert P. McGraw, Jr.
Manager, Computer System EMAIL: [EMAIL PROTECTED]
Purdue University ROOM: MATH-807
Department of MathematicsPHONE: (765) 494-6055
150 N. University Street   FAX: (419) 821-0540
West Lafayette, IN 47907-2067






--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Got another odd one here...

2006-10-08 Thread Michael Loftis



--On October 8, 2006 2:44:51 AM -0400 Gene Heskett 
[EMAIL PROTECTED] wrote:





I do see one item I need to fix, but it hasn't affected things in several
months, and that's the name ps reports vs the name of the script.

Since its hung, its not going to hurt if I reboot, and let things reset,
I'm going to be gone for 10 days or so.


It should time out on its own in #ofDLEs*(someConfigValICan'tRemember) 
seconds if (assuming) coyote there didn't get its estimate packets somehow. 
If you reboot the backup host, make sure you run amcleanup CONFIGNAME or 
you'll have a stuck amanda instance.


Re: may be getting a StorEdge L9

2006-08-21 Thread Michael Loftis



--On August 21, 2006 8:22:52 PM +1000 Craig Dewick 
[EMAIL PROTECTED] wrote:




I might be getting hold of a StorEdge L9 with a DLT-8000 in it sometime
in the next few weeks. Is anyone using one of those arrays with Amanda?
Are there any gotcha's to watch out for? Any recommendations on the
'interfacing' software to go between Amanda and the array? Is 'mtx' still
the application of choice for that sort of thing?


They are just ATL libraries.  The old L1800 is an ATL 4/52, and they'll 
identify as such over SCSI usually too.  mtx is definitely the control of 
choice for them.  Biggest difference is the Sun units have a much nicer 
display/interface unit than the ATLs.


Be *very* cautious applying robotics firmware updates; the robotics 
subsystem is pretty touchy about getting an update right, and last I 
checked the updaters are still DOS-only utilities.  I had to run mine in a 
Windows DOS box to slow it down some (run natively, it kept confusing the 
library by not pausing quite long enough between blocks of data to the 
FLASH).




Thanks,

Craig.

--
Post by Craig Dewick (tm). Web @ http://lios.apana.org.au/~cdewick.
Email 2 [EMAIL PROTECTED]. SunShack @ http://www.sunshack.org
Forums @ http://www.sunshack.org/phpBB2. Also Galleries, tech archive,
etc.
Sun Microsystems webring at
http://n.webring.com/hub?ring=sunmicrosystemsu.





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Dramatic reduction in backup time

2006-08-03 Thread Michael Loftis



--On August 3, 2006 9:05:03 AM -0700 Joe Donner (sent by Nabble.com) 
[EMAIL PROTECTED] wrote:




Yes, it's the exact same tape drive I've been using extensively for
testing, and all that time it had been sitting in the same position on
the floor.  I moved it on Monday, and then amanda took off like a
lightning bolt.

Wow, that's something I'll be classing as very weird, but very good.  I
was impressed by the 9 hours it took to do backups, but now I'm
speechless...

I've introduced a new tape for tonight's backup run.  Will see what it
tells me tomorrow.

Thanks very much for your help!


Yeah, it sounds VERY much like a marginal SCSI cable.  What OS do you have? 
Sometimes termination issues will show up in dmesg; dmesg will also usually 
show the speed the device negotiates.  It is shown elsewhere too, depending 
on your OS.


Re: LVM snapshots

2006-07-07 Thread Michael Loftis



--On July 7, 2006 2:35:23 AM -0700 Joe Donner (sent by Nabble.com) 
[EMAIL PROTECTED] wrote:




Does anyone use or have knowledge of using LVM snapshots with Amanda
backups?

I believe it to be the same concept as Shadow Volume Copies in Windows
2003, and that is quite useful.

A little bit of info here:
http://arstechnica.com/articles/columns/linux/linux-20041013.ars

I'm just wondering what happens during the freeze - how freezing all
activity to and from the filesystem to reduce the risk of problems
affects the system?  One would imagine that disk writes are somehow
queued up and complete when the file system is unfreezed again?


The filesystem is only stopped long enough to make the bitmap table.  After 
that it's Copy On Write, meaning a snapshot is free until the 'original' 
starts to differ; then the snapshot starts to take up space, because the 
original blocks have to be copied into the snapshot once they change.
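A hedged sketch of putting that to use for a backup (the volume group, LV 
name, COW size, and mount point are all assumptions; lvremove discards the 
accumulated COW blocks):

```shell
#!/bin/sh
# Hypothetical VG/LV names -- backing up from an LVM snapshot.
VG=${VG:-vg0} LV=${LV:-home}

snapshot_backup() {
    # 512M of COW space: only blocks that change on the origin consume it.
    lvcreate --snapshot --size 512M --name "${LV}_snap" "/dev/$VG/$LV" || return 1
    mount -o ro "/dev/$VG/${LV}_snap" /mnt/snap || return 1
    tar cf "/holding/${LV}.tar" -C /mnt/snap .   # back up the frozen view
    umount /mnt/snap
    lvremove -f "/dev/$VG/${LV}_snap"
}

if command -v lvcreate >/dev/null 2>&1 && [ -e "/dev/$VG/$LV" ]; then
    snapshot_backup || echo "snapshot backup failed"
else
    echo "no LVM volume /dev/$VG/$LV, skipping"
fi
```

The tar sees a consistent point-in-time image while writes continue to the 
origin volume.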


In 2.4 at least, though, LVM snapshots are extremely limited.  The kernel 
has to find a contiguous section of RAM to put the bitmap table of COW 
pages in, so if you have a significantly sized LVM LV you may not be able 
to snapshot it reliably.


A better place to ask how LVM works would be one of the LVM discussion 
groups.


Here's a couple helpful links.

http://tldp.org/HOWTO/LVM-HOWTO/lvm2faq.html
http://www.tldp.org/HOWTO/LVM-HOWTO/





Joe
--
View this message in context:
http://www.nabble.com/LVM-snapshots-tf1905387.html#a5214340 Sent from the
Amanda - Users forum at Nabble.com.






--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: A question on compression

2006-06-28 Thread Michael Loftis



--On June 27, 2006 3:42:10 PM -0400 Matt Emmott [EMAIL PROTECTED] 
wrote:





Hi,



I’m running Amanda 2.4.2 p2 on a Red Hat 8 server. The backups were set
up before my time, and I’m trying to wrap my head around how the
compression is set up  why it isn’t working for some backups. We’re
backing up 2 clients – One is a Red Hat server running Amanda, and the
other is a Windows server that I connect to over a Samba mount at
/home/windows_server_name. It’s the Samba mount that isn’t
compressing.


You've answered your own question below.  Three of them are configured to 
use a different dump-type, one that probably doesn't do compression.  It 
all has to do with the dump types.





Re: tar's default block size shoe-shinning

2006-06-19 Thread Michael Loftis



--On June 19, 2006 10:59:58 AM -0400 Joshua Baker-LePain [EMAIL PROTECTED] 
wrote:



On Mon, 19 Jun 2006 at 4:31pm, Cyrille Bollu wrote


But, when we purchased the backup server I agreed to follow my boss'
solution (it's always him you known ;-p) to buy that cheaper server with
maximum 1,5TB RAID5 (6*300GB) instead of that nice DAS with up to 3,9TB
RAID5. So, to save space I created one big volume containing both the OS
and the data.


What type of drives, and what RAID card?  What OS/distro are we talking,
and what SCSI card for the tape drive?


In a configuration where amanda only backup local (SCSI) drives, are
there any benefits from using a holding disk?


Not that I can think of.  And especially with an LTO3 drive, it's really
only going to hurt you.


Actually, with AMANDA it might be a really good idea, especially with the 
faster drives, because there's no way you can keep them streaming over the 
network.  The problem becomes getting a holding area fast enough: RAID0 
with 4-8 decently fast SCSI drives, like 9 or 18GB 10K RPM or 15K RPM 
units, split over a couple of channels.


AMANDA doesn't interleave onto tape the way a lot of other software does. 
Several commercial packages interleave blocks from every host currently 
backing up while writing to tape; AMANDA does not.  This causes AMANDA to 
slow down significantly when you get one slow or slower-ish host, and it 
also causes excessive shoe-shining, since the network then gets in the way 
of the tape streaming too.


Re: tar's default block size shoe-shinning

2006-06-19 Thread Michael Loftis



--On June 19, 2006 12:09:58 PM -0400 Joshua Baker-LePain [EMAIL PROTECTED] 
wrote:



Erm, I think you missed the bit where he said he's only backing up disks
local to the backup server (and thus the tape drive).  No network
involved.



You're right, I did, my bad! :D  Going back to my hole now :D


RE: Disabling LTO-2 hardware compression

2006-05-03 Thread Michael Loftis



--On May 3, 2006 9:00:35 AM -0500 Gordon J. Mills III 
[EMAIL PROTECTED] wrote:



Just a note on this:
There needs to be a tape in the drive when you execute that command...at
least on my drives.
Anyone have any further info on this?


Yeah, that's a severe brokenness of Linux 2.6.



My drives are AIT35 autoloaders.

Regards,
Gordon


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Loftis
Sent: Tuesday, May 02, 2006 2:21 PM
To: Guy Dallaire; amanda users list
Subject: Re: Disabling LTO-2 hardware compression

mt -f <st device> datcompression 0

That should work, but it will likely come back on after a
restart.

--On May 2, 2006 10:17:48 AM -0400 Guy Dallaire
[EMAIL PROTECTED] wrote:

 Hi,

 Recently added an Overland LoaderXpress LTO 2 tape library
to my setup.
 Inside the loader is an OEM HP Ultrium tape drive. My tape
server is a
 centos 4.2 box (RHEL 4 clone)

 I'm using software compression with amanda 2.4.5p1 already.

 Problem is, the tape drive in the library always seems to have
 hardware compression ON. There is no way on the library
operator panel
 to force the compression OFF.

 I've heard that LTO2 drives can easily cope with already compressed
 data (and do not try to re-compress it). Is this true?
 Otherwise, I fear that trying to compress already compressed data
 might actually use more tape space and reduce throughput.

 I've searched the amanda wiki and did not find anything.

 Has anyone devised a way to force hw compression off on a
similar setup ?

 Thanks



--
Genius might be described as a supreme capacity for getting
its possessors into trouble of all kinds.
-- Samuel Butler









--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Disabling LTO-2 hardware compression

2006-05-02 Thread Michael Loftis

mt -f <st device> datcompression 0

That should work, but it will likely come back on after a restart.
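A sketch of doing that (the device path is an assumption; on Linux 2.6 a 
tape must be loaded first, as noted elsewhere in this thread):

```shell
#!/bin/sh
# Hypothetical non-rewinding tape device.
TAPE=${TAPE:-/dev/nst0}

disable_hw_compression() {
    # mt-st's datcompression subcommand toggles drive-level compression.
    mt -f "$TAPE" datcompression 0 || return 1
    mt -f "$TAPE" status            # some drives report the new state here
}

if [ -c "$TAPE" ]; then
    disable_hw_compression || echo "could not change compression setting"
else
    echo "no tape drive at $TAPE, skipping"
fi
```

Since the setting tends not to survive a drive or host reset, 
distributions that ship stinit can reapply it from /etc/stinit.def at 
driver load; whether your drive honors either mechanism is worth verifying.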

--On May 2, 2006 10:17:48 AM -0400 Guy Dallaire [EMAIL PROTECTED] wrote:


Hi,

Recently added an Overland LoaderXpress LTO 2 tape library to my setup.
Inside the loader is an OEM HP Ultrium tape drive. My tape server is a
centos 4.2 box (RHEL 4 clone)

I'm using software compression with amanda 2.4.5p1 already.

Problem is, the tape drive in the library always seems to have hardware
compression ON. There is no way on the library operator panel to force
the compression OFF.

I've heard that LTO2 drives can easily cope with already compressed data
(and do not try to re-compress it). Is this true? Otherwise, I fear that
trying to compress already compressed data might actually use more tape
space and reduce throughput.

I've searched the amanda wiki and did not find anything.

Has anyone devised a way to force hw compression off on a similar setup ?

Thanks




--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: DLEs with large numbers of files

2006-05-02 Thread Michael Loftis



--On May 2, 2006 5:49:47 PM -0400 Ross Vandegrift [EMAIL PROTECTED] wrote:


Hello everyone,

I recognize that this isn't really related to Amanda, but I thought
I'd see if anyone has a good trick...

A number of DLEs in my Amanda configuration have a huge number of
small files (sometimes hardlinks and symlinks, sometimes just copies),
often in the millions.  Of course this is a classic corner case, and
these DLEs can take a very long time to back up/restore.


I use estimated sizes and tar on these types of DLEs.  Dump may be faster 
if you can get away with it, but realistically they're both getting limited 
by what amounts to stat() calls on the filesystem to ascertain the 
modification times of the various files.  You can try for a filesystem that 
has better small files performance or upgrade your storage hardware to 
support more IOPS.
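To get a feel for how metadata-bound such a DLE is, you can time a walk 
that stat()s everything, roughly the work an estimate does before any data 
moves (the path is an assumption; point it at the slow DLE):

```shell
#!/bin/sh
# Hypothetical directory to probe.
DIR=${DIR:-/etc}

# One stat() per entry, much like an incremental estimate checking mtimes.
count_entries() {
    find "$1" -exec stat -c %Y {} + 2>/dev/null | wc -l
}

n=$(count_entries "$DIR")
echo "stat()ed $n entries under $DIR"
```

Run it under time(1): if wall-clock time is dominated by this phase, a 
faster tape or a different archiver won't help much; only more IOPS or a 
filesystem with better small-file performance will.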




Currently, they are mostly using dump (which will usually report
1-3MiB/s throughput).  Is there a possible performance advantage to using
tar instead?

On some of our installations I have bumped up the data timeouts.  I've
got one as high as 5400 seconds.  I suspect a reasonable maximum is
very installation dependant, but if anyone has thoughts, I'd love to
hear them.

Thanks for any ideas!

--
Ross Vandegrift
[EMAIL PROTECTED]

The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell.
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: amanda 2.5.0 : tar 1.15 required ?

2006-03-31 Thread Michael Loftis



--On March 31, 2006 1:36:36 PM -0500 Matt Hyclak [EMAIL PROTECTED] 
wrote:




That doesn't mean that 1.15 should be required by the spec file. tar 1.14
from Redhat works just fine
(see https://www.redhat.com/advice/speaks_backport.html)



I don't know about Red Hat's tar 1.14, but GNU tar 1.14 is severely 
broken.  It 'seems' to create OK archives, but in reality it frequently 
creates garbage that even it can't read.  I ran into this quite a bit; one 
symptom is the 'invalid base64 header' error.


Re: Attempt to contact amanda gives sshd error -- dont know why sshd is involved!

2006-03-30 Thread Michael Loftis



--On March 30, 2006 1:01:29 PM -0800 Kevin Till [EMAIL PROTECTED] 
wrote:




the Did not receive identification string from:::10.10.32.247
should not have anything to do with Amanda. Seems to me someone try to
login as amanda to that machine.


It doesn't.  This actually happens when someone or something opens the SSH 
port, never sends anything, and closes it.  Monitoring apps do this a lot 
when looking for the SSH banner.


'amanda' in the SSH lines is your host's name.  The START lines indicate 
xinetd is starting amandad, so you should look in 
/tmp/amanda/amandad*.debug and /tmp/amanda/amcheck*.debug.   They're 
date+time stamped files.



Couple things to check, is amandad started correctly on the client?
1) /etc/init.d/xinetd restart and see if there is any error on
/var/log/messages.

2) any error in /tmp/amanda/amcheck*.debug?

Thanks!

--Kevin Till
Zmanda




amcheck -m Daily

/var/log/secure gives me this:


[EMAIL PROTECTED] log]# tail secure
Mar 30 13:15:03 amanda sshd[19449]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:20:03 amanda sshd[19495]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:25:03 amanda sshd[19538]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:30:04 amanda sshd[19584]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:32:24 amanda xinetd[2244]: START: amanda pid=19702
from=10.10.32.250
Mar 30 13:32:24 amanda xinetd[2244]: START: amanda pid=19705
from=10.10.32.250
Mar 30 13:35:04 amanda sshd[20042]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:40:05 amanda sshd[20088]: Did not receive identification
string from:::10.10.32.247
Mar 30 13:45:05 amanda sshd[20131]: Did not receive identification
string from:::10.10.32.247
[EMAIL PROTECTED] log]#






--
Thank you!
Kevin Till

Amanda documentation: http://wiki.zmanda.com
Amanda forums:http://forums.zmanda.com





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Restore WO Amanda, what am I doing wrong?

2006-03-29 Thread Michael Loftis

Use /dev/nstN instead.  /dev/stN rewinds the tape every time the device is 
closed, which is why each dd below keeps re-reading the same header.
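With the non-rewinding device the transcript would behave as expected; a 
sketch (the device name is an assumption, matching the drive in the quoted 
session):

```shell
#!/bin/sh
# Hypothetical non-rewinding device; /dev/st1 rewinds on every close,
# which is why each dd in the quoted transcript re-read the header.
TAPE=${TAPE:-/dev/nst1}

read_first_image_block() {
    mt -f "$TAPE" rewind || return 1
    dd if="$TAPE" bs=32k count=1     # file 0: the AMANDA tape label
    mt -f "$TAPE" fsf 1 || return 1  # position is kept between commands on nst
    dd if="$TAPE" bs=32k count=1     # header of the first dump image
}

if [ -c "$TAPE" ]; then
    read_first_image_block || echo "tape positioning failed"
else
    echo "no tape at $TAPE, skipping"
fi
```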

--On March 29, 2006 10:52:57 PM -0500 stan [EMAIL PROTECTED] wrote:


I'm trying to bootstrap replacing a failed Amanda machine.

I've built the new machine, and moslty got it working.
So, it's time to get the config files, and maybe indexes
off of the tapes from the faile machine.

I looked in the docs, and this should be simple. But:


Script started on Thu Mar 30 03:41:04 2006
# mt -f /dev/st1 rewind
# mt -f /dev/st1 fsf 1
# dd if=/dev/st1 bs=32k count=1
AMANDA: TAPESTART DATE 20051216 TAPE DailyDump74
1+0 records out
32768 bytes transferred in 5.310011 seconds (6171 bytes/sec)
# dd if=/dev/st1 bs=32k [EMAIL PROTECTED]:~# mt -f /dev/st1 fsf 1
# mt -f /dev/st1 fsf 1 [EMAIL PROTECTED]:~# dd if=/dev/st1 bs=32k count=1
AMANDA: TAPESTART DATE 20051216 TAPE DailyDump74
1+0 records out
32768 bytes transferred in 5.323727 seconds (6155 bytes/sec)
# dd if=/dev/st1 bs=32k [EMAIL PROTECTED]:~# mt -f /dev/st1 fsf 1
# mt -f /dev/st1 fsf 1 [EMAIL PROTECTED]:~# dd if=/dev/st1 bs=32k count=1
AMANDA: TAPESTART DATE 20051216 TAPE DailyDump74
1+0 records out
32768 bytes transferred in 5.318464 seconds (6161 bytes/sec)
dd if=/dev/st1 bs=32k [EMAIL PROTECTED]:~# mt -f /dev/st1 fsf 1  
2
# mt -f /dev/st1 fsf [EMAIL PROTECTED]:~# dd if=/dev/st1 bs=32k count=1
AMANDA: TAPESTART DATE 20051216 TAPE DailyDump74

1+0 records out
32768 bytes transferred in 5.313033 seconds (6167 bytes/sec)
#
Script done on Thu Mar 30 03:45:44 2006

As you can see, even when I move the tape foward, all I get is the tape
header. This is not what I _think_ I should be getting.

What am I doing wrong?

--
U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite
Vietcong Terror  - New York Times 9/3/1967






--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Best filesystem type for larg (500G) dumpdisk

2006-03-26 Thread Michael Loftis



--On March 26, 2006 12:27:02 AM +0100 Matthias Andree 
[EMAIL PROTECTED] wrote:



Michael Loftis [EMAIL PROTECTED] writes:


Reiser will take a while to mount such a large filesystem, as may XFS.
I haven't tried anything that big recently with ext3, but you can try
it... though I'm kind of interested now, so I might see for myself.


Well, just one data point ext3fs works for me on a 270 GB RAID5 (decent
hardware) fileserver.


Well, for 'a while' I can offer a reasonable datapoint.  Our tape server 
has about a 600GB ReiserFS partition; I can time it next time I reboot 
it... I think it's about a minute.


Re: Best filesystem type for larg (500G) dumpdisk

2006-03-25 Thread Michael Loftis



--On March 25, 2006 11:30:07 AM -0500 stan [EMAIL PROTECTED] wrote:


On Sat, Mar 25, 2006 at 11:25:01AM -0500, Joshua Baker-LePain wrote:

On Sat, 25 Mar 2006 at 10:37am, stan wrote

 Subject line pretty much says it all.

 What filesystem type should I use for a largish dumpdisk, oh and how
 about for a vtape partiton too?

What OS/distro are you using on the server?


Sorry (hides face in shame). I'm using Ubuntu Breezy. I plan on building
my own 2.6.16 kernel. So my choices are limited to what's available
on Linux.

I would think BTW, that if i had managed to get my hardware working
with Solaris 10 (which was my first plan), I'd have used zfs.



Solaris for x86 has always been short in the hardware department... In 
past days it was kinda sluggish too, because it lacked support for a number 
of things that were fast on SPARC thanks to hardware support in the 
processor.  I think with 10 they finally got VERY serious about x86 and 
ESPECIALLY x86_64 performance, and it's improved; that said, I haven't had 
any time to even test Sol 10.  Soon... sometime...



I'd tend to recommend ReiserFS or XFS first (to me Reiser is better, XFS 
second... Reiser seems to handle corruption a little better, except in the 
case of tail corruption, in which case you can possibly lose all the tails 
on the filesystem), followed by ext3.  Ext2 isn't an option, because for 
500+GB you need journalling.


Reiser will take a while to mount such a large filesystem, as may XFS.  I 
haven't tried anything that big recently with ext3, but you can try it... 
though I'm kind of interested now, so I might see for myself.  My 
benchmarks would be out of whack with yours, though, because of CPU and 
storage backend differences. :)


Re: Best filesystem type for larg (500G) dumpdisk

2006-03-25 Thread Michael Loftis



--On March 25, 2006 3:19:03 PM -0700 Michael Loftis [EMAIL PROTECTED] 
wrote:




I'd tend to recommend ReiserFS or XFS first (to me Reiser is better, XFS
second... Reiser seems to handle corruption a little better, except in the
case of tail corruption, in which case you can possibly lose all the tails
on the filesystem), followed by ext3.  Ext2 isn't an option, because for
500+GB you need journalling.


I should also note that I say Reiser's tools are better because they 
actually fix the filesystem.  With XFS we've had filesystems that never 
quite recovered after needing repair by the xfs tools: they kept coming up 
with more errors, or crashing the machine, despite the filesystem checking 
out as 'fine' after a forced full check.




Reiser will take a while to mount such a large filesystem, as may XFS.
I haven't tried anything that big recently with ext3, but you can try
it... though I'm kind of interested now, so I might see for myself.  My
benchmarks would be out of whack with yours, though, because of CPU and
storage backend differences. :)





--
Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler


Re: Total tape usage in Amanda report

2006-03-23 Thread Michael Loftis

Uhm...Yes it does...

Just after the STATISTICS section you should have (by default):

USAGE BY TAPE:
 LabelTime  Size  %Nb
 CDO801   2:18 34005742k   99.4   546
 CDO798   1:33 27109731k   79.217


That'll list the total it got onto the tape.

--On March 24, 2006 10:03:45 AM +0700 Olivier Nicole [EMAIL PROTECTED] 
wrote:



Hello,

Is there a way to have the total tape usage in Amanda report?

It happens that sometimes Amanda runs out of tape after 50 GB (which is
normal for a 50GB SLR100 tape) but sometimes only after 8GB, which
clearly means there is a problem.  But Amanda reports do not show any
total, and I have to compute it by hand to see when there is a problem
and when there is not.

Best regards,

Olivier







Re: Total tape usage in Amanda report

2006-03-23 Thread Michael Loftis



--On March 24, 2006 10:35:38 AM +0700 Olivier Nicole [EMAIL PROTECTED] 
wrote:



Uhm...Yes it does...
Just after the STATISTICS section you should have (by default):


My mistake, it sure is there, but I never noticed it :(


NP... note that those numbers are just the amount of successfully taped 
DLEs, so they can be a little misleading: they might say something like 
33GB (of a 40GB tape) when the next DLE it wanted to put on the tape was, 
say, 8GB.




Thank you,

Olivier







Re: hardware gzip accelerator cards

2006-03-11 Thread Michael Loftis



--On March 11, 2006 2:17:50 PM +0100 Kai Zimmer [EMAIL PROTECTED] wrote:


Hi all,

has anybody on the list experience with hardware gzip accelerator cards
(e.g. form indranetworks)? Are they of any use for amanda - or is the
disk-i/o the limiting factor? And how much are those (generally
pci-based) cards?

thanks,
Kai



Depends on the machine; most machines are disk I/O limited.  For those that 
aren't, unless the card accelerates the gzip command it's worthless. 
Usually they require special APIs to be implemented in a special (Apache) 
module in order to work.  That's not to say you couldn't write a gzip 
implementation using the card.  It might not be any faster, though; in fact 
it might be slower.  Modern CPUs are pretty damned fast.  And because of 
the nature of compression, you need a GP proc to run it, and it's not very 
likely you'll get anything faster than a newer Athlon or P4 on one of these 
cards.  Add to that the fact that you have to move data to/from main memory 
over whatever bus (especially a slow PCI bus), and you might actually be 
*slower* running one of these cards.


They're meant to accelerate systems that are being used pretty heavily for 
other things, by freeing the main processor to run the intensive Java apps 
or ASP.Net apps.


That said, you might see an improvement if you can get the gzip command-line 
program accelerated, or whatever your dump/tar/gtar equivalent uses.
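
As a rough way to gauge whether software gzip (rather than disk I/O) is the 
bottleneck on a given box, you can time gzip against an easy data stream. 
This is just a probe, and the sizes are arbitrary:

```shell
# Time software gzip on 32 MB of trivially compressible data.
# If these times dwarf your disk/tape throughput, CPU is the
# bottleneck; otherwise an accelerator card buys you little.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=32 2>/dev/null

time gzip -1 -c "$tmp" > /dev/null   # fastest compression
time gzip -9 -c "$tmp" > /dev/null   # "best" compression, far more CPU

rm -f "$tmp"
```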







Re: If most recent backup is not level 0, recovery fails to bring back all files when directories have been renamed

2006-03-08 Thread Michael Loftis



--On March 8, 2006 11:34:16 AM + Dave Ewart [EMAIL PROTECTED] wrote:



Thoughts/opinions here?  BTW the AMANDA server runs Debian/Woody
(2.4.2p2-4) and the client being backed up above runs Debian/Sarge AMD64
(2.4.4p3-3).


It could very well be (b) -- if it is, it's not Amanda, it's tar.  The backup 
program is exclusively responsible for what does and does not get backed 
up.  Amanda just communicates a level/last backup date/file list to use, 
depending on the dump/backup program.  I know that the new tar version in 
sarge, at least before the security update that just went out, is broken 
anyway.  It doesn't create valid archives a lot of the time; you'd be 
better off installing a backported tar from unstable, or rebuilding the one 
in oldstable and using that.  Google around for invalid base64 or 
obsolescent base64 header skipping to next archive (I think that's right) 
if you haven't seen the error message yet.




Cheers,

Dave.
--
Dave Ewart
[EMAIL PROTECTED]
Computing Manager, Cancer Epidemiology Unit
Cancer Research UK / Oxford University
PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370
Get key from http://www.ceu.ox.ac.uk/~davee/davee-ceu-ox-ac-uk.asc
N 51.7518, W 1.2016






Re: compress client best

2006-02-09 Thread Michael Loftis



--On February 9, 2006 9:24:40 AM +0100 [EMAIL PROTECTED] wrote:

...


the dumptype 'global' only contains index yes.

Any idea?


Indexes are always compressed server-side.


Thank you and best regards
Uwe



Virus checked by G DATA AntiVirusKit
Version: AVK 16.5417 from 09.02.2006







Re: System Crash when using Amanda

2006-01-10 Thread Michael Loftis



--On January 10, 2006 3:43:56 PM -0500 Freels, James D. 
[EMAIL PROTECTED] wrote:



Do you happen to be using the aic7xxx driver ? If so, I had the same
problem until the new kernel 2.6.15.



I'm having problems in Debian 2.6.8 related to aic7xxx as well... I use an 
aic7xxx HVD SCSI adapter connected to a tape library.  Occasionally, since 
going to 2.6, it totally locks up one of the tape drives to the point that 
I have to shut down and re-init the drive (which in this case means power 
cycling the whole library) and then reboot the tape server.


DLE 'aliasing'

2006-01-08 Thread Michael Loftis
OK, maybe I should get flogged, but today I had/have to split a GNUTAR DLE 
into a couple of pieces.  The problem is I'm just cutting this dir in 
half: [a-m]* in one, and [n-z]* in another.  What I'm drawing a blank on is 
that they're both in the same 'root' directory (let's use /home/u1/ for 
argument's sake), so... how do I make two separate DLEs like that with 
different 'names' on the backup?  I'm pretty sure it's possible; I'm just 
not seeing the example I was thinking of anywhere.


TIA!



Re: DLE 'aliasing'

2006-01-08 Thread Michael Loftis



--On January 8, 2006 8:03:46 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:


from the man page the syntax of a disklist entry is defined as:


I couldn't find a man page for disklist...well, not on my system, but, 
there again, I'm realizing now it's probably amanda or amanda.conf. 
*headdesk*




  hostname diskname [diskdevice] dumptype [spindle [interface]]

Note the diskdevice is optional.
You have been using the device name as the diskname.  (I like to
think of them as DLEnames, as diskname sounds like the device.)

So you can have host foo /home/u1 and host bar /home/u1.
It is host and name which must be unique, not device.

You could even try some form of clarifying names:

   host /home/u1-a2m /home/u1 ...
   host /home/u1-n2z /home/u1 ...


Yup, that's more or less what I was going to do: dir_a-m, dir_n-z.  See 
where that gets me as a start; if one, the other, or both still time out, 
cut them in half again.  Keep repeating until I've got them pared down to 
what works.




BTW no possibility of A-Z or 0-9 or ???, just l.c. letters?



Yup, just lowercase letters in this case -- restrictions on what the 
program creates.  The subdirs are/were just getting too filled up to 
reliably back up without data timeout errors.
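
For the archives, the resulting disklist pair might look something like 
this (the hostname and dumptype name are placeholders, and the inline 
include syntax should be double-checked against your Amanda version's 
disklist documentation):

```
# Two DLEs with distinct DLEnames over the same directory.
# hostname        DLEname       diskdevice  dumptype
host.example.com  /home/u1-a2m  /home/u1  {
    user-tar
    include "./[a-m]*"
}
host.example.com  /home/u1-n2z  /home/u1  {
    user-tar
    include "./[n-z]*"
}
```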


Re: BUG (was: Re: Handitarded....odd (partial) estimate timeout errors.)

2006-01-05 Thread Michael Loftis



--On January 5, 2006 4:49:53 PM +0100 Paul Bijnens 
[EMAIL PROTECTED] wrote:



Michael Loftis wrote:



Paul asked for the logs, it seems like there's an amanda bug.  The units


Yes, indeed, there is a bug in Amanda!
You have 236 DLE's for that host, and from my reading of the code
the REQuest UDP packet is limited to 32K instead of 64K (see planner.c
lines 1377-1383)  (Need to update the documentation!)


Woot, I'm NOT crazy! :D

...did I just say woot?  My apologies.


It seems that the planner splits up the REQuest packet into separate
UDP packets when it exceeds MAX_DGRAM/2, i.e. 32K.
Your first request was 32580 bytes.  Adding the next string to that
request would have exceeded the 32768 limit.
The reason for the division by 2 seems to be to reserve space for error
replies to each of those.


I knew it was size related, but my packets were significantly smaller than 
MAX_DGRAM.  This definitely explains it.



However, the amandad client only expects one and only one REQuest packet.
Any other REQuest packet coming from the same connection (5-tuple:
protocol, remotehost, remoteport, localhost, localport) and having
a type REQ is considered a duplicate.
It should actually test for the handle and sequence to be identical
too. It does not.

It's not fixed quickly either:  when receiving the first REQ packet,
the amandad client forks and execs the request program (sendsize in
this case) and reads from the results from a pipe.

By the time the second, non-identical request comes in (with different
handle, sequence -- which is currently not checked), sendsize is already
started and cannot be given additional DLE's to estimate.


As a temporary workaround, you could shorten the exclude-list string for
that host by creating a symlink:

ln -s /etc/amanda/exclude.gtar /.excl


Yeah... this will help for a time.  Hopefully long enough for a patch to 
fix amandad.  I'll have to create a separate dumptype for this server, 
since we've got well over a hundred now and they all share that main backup 
type.  I figured shortening the UDP packets somehow would help; I knew it 
was just odd that it wasn't quite right and I seemed to be running into the 
problem way too early :)



and use that as exclude-list: this shortens each line by 20 byte, which
would shrink the package to fit again. (236 DLE's * 20  = 4720 bytes
less in a REQuest UDP for that host!)



AnywayI'm getting a headache thinking about it :)  all my other DLEs
seem ok for that host, and the ones that it misses are not always
exactly the same, but all seem to be non-calcsize estimated.


Just bad luck for those entries that happen to go at the end of the
queue.  On the other hand, when really unlucky, you could have up to
three estimates for each DLE, overflowing even the 4K we saved by
shrinking the exclude string...


Like I said, hopefully by then either the hackers (or myself) will have put 
together a patch.  I see three ways to fix this, one of which I don't know 
will work: what about turning wait=yes to wait=no in my xinetd.conf?  Not 
sure what that would break.  The others involve code: multiple sendsizes, 
*or* a protocol change to wait for a 'final start' packet, or an amandad 
change to wait a few extra seconds before starting the actual sendsize, 
coalescing the results.


And you're right, the other ways aren't easy...one involves possibly 
breaking the protocol too.
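
As a quick sanity check on the numbers in this thread, the symlink 
workaround saves about 20 bytes per DLE line (a trivial sketch; the figures 
are the ones quoted above):

```shell
# 236 DLEs, each losing ~20 bytes of exclude-list path when
# "/etc/amanda/exclude.gtar" is replaced by "/.excl"
dles=236
saved_per_dle=20
echo "$((dles * saved_per_dle)) bytes trimmed from the REQuest packet"
```

That matches the 4720 bytes computed earlier in the thread.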






Re: BUG

2006-01-05 Thread Michael Loftis



--On January 5, 2006 11:05:44 AM -0700 John E Hein [EMAIL PROTECTED] wrote:


I still think we need to be able to break up estimate requests into
multiple chunks if necessary.  I never got around to making a patch
for that.


Yeah, and amandad would have to properly understand breaking up its REPs 
(PREPs...?).




Handitarded....odd (partial) estimate timeout errors.

2006-01-04 Thread Michael Loftis
I added about half a dozen or so DLEs (splitting an existing one), and 
since that time I get estimate timeout errors for some other DLEs on this 
host (daily run snippet attached).  I suspect I'm hitting a UDP packet 
limit, maybe, but I'm really drawing a blank.  I've turned up etimeout 
quite a bit, to no effect.


Maybe someone can jog my memory, but are the estimates returned in a single 
UDP packet and therefore subject to the MTU?  If so, how do I get around 
it?  Or maybe I'm missing something more obvious.  Amanda 2.4.5 server and 
client; the client is Debian woody, the server Debian sarge.  Client DLEs 
all have the 'calcsize' estimate setting except for the affected DLEs, but 
not all non-calcsize DLEs are affected...  If you need anything else, let 
me know.



 planner: ERROR Request to nfs0.msomt timed out.
 nfs0.msomt /var/spool/cron lev 0 FAILED [missing result for 
/var/spool/cron in nfs0.msomt response]
 nfs0.msomt /usr/local lev 0 FAILED [missing result for /usr/local in 
nfs0.msomt response]
 nfs0.msomt /root lev 0 FAILED [missing result for /root in nfs0.msomt 
response]






Re: Handitarded....odd (partial) estimate timeout errors.

2006-01-04 Thread Michael Loftis



--On January 4, 2006 4:30:53 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:



You can find comments on the problem here:

http://tinyurl.com/ca7pv



OK, hmm... something REALLY odd is happening.  For the DLEs that failed 
there are multiple sendsize requests: one in the main/first REQ, which it 
acks... then another request (a second or two later) just for the DLEs that 
never make it.  amandad claims this to be a dup P_REQ packet, acks it 
anyway, but apparently doesn't do any estimates from it.  I'm wary of 
sending the entire debug to the list, but if interested I'll send it 
directly to the developer(s).  I'm thinking maybe something funny is going 
on?







Re: Handitarded....odd (partial) estimate timeout errors.

2006-01-04 Thread Michael Loftis



--On January 4, 2006 7:20:50 PM -0500 Jon LaBadie [EMAIL PROTECTED] wrote:


Asking questions about things I know nothing ...  :)

Are you using iptables?
If so, have you installed and configured the ??conntrack?? module?


Paul asked for the logs; it seems like there's an Amanda bug.  The units in 
question are attached to the same broadcast domain/VLAN and are in the same 
subnet, so they talk directly to each other.  It's not an obvious network 
or switch problem.  I thought maybe an MTU limit of 1500 bytes, but 
apparently Amanda is set to fragment UDP packets up to 64k, so that should 
be fine, and other drives are making it.  Anyway, thanks anyway, Jon :)  I 
think we've hit some sort of bug in amandad, or planner (I think it sends 
the SERVICE sendsize packets), or both.


Network-wise, BTW, the backup server is connected to a switch here in the 
office, which is trunked to a switch upstairs, then to another switch in 
the blade chassis, then to the untrunked connection to the (Amanda backup 
client) NFS server, which is the one having issues.  It seemed like maybe 
some sort of odd packet-size limit, or some other 'max number of' limit in 
planner, since planner is sending what amount to duplicate requests for the 
affected DLEs.



AnywayI'm getting a headache thinking about it :)  all my other DLEs 
seem ok for that host, and the ones that it misses are not always exactly 
the same, but all seem to be non-calcsize estimated.


Re: Verizon subscribers -- off topic

2005-12-06 Thread Michael Loftis



--On December 6, 2005 10:01:06 AM +0100 Geert Uytterhoeven 
[EMAIL PROTECTED] wrote:



On Tue, 6 Dec 2005, Paul Bijnens wrote:

[Off topic]

This isn't the first time I'm hit with this nonsense: I can't send mail
to a Verizon email address.  And I'm surely not alone.

  http://www.theinquirer.net/?article=23703

Just to let people know (Gene!) that I do send mail to them, I'm not
ignoring them.  But their provider is ignoring their users.

Yes, I did fill out the whitelist request, twice already. Just enough
to get one mail pass through, and then a few weeks later, it bounces
again.

If people with Verizon email addresses want to read my responses, it's
time to switch providers.

Pfeew, that reliefs the anger a bit...


I can confirm I cannot send email to Gene from work, but I can from home.
Apparently it's unrelated to the sender address, but related to the
outgoing SMTP server.

Gr{oetje,eeting}s,


Actually, Verizon can and does block based on a number of different 
criteria, just as AOL does; the difference is Verizon is completely 
non-transparent as to what got you blocked and whom you can contact to get 
unblocked.  They also seem to have *no* control over their own automated 
systems, whereas AOL at least has some.


Re: amverify shows invalid sparse archive member errors

2005-12-01 Thread Michael Loftis



--On December 1, 2005 7:12:30 PM -0500 Ed Kenny [EMAIL PROTECTED] wrote:




Hi there,

I need a little help. I've installed Amanda server with clients through a
firewall. Everything is OK except when the disklist includes the AIX
client the amverify command on the server produces hundreds of these
errors:


Known tar 1.15-ish bug.  Debian tar 1.15.1-2 has a patch for it at least, 
though I thought that 1.15.1 upstream had it patched; it showed up in or 
around 1.14 and persisted for quite a while.  Debian Sarge's default tar 
has the issue.  1.13.25 is known to me to be good, though I thought it was 
fixed in 1.15.1 as well, but maybe only in Debian's.  Try finding an older 
or newer tar package: 1.13.25-ish, or go after 1.15.1; 1.14 definitely has 
the bug.


/bin/gtar: ./987366/cred: invalid sparse archive member

/bin/gtar: ./987366/sigact: invalid sparse archive member

/bin/gtar: ./991470/sigact: invalid sparse archive member

/bin/gtar: ./995570/cred: invalid sparse archive member

/bin/gtar: ./995570/sigact: invalid sparse archive member

/bin/gtar: ./999676/sigact: invalid sparse archive member



The tar version from both machines:

AIX Client:

(/)- which tar

/usr/bin/tar

(/)- /usr/bin/tar --version

tar (GNU tar) 1.15.1

(/)-



RedHat Server:

(/root)- which tar

/bin/tar

(/root)- /bin/tar --version

tar (GNU tar) 1.15.1

(/root)-



amverify does not write to the /tmp/amanda place like it's supposed to
either.



Any ideas?









Re: Is automatic wakeonlan possible?

2005-11-10 Thread Michael Loftis



--On November 10, 2005 6:00:59 PM -0800 Kevin Dalley [EMAIL PROTECTED] 
wrote:



Does amanda have a method of automatically running wakeonlan, or the
equivalent?  What do other people do?


I don't back up desktops/workstations.  There's a central fileserver 
available for things that people want/need to back up from their 
workstation/desktop.
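
If you did want to wake clients automatically, one low-tech route is to 
schedule a wake-up shortly before the nightly run.  A hypothetical 
/etc/crontab sketch (the wakeonlan utility, MAC address, and config name 
are all assumptions, untested here):

```
# Wake the client 15 minutes before the nightly amdump run.
45 0 * * 1-5  root    wakeonlan 00:11:22:33:44:55
0  1 * * 1-5  amanda  /usr/sbin/amdump Daily
```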


Re: Estimate Disable/tweak patch?

2005-10-21 Thread Michael Loftis



--On October 21, 2005 9:42:13 AM +0200 Paul Bijnens 
[EMAIL PROTECTED] wrote:



Michael Loftis wrote:


This sounds exactly like what I need  Do you know if 2.4.5 server
and clients can be intermixed with 2.4.4p3 (thereabouts) clients?  I


Yes, without any problem.  Of course, you cannot use the newer features
on a 2.4.4 client, like the advanced estimate options.
As far as I know, all 2.4.X versions can communicate with each other.


That's a given :) and perfectly fine.  I really only have four or five 
hosts that *need* this.  Other than that, the rest can get upgraded 
whenever.


Thanks again!




Estimate Disable/tweak patch?

2005-10-20 Thread Michael Loftis
There was someone who had posted here or to hackers an AMANDA estimate 
disabler tweak or patch.  I was wondering where this is.


And yes, I tried searching, but Yahoo Groups is apparently completely 
broken.  After a second or two it comes back with "Partial search 
completed. Your search timed out before any results matching your search 
were found. Find more results for this search by clicking the button 
below." which would be fine if it were searching a decent chunk of 
articles.  It's not, only about 100-500 at each click, and not finding 
anything; and it insists on starting from the oldest, I think.





Re: Estimate Disable/tweak patch?

2005-10-20 Thread Michael Loftis


--On October 20, 2005 1:40:00 PM -0400 Matt Hyclak [EMAIL PROTECTED] 
wrote:



On Thu, Oct 20, 2005 at 11:13:55AM -0600, Michael Loftis enlightened us:

There was someone who had posted here or to hackers an AMANDA estimate
disabler tweak or patch.  I was wondering where this is.



Since 2.4.5b1 there are a couple of options for estimates in your
dumptype:


This sounds exactly like what I need.  Do you know if 2.4.5 servers and 
clients can be intermixed with 2.4.4p3 (thereabouts) clients?  I have a 
bunch of machines being backed up that it will take a long time to get a 
new version out to, but I need this badly for my mailserver and main 
fileserver, if it indeed will speed things up.  Right now the systems all 
wait several hrs for the mail server, and the fileserver takes over two hrs 
too.  Lots of files, lots of data.




* new 'estimate' dumptype option to select estimate type:
  CLIENT: estimate by the dumping program.
  CALCSIZE: estimate by the calcsize program; a lot faster but less
  accurate.
  SERVER: estimate based on statistics from the previous run; takes
  seconds but can get the estimated size wrong.

I use CALCSIZE on my slow clients.

Matt

--
Matt Hyclak
Department of Mathematics
Department of Social Work
Ohio University
(740) 593-1263





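
For reference, those estimate options land in the dumptype.  A minimal 
sketch for 2.4.5+ (the dumptype name and parent 'global' are placeholders; 
verify the exact option spelling against your version's amanda.conf man 
page):

```
define dumptype fast-estimate-tar {
    global
    program "GNUTAR"
    estimate calcsize    # faster, less accurate estimates
}
```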


Re: Restoring without Amanda

2005-09-07 Thread Michael Loftis



--On September 7, 2005 8:36:20 PM +0100 Tony van der Hoff 
[EMAIL PROTECTED] wrote:




yes, personally, I had no problem with the hyphen, but ...

I posted last week that my confusion had arisen out of the skip=1,
having located the required archive. The need to rewind and skip 2 files
in this case could have been avoided at step 1 above. Now *that* would
make a difference.


dd operates at the block level; thus a skip of one skips one block, not 
one file.


It is a little confusing at first, but AMANDA assumes (maybe incorrectly 
these days) familiarity with dd.
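
The block-versus-file distinction is easy to demonstrate with a regular 
file standing in for the tape (a small illustration; filenames are 
arbitrary):

```shell
# Build a file of four 512-byte blocks filled with 'a','b','c','d'.
for c in a b c d; do
  printf "%512s" "" | tr ' ' "$c"
done > blocks.bin

# skip=1 skips one *block* of bs bytes, not one file/record:
dd if=blocks.bin bs=512 skip=1 count=1 2>/dev/null | head -c 1   # -> b

rm -f blocks.bin
```

On a real tape, skipping a whole file is mt fsf's job, not dd skip's.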





Re: This is retarded.

2005-08-30 Thread Michael Loftis



--On August 30, 2005 2:36:45 PM -0600 Graeme Humphries 
[EMAIL PROTECTED] wrote:



This looks like a hardware error with your tape drive more than an amanda
problem, at least to me.


Could be, or it could be AMANDA's still-poor behavior when a DLE exceeds 
the capacity of a single tape and the tapetype is just a bit long for the 
tapes.  Or maybe, as someone hypothesized, taper ignores the tapelen and 
just writes to EOT, and only planner really uses the tapelen?  I'm not 
sure; I haven't tested that myself and didn't get a chance to read the rest 
of that thread... :S




Graeme







Re: tape drive repair question

2005-08-22 Thread Michael Loftis



--On August 22, 2005 3:14:04 AM -0400 Jon LaBadie [EMAIL PROTECTED] wrote:


Not strictly amanda related, but I can't
use my amanda setup until it is resolved.

I've got an HP DDS3 autoloader (SureStore 6x24) and
need to have it serviced or do something myself.

The drive uses a plastic magazine to hold 6 tape
cartridges, 3 each in the front and rear.  During
normal operation the magazine must be rotated 180
degrees to reach the tapes at the other end.  This
movement is what is failing.

The rotation begins and seems to stick about 1/3
of the way around.  If I operate the drive without
a cover, I can manually assist the operation and
it completes.  If it was only a mechanical device
I'd try spraying some lubricant.  But that is
probably not a good idea inside a tape drive ;)


I've never serviced one of these units; however, in general, what is used 
is a white lithium grease compound for lubrication.  Sometimes it may have 
a teflon component as well (as part of the grease).  This is a 'general 
purpose' type of lubricant for computer gear.  HP uses it in their 
printers, on both plastic and metal parts.  It's very neutral, and usually 
not very tacky to dust, so it stays clean.


Assuming it's something of that nature and not a failing/failed servomotor 
or other drive motor for the movement of the mechanism it *should* be 
fairly easy to service.  However, as I said, I'm unfamiliar with this 
particular unit.


Sorry I can't offer any other helpMaybe someone more knowledgeable will 
kick in :)


Re: Does Attached Changer matter in mtx inquiry?

2005-08-13 Thread Michael Loftis
No.  'Attached Changer' in this context, I believe, means SCSI 
attachment... something very different from physical attachment.  In any 
case I've only seen one changer that reported as attached, and IIRC it 
definitely had to do with its SCSI bus behaviour.


So it's not saying 'no changer'; it's saying it behaves in such-and-such a 
way when SCSI commands are sent to it (it detaches from the SCSI bus).


Re: multiple gzip on same data!?

2005-06-29 Thread Michael Loftis



--On June 29, 2005 9:57:48 AM -0600 Graeme Humphries 
[EMAIL PROTECTED] wrote:



Now, why oh why is it doing *two* gzip operations on each set of data!?
It looks like the gzip --best isn't actually getting that much running
time, so is there something going on here that's faking me out, and it
isn't *actually* gzipping everything twice? :)


Nope, it isn't.  One is for the index, one for the data.  I had the same 
'huh?!' question (sort of) a while back, since I do client-side compression 
and still had gzips running ;)





Re: multiple gzip on same data!?

2005-06-29 Thread Michael Loftis



--On June 29, 2005 10:58:07 AM -0600 Graeme Humphries 
[EMAIL PROTECTED] wrote:



Ahhh, that makes sense then. Alright, I've got to beef up my AMANDA
server, because it's struggling along with just those 4 gzips, and I
want to have 4 dumpers going simultaneously all the time.


Then do client-side compression?  Is there really a reason you're not? 
Unless your clients are all extremely slow, that's what I would suggest.





Re: Tape Library Recommendations

2005-06-23 Thread Michael Loftis



--On June 23, 2005 6:17:26 PM + James Marcinek [EMAIL PROTECTED] 
wrote:




I am looking recommendations for a tape library device that works well
with amanda. Right now the client is using 8mm tapes, so I'd be
interested in other feedback in regards to other types of media (LTO,
DLT, etc.) and pros and cons of each.


LTO, DLT, S-LTO, etc. all share the huge advantage of the same physical 
form factor, so 'upgrading' a DLT library to LTO or S-LTO is just 
adding/upgrading the tape drives in it.  DLT (sometimes called CompacTape 
IV) has a long history and is a good, reliable medium, with a lot of 
vendors selling drives and





Re: Tape Library Recommendations

2005-06-23 Thread Michael Loftis



--On June 23, 2005 3:50:57 PM -0400 Mitch Collinsworth 
[EMAIL PROTECTED] wrote:




On Thu, 23 Jun 2005, Michael Loftis wrote:


LTO, DLT, S-LTO, etc all have the huge advantage of the same physical
form  factor.  So 'upgrading' a DLT library to LTO, or S-LTO is just
adding/upgrading the tape drives in it.  DLT (sometimes called
Compactape IV)  has a long history and is a good reliable medium, with a
lot of vendors  selling drives and


Yeah that's what you'd think.  I used to run an Overland loader with
DLT4000.  The thing was rock-solid, so when the time came I tried to
swap out the drive for a DLT7000.  Nope, Overland wouldn't hear of it.
They insisted I had to start all over with a new loader.  (Yeah, much
more $$, too.)  Turned out at least part of the problem was that the
SCSI version of the 4000 loader wasn't fast enough for the 7000 drive.


Well... mostly, I guess.  There are some exceptions, like when the SCSI bus 
is shared between the loader/changer and the drives, and the loader/changer 
uses HVD instead of the now more usual LVD (and usually faster LVD at that) 
variants.




-Mitch







Re: DDS-3 tapetype ???

2005-06-20 Thread Michael Loftis

As an FYI here's what I got with my DAT7000 (Quantum) drives

~# mtx load 28 1 ; sleep 160 ; mt -f /dev/nst1 datcompression 2 ; 
amtapetype -f /dev/nst1 -t DAT7000_Compressed -e 70G ; mt -f /dev/nst1 
rewind ; mt -f /dev/nst1 datcompression 0 ; amtapetype -f /dev/nst1 -t 
DAT7000_UnCompressed -e 35G ; mt -f /dev/nst1 offline ; mtx unload 28 1

Compression on.
Compression capable.
Decompression capable.
Writing 256 Mbyte   compresseable data:  27 sec
Writing 256 Mbyte uncompresseable data:  60 sec
WARNING: Tape drive has hardware compression enabled
Estimated time to write 2 * 71680 Mbyte: 33600 sec = 9 h 20 min
wrote 940417 32Kb blocks in 41 files in 7065 seconds (short write)
wrote 951844 32Kb blocks in 83 files in 7178 seconds (short write)
define tapetype DAT7000_Compressed {
   comment just produced by tapetype prog (hardware compression on)
   length 29566 mbytes
   filemark 0 kbytes
   speed 4251 kps
}
Compression off.
Compression capable.
Decompression capable.
Writing 128 Mbyte   compresseable data:  28 sec
Writing 128 Mbyte uncompresseable data:  28 sec
Estimated time to write 2 * 35840 Mbyte: 15680 sec = 4 h 21 min
wrote 1066524 32Kb blocks in 93 files in 7193 seconds (short write)
wrote 1072258 32Kb blocks in 187 files in 7412 seconds (short write)
define tapetype DAT7000_UnCompressed {
   comment just produced by tapetype prog (hardware compression off)
   length 33418 mbytes
   filemark 0 kbytes
   speed 4687 kps
}
Unloading Data Transfer Element into Storage Element 28...done




Re: DDS-3 tapetype ???

2005-06-17 Thread Michael Loftis



--On June 17, 2005 9:51:48 AM +0200 Paul Bijnens 
[EMAIL PROTECTED] wrote:



Estimated time to write 2 * 12288 Mbyte: 26880 sec = 7 h 28 min
wrote 298832 32Kb blocks in 76 files in 10632 seconds (short write)
wrote 310628 32Kb blocks in 158 files in 10840 seconds (short write)


These two lines are actually a little strange.  I would have expected
that the second pass wrote a little bit less then the first pass (and
the difference is the space taken up by the additional filemarks).


Actually, on my DAT drives I get the same thing.  Reliably.  Multiple (new) 
tapes, multiple drives.  I think it's maybe a bug of some nature in the 
tapetype program.  I've only ever run the tests without compression, 
personally.


*shrugs*




Re: DDS-3 tapetype ???

2005-06-17 Thread Michael Loftis



--On June 17, 2005 1:37:42 PM -0400 Jon LaBadie [EMAIL PROTECTED] wrote:



Have you tried amtapetype with compression intentionally left on?


Not yet... I can pull out a tape and start a run; it takes about 4 hrs on 
my tape.  I will report back either later tonight (if I remember) or 
tomorrow morning.  Though I'm separate from the individual who started this 
thread, the results should be interesting.


Actually, I'll start a 'batch' run with compression on and off, and we can 
see what we get.  It'll take at least 8 hrs to complete, so I probably 
won't have results until tomorrow.  Not intending on being here too late 
tonight.





It might confirm that the current results were obtained with
compression off.  I'm thinking about a possibility where the
drive dip switches are set to not allow switching it off.
My HP drive has such a switch.




Re: Seeking message clarification.

2005-06-04 Thread Michael Loftis



--On June 4, 2005 1:35:49 AM -0400 Jon LaBadie [EMAIL PROTECTED] wrote:


Does taper adjust the reported number of filemarks for the
tape header and trailer files it might write to the tape?

I.e. is fm the number of DLEs written or is it off by
1 or 2 for the header and trailer files?


Well, fm 0 == the AMANDA label header; fm 1 is the first dump.  So if you 
pop in an AMANDA tape and do mt fsf 1 (or the equivalent on your platform) 
you'll be at the first dump.  Now if you amtape CONF ... you must mt 
rewind; mt fsf 1, or you'll end up on the 2nd instead of the 1st fm (since 
amtape reads the label and doesn't rewind the tape when it's done).  AMANDA 
will always rewind and check the label before doing anything to a tape, so 
she knows, but if you're writing scripts to directly access them then 
you'll need to worry about this, as I do for prepping and writing my 
offsite copies of Level 0s.
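That positioning rule can be captured in a tiny helper that prints the commands needed to land on dump number N (the device name and GNU mt syntax are assumptions; adjust for your platform):

```shell
# Print the mt commands that position at dump N on an AMANDA tape.
# fm 0 is the label, so dump N sits behind filemark N after a rewind.
position_cmds() {
  dev=$1; n=$2
  echo "mt -f $dev rewind"
  echo "mt -f $dev fsf $n"
}
position_cmds /dev/nst0 1   # commands that land on the first dump
```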





--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat



Re: disk offline?

2005-06-02 Thread Michael Loftis



--On June 2, 2005 12:05:58 PM -0700 Cameron Matheson [EMAIL PROTECTED] 
wrote:




devdb.tonservices.com /dev/ida/c0d0p2 root-tar

it's similar to my other clients which are all working
fine... why would it be sending this tar command? (the
other clients are all having '/' be their directory)


When using tar, reference MOUNTED filesystems, not raw disk devices.  / is 
a mounted filesystem; the entry above pretty clearly is not.  It should be 
something like /var, /home, /usr, or suchlike.  You can't dump an unmounted 
filesystem or a raw device with tar.
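The distinction can be expressed as a pre-flight check on disklist entries that use a tar dumptype (purely illustrative; a real check might also consult /proc/mounts to confirm the path is actually a mount point):

```shell
# Reject DLE targets that tar cannot read: raw devices and relative paths.
valid_tar_dle() {
  case $1 in
    /dev/*) return 1 ;;  # raw device: only dump-style programs can read these
    /*)     return 0 ;;  # absolute path, presumably a mounted directory
    *)      return 1 ;;  # relative paths are no good either
  esac
}
valid_tar_dle /dev/ida/c0d0p2 || echo "raw device: not usable with tar"
valid_tar_dle /var            && echo "/var: ok for tar"
```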


Re: Request for enhancement - Auto dle

2005-05-13 Thread Michael Loftis

--On May 13, 2005 6:20:33 PM +0100 Chris Lee [EMAIL PROTECTED] 
wrote:

It would be great if we could tell amanda that all subdirectories of a
specific root were to be treated as individual disk list entries. i.e.
balance all sub directories across backups.
This would help in cases like my home system where I have a few user
directories.
These are mostly static in size (grow slowly), on occasion I add another
home directory, for a guest or friend; updating the disk lists each time
is a pain and I often forget, so stuff gets left out of the backup.
These directories hold most of my data so splitting them up for amanda to
balance across backups is very useful.
As I want them all treated the same one entry referring to them all would
be great.
I can see the utility in this; however, I can see the difficulty too.  It'd 
require some additional protocol for the planner to query the clients for 
the DLEs to generate.  This could also produce the problem of ending up 
with a potential 'Denial of Service' attack on the AMANDA planner/server by 
filling up its RAM with thousands and thousands of DLEs, simply by creating 
a bunch of directories somewhere it's looking.  Not that there aren't 
already potential problems like this in AMANDA.

It's certainly not impossible, and I don't believe it's impractical...but 
I'm not certain.  However, I'd use it if it were available!
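Pending such a feature, the idea can be approximated outside amanda with a script run before each dump that regenerates a disklist fragment from the subdirectories of a root; the hostname, root path, and dumptype name below are placeholders:

```shell
# Emit one disklist line per immediate subdirectory of $root.
gen_dles() {
  host=$1; root=$2; dumptype=$3
  for d in "$root"/*/; do
    [ -d "$d" ] && printf '%s %s %s\n' "$host" "${d%/}" "$dumptype"
  done
}
# e.g. run from cron before amdump:
#   gen_dles home.example.com /home user-tar > /etc/amanda/Daily/disklist.home
```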


Re: Request for enhancement - Auto dle

2005-05-13 Thread Michael Loftis

--On May 13, 2005 3:21:29 PM -0500 Frank Smith [EMAIL PROTECTED] wrote:
   I just have a DLE for the top level, and that directory contains an
exclude file containing the subdirs that I back up as separate DLEs.
That way I won't miss any new subdirs that are created.  This won't work
as well for very dynamic environments or where you don't have much
control on the client machines.
Remember that excludes are controlled/controllable server side.  I use a 
file on the clients myself, though...and honestly, I just had a thought 
about whether/how well they work with GNUTAR.  I'd assume they do.



Re: Amanda and LVM-based Linux installations

2005-05-13 Thread Michael Loftis

--On May 14, 2005 3:20:55 AM +0200 Arrigo Triulzi 
[EMAIL PROTECTED] wrote:

My plan of action is to edit /etc/lvm.conf to alter the umask parameter
to 027 so that the /dev/mapper files are group readable.  Then I was
planning to change them to be group backup so that Amanda can read them.
Does this make sense or am I working towards disaster?

That works fine as long as you're using dump, but you'll need a backup 
device with media at least 158GB in size, and, as your larger filesystem 
fills, a device capable of storing 450+GB.  AMANDA currently can't split 
DLEs across tapes.  dumpe2fs is also potentially unreliable; quite a few 
'in the know' refuse to use it and instead insist GNU tar is a far better 
option.

I'm not aware of the specifics with dumpe2fs because I only use tar 
(reiserfs filesystems), but I have heard that it's not really well 
supported.

YMMV.



Re: Is the Dell Powervault 132T a changer device?

2005-04-27 Thread Michael Loftis

--On Wednesday, April 27, 2005 15:03 -0400 Carlos Scott [EMAIL PROTECTED] 
wrote:

Hello all,
It's the first time i try to setup Amanda and i'm a little confused.
I don't think i got the changer device idea right. Does the Dell
Powervault 132T Tape Library qualify as a changer device?
I thought so but the mtx command tells me otherwise:
[EMAIL PROTECTED] DailySet1]# /usr/sbin/mtx -f /dev/sg1 inquiry
Product Type: *Tape Drive*
Vendor ID: 'IBM '
Product ID: 'ULTRIUM-TD2 '
Revision: '37RH'
Attached Changer: No
I want to backup around 6 servers using that unit. My plans are to use
one tape for all servers per day. Is that a right approach?
Please, if someone could point me in the right direction I'd be really
thankful.
You should have an sg device for the changer mechanism itself.  Here my 
changer ends up being /dev/sg2 under Linux.
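A quick way to find that node is to run mtx's inquiry against each sg device and look for a changer; a minimal sketch, assuming Linux sg nodes and mtx's usual "Product Type" output line (the sample string piped in below is illustrative, not from this thread):

```shell
# True when stdin looks like an mtx inquiry of a changer mechanism.
is_changer() {
  grep -q 'Medium Changer'
}
# Scan (needs root):
#   for sg in /dev/sg*; do
#     mtx -f "$sg" inquiry 2>/dev/null | is_changer && echo "$sg is the changer"
#   done
printf 'Product Type: Medium Changer\n' | is_changer && echo "found a changer"
```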

Cheers!

--
GPG/PGP -- 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E 


Re: Archive contains obsolescent base-64 headers

2005-04-25 Thread Michael Loftis

--On Monday, April 25, 2005 14:40 -0500 Bryan K. Walton 
[EMAIL PROTECTED] wrote:

I'm trying to restore some files and am running into errors.  Some
background:
The directory I am trying to restore sits on the same machine as the
actual amanda backup server.  Both the amanda client and amanda server
are 2.4.4p3. I'm restoring the files on the same machine that did the
actual backing up.  I'm using tar version 1.13.25.  From my research on
the web, I know that there have been problems with certain version of
tar, but 1.13.25 is supposed to be a good version.  And since I'm using
the same machine to extract as I am to compress, it seems that there
shouldn't be any problems with different compression algorithms.
Regardless of whether I use amrecover or amrestore, I get errors:
The two most common reasons I get these are 1) using the rewinding instead 
of the non-rewinding tape device by accident, and 2) not using the correct 
restore programs.

I'm not sure about your exact error, but it might be worth a thought.  The 
tape device you should be using will be /dev/nst? (e.g. /dev/nst0).

Hopefully this gives you somewhere to look.


Re: Debian Compile failed

2005-04-24 Thread Michael Loftis

--On Sunday, April 24, 2005 12:39 PM -0400 Kuas 
[EMAIL PROTECTED] wrote:

I can compile everything in FC3 and FC1 without a problem. But when I
tried to compile Amanda in Debian it gave this error in configure:

When I check if gnu c++ compiler and the standard library, it is
installed:
You also need gcc, make, cpp, and libc6-dev at a minimum.  Let me check the 
build deps...

dump, gnuplot,libncurses5-dev, libreadline4-dev, libtool, flex, perl, 
smbclient, mailx, lpr, mtx, xfsdump, po-debconf

That's the full list (minus debhelper, which you don't need if you're not 
dpkg-building from a Debian source)...you might want to check backports.org 
and see if they have the version of amanda you want.

# dpkg --list | grep g++
ii  g++-2.95       2.95.4-22  The GNU C++ compiler
# dpkg --list | grep libstdc
ii  libstdc++2.10- 2.95.4-22  The GNU stdc++ library (development files)
ii  libstdc++2.10- 2.95.4-22  The GNU stdc++ library
ii  libstdc++5     3.3.5-8    The GNU Standard C++ Library v3
Does any one know what package or step am I missing? Thanks.
Kuas


--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat


Re: estimates

2005-04-13 Thread Michael Loftis

--On Wednesday, April 13, 2005 23:48 +0200 Stefan G. Weichinger 
[EMAIL PROTECTED] wrote:


AMANDA 2.4.5 will make you happy (again):
This release brings so-called server-estimates which reduce
estimate-times radically.
What is this?  server estimates?  URL would be fine
2.4.5 is soon to come, snapshots of 2.4.5b1 are already available and
working stable.
2.4.5 servers/clients going to interoperate or no?


amdump tape 'searching'...

2005-04-05 Thread Michael Loftis
OK, I know what amcheck's behaviour is, and I assume amdump still has the 
same behaviour.  What I want to know is whether there are plans to change, 
or an easy way to change, 2.4.4p3 (yes, I know there are newer revs) so 
that it isn't stupid and just asks the changer program for a given label. 
What is everyone else doing?

We've got a 50-tape DLT library; it takes an hour or two to 'scan' for a 
tape if it happens to be, say, the tape previous to the current one 
(because they're out of order, or it's hit the last tape, etc.).  Scanning 
the tapes needlessly wears them and the drives.  The unit has a barcode 
reader, and indeed if I 'amtape CONF label LABEL' it will load the correct 
tape (chg-zd-mtx is my changer -- chg-scsi is damaged somehow...and I can't 
quite figure out how...).  Why can't amdump/amcheck do this instead of 
working my library to an early grave?

i know that the bits in the changers are not universally supported, but for 
the ones that do why aren't they used?  the current doc's all say they 
aren't used in this way so i'd like to know if anyone is working on this or 
not.

In the meantime I'm probably writing a script to run prior to my amdump and 
amcheck runs that uses amadmin CONF tape to load the next tape into the 
changer, and then uses mtx to attempt to transfer/swap other tapes around 
so that the next tape (I'm currently set up for two tapes per run) is in 
the following slot, so amdump's 'next' call doesn't begin scanning the 
whole library.

maybe it'd be better if i just dedicated 4-6 slots for the current runs and 
used my script to fetch up all the tapes in order and store the other tapes 
elsewhere.

what is everyone else doing?
--
GPG/PGP -- 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E 


Re: Adding new stuff to disklist

2005-04-01 Thread Michael Loftis

--On Friday, April 01, 2005 16:50 -0500 Vicki Stanfield [EMAIL PROTECTED] 
wrote:


What am I missing here?
Nothing; they'll get created at the next run.  The NOTE is just that, a 
note.  It's not an ERROR or even a WARNING; it's just saying hey, I saw 
something you might like to know about.




Re: what the hail does this mean?

2005-03-31 Thread Michael Loftis

--On Thursday, March 31, 2005 6:09 PM -0700 Glenn English [EMAIL PROTECTED] 
wrote:

From 'amcheck sls':
Amanda Backup Client Hosts Check

ERROR: server.slsware.dmz: [addr 192.168.20.237: hostname lookup failed]
ERROR: log.slsware.dmz: [addr 192.168.20.237: hostname lookup failed]
Client check: 5 hosts checked in 10.049 seconds, 2 problems found
If the lookups failed, where did the IPs come from? They are indeed
wrong, but the 192.168.20 part is right. There is nothing at 237. It's
in the middle of a DHCP pool the PIX firewall uses when the lan wants to
talk to the dmz.
The tape server appeared to the clients as 192.168.20.237; the clients 
attempt to do reverse DNS on the server, and either get an answer that has 
no A records, get mismatched A records, or get no answer at all.  I know 
the error is a bit cryptic, but less so when you look at it and realise the 
error came FROM the clients in the dmz.

The pix isn't at fault; the lack of rDNS is.  Get your in-addr.arpa's 
straightened out and you'll be good to go.
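The consistency the clients expect can be stated as: take the PTR answer for the server's address, forward-resolve that name, and require the original address back. A pure sketch of that rule (the two arguments stand in for real getent/host lookups; the addresses and hostname come from this thread):

```shell
# ip = address the client saw; ptr_name = PTR lookup result (may be empty);
# forward_ip = A-record lookup of ptr_name.  Succeeds only when they agree.
rdns_ok() {
  ip=$1; ptr_name=$2; forward_ip=$3
  [ -n "$ptr_name" ] && [ "$forward_ip" = "$ip" ]
}
rdns_ok 192.168.20.237 "" ""                               || echo "no PTR: check fails"
rdns_ok 192.168.20.218 server.slsware.dmz 192.168.20.218   && echo "consistent"
```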

host server.slsware.dmz gives 192.168.20.218
host log.slsware.dmz gives 192.168.20.30
Both are right. The IPs are right in /etc/hosts, too. I ssh to them all
the time. And the DNS server is on the same network as the amanda
server. The 3 hosts that did not have problems are also on the lan. It's
gotta be a pix problem, but I don't understand how amanda could be
coming up with that IP. Where does amanda go to do its DNS?
--
Glenn English
[EMAIL PROTECTED]
GPG ID: D0D7FF20

--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat


Re: new tape not found in rack network services goes down after runing amdump

2005-03-17 Thread Michael Loftis
OS, Version of Amanda, etc.
Sounds like you've got a bunch of non-amanda problems here.  First, the 
changer probably needs to have an inventory command run so it knows where 
things are...

Second, this box sounds extremely unhealthy; a Sig11 from sendmail 
indicates a pretty serious issue -- maybe you have some bad RAM or 
something causing the whole thing.  Try running memtest86 on it for a few 
hrs.


Re: Exclude list syntax.

2005-03-08 Thread Michael Loftis
Exclude list syntax depends on your dump/tar/smbtar...usually, no.  All you 
get is wildcards, pretty much.

--On Tuesday, March 08, 2005 22:09 +0100 Erik P. Olsen [EMAIL PROTECTED] 
wrote:

Is it possible to use todays date as element in a filename in an exclude
list?
--
Regards,
Erik P. Olsen


--
GPG/PGP -- 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E 


Strange planner problem 2.4.4p3

2005-02-25 Thread Michael Loftis
Yes, I'm aware p4+ is out...but I'm having a strange problem where my 
planner promotes full dumps, yet then complains about other full dumps 
being delayed because the backup is too big.  This makes no sense; 
shouldn't it run incrementals for those dumps instead, to keep from 
delaying the other full dumps?

Any ideas, hints, suggestions?  Need/want any additional info?  It just 
seems pathological to do this, ever.  Now, I did have to amrmtape a few 
tapes here today (three), so maybe that's had an effect, but the promotions 
were from ~20+ days ahead, which is the length of my dumpcycle (20 days, 
that is).

--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat


Re: VXA-2 packet-loader issues and AMANDA [Fwd: hard luck with the new autoloader]

2005-02-04 Thread Michael Loftis

--On Friday, February 04, 2005 09:41 -0500 James D. Freels 
[EMAIL PROTECTED] wrote:

When you say cabling issues, does this include a separate scsi card
specific for the
tape drive ?  I think my cable connections are good.
This encompasses a whole list of issues.  The two or three most common are 
cable length, termination, and cable quality.  Because you have a 10MB/sec 
device on there, you're limited to 3 meters (~9ft) (FAST SCSI).  If you 
remove that CD-ROM drive you should have a fully LVD chain, from the sounds 
of it, which gives you up to 12 meters (~40ft).  You also need to have LVD 
compatible active terminators in either case.  Cheap passive terminators 
will likely not work at extreme cable lengths.  And this length is *ALL* 
cabling, so it includes cabling internal to the VXA autoloader (I think 
about 1 meter, tops).

I would try cabling the Exa to a dedicated SCSI port before trying to do 
any firmware updates.  See if that clears up your problems, it really does 
sound like your SCSI bus is too long.
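The length limits quoted above can be tabulated; the values are the common rule-of-thumb figures for these SPI bus types, not from any particular datasheet:

```shell
# Rough maximum total cable length in metres per bus type (rule of thumb).
max_scsi_len_m() {
  case $1 in
    fast) echo 3 ;;    # single-ended FAST SCSI (10 MB/s)
    lvd)  echo 12 ;;   # fully LVD chain
    *)    return 1 ;;
  esac
}
max_scsi_len_m fast
max_scsi_len_m lvd
```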

The scsi card I have is a LSI Logic / Symbios Logic (formerly NCR)
53c875 (rev 04)
All devices are indicating on boot up (dmesg) at 40 MB/s except the
CD-Rom which
is indicating 10 MB/s.  I have 3 hard drives, 2 tape drives (including
the new one
having trouble), and 1 CD-Rom in this scsi chain.  I have tried all three
scsi drivers
available for this card in the linux kernel 1) ncr53c8xx, 2) sym53c8xx,
and 3) sym53c8xx-2.
The ncr53c8xx driver seems to give the least problems, so I have
concentrated on
this one.  As I said, the library (autoloader) seems to work correctly,
but it
is just the tape writing that is giving me problems at present.  It is
able to label the tapes,
but not write a larger data set to the tape.



Re: VXA-2 packet-loader issues and AMANDA [Fwd: hard luck with the new autoloader]

2005-02-03 Thread Michael Loftis

--On Thursday, February 03, 2005 15:03 -0500 James D. Freels 
[EMAIL PROTECTED] wrote:

I am getting scsi sense errors using the new drive about 1-2 minutes
after an amdump or amflush starts.  Below are the reommendations from
Exabyte tech support to fix it.
I seem to remember that doing FW upgrades on VXAs and DLTs was similar...you 
write the firmware to a tape, load the tape, and the drive reads in the new 
firmware.

As far as sense errors, more likely it's a cabling issue.



Re: short write even if the dumps are just 10% of the tape size

2005-01-18 Thread Michael Loftis

--On Tuesday, January 18, 2005 10:25 +0100 Peter Guhl [EMAIL PROTECTED] 
wrote:

Hi
On Mon, 2005-01-17 at 17:09, Gene Heskett wrote:
Is that tape being properly rewound Peter?  Most drives will fully
rewind a tape before they allow it to be ejected, maybe you should
eject it and look at it between each pass.
It's probably not rewound, you're right. Eject and look at it is hard -
there are 50km between me and this tape ;) But I guess it doesn't really
matter since it's too much written for beeing at the end and too few for
beeing at the beginning. No matter where it starts - it should write
until the end and then either rewind (=write another 15GB the next time
I try) or stay there (= writing 0B). At least that's what I suspect if
it acts logically.
Dumb questions, maybe already asked: 1) is any other app using the tape, 
and 2) has anyone checked that the mechanism isn't jamming, since you're 
not there to watch it at all?


RE: Recommendation for tape drives

2004-12-03 Thread Michael Loftis

--On Friday, December 03, 2004 14:20 -0500 Gil Naveh [EMAIL PROTECTED] 
wrote:

Some more specifications regarding getting a tape drive:
I have a Solaris 9 box which will be the Amanda server
I have to back up another Solaris box as well as a few Window2000 boxes.
The budget for the tape drive is about 500$ to 1000$ (without Jukebox -
too expensive)
I would like to use compression, yet I have no idea about the
compressibility of the data.
My preferences would be: 1) A device that can store 30GB or more. (most
important)
 2) Easy to configure with Amanda.(very
impportant)  3) Reliable (very impportant)
 3) Fast to record and recover (less important)
thx :)
FYI, number three is usually VERY dependent on your tape host as well.  If 
your tape host isn't fast enough, the drive will end up waxing the tape, 
winding back and forth since it can't stream.  So make sure that, whatever 
tape drive you get, the HDD holding your amanda spool can SUSTAIN at least 
the tape's speed plus about 50%.
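That rule of thumb is simple arithmetic; a sketch (the speed figure used in the example is illustrative, not a drive spec from this thread):

```shell
# Holding disk must sustain the tape's native speed plus ~50%.
# $1 = tape native speed in KB/s; prints required disk throughput in KB/s.
required_disk_kbps() {
  echo $(( $1 * 3 / 2 ))
}
required_disk_kbps 1100   # e.g. a drive streaming at ~1100 KB/s native
```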

One thing you need to think about is the cost of media too.  I'd tend to 
recommend either Exabyte's VXA products or your favorite vendor's DLT 
products.  I've had great luck with both.  DLTs are definitely more 
expensive, but they have been around longer and are known to work well. 
YMMV of course.



Re: Skip two tapes

2004-11-08 Thread Michael Loftis
You can just stuff tape08 in and ignore amanda's complaints, or you can 
edit CONFDIR/tapelist and adjust the order.
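For the tapelist route: in the 2.4.x era each line is "datestamp label reuse|no-reuse" (format assumed from that version), and amanda wants the reusable tape with the oldest datestamp next. A sketch of checking which tape that currently is:

```shell
# Print the label amanda should ask for next: oldest datestamp
# among the tapes still marked reusable.
next_tape() {
  sort -k1,1n "$1" | awk '$3 != "no-reuse" { print $2; exit }'
}
# e.g.:  next_tape /etc/amanda/DailySet1/tapelist
```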

--On Tuesday, November 09, 2004 08:41 +0100 Nicolas Ecarnot 
[EMAIL PROTECTED] wrote:

Hi,
I have a configuration with ten tapes that runs nicely.
For some reason, I have to skip two tapes : the last backup was made on
tape05 and the next one has to be made on tape08
How can I do that ? What file should I hack for that ?
--
Nicolas Ecarnot


--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat


RE: Problems with Overland Library and Solaris 8

2004-10-11 Thread Michael Loftis

--On Monday, October 11, 2004 10:45 +0200 [EMAIL PROTECTED] wrote:

I'm confused.
Gene Heskett wrote (on 10/2/2004):
Yes, of course you can use a changer. Amanda is used in large
installations doing hundreds of GB backups daily. They don't
have someone sitting around waiting to change tapes. Even I
don't do that in my home office. I have a six position changer
that does 12GB/tape. I set it to use two tapes -- if it needs
to use them. I.e. my normal nightly backup is about 7 or 8GB.
But if something unusual happens, it may take over 12GB. In
that case it will use a second tape.
The confusion here is that the other OP mentioned a single-dump limitation. 
An individual dump cannot span multiple tapes.  Amanda can (now, at least 
in a limited way) run the changer during a run and use up to runtapes 
tapes, but it can still only put any given dump (DLE) onto one tape.  A run 
may span multiple tapes, but a given DLE must reside on a single tape.

In your case I'm not sure what the problem is, but it sounds far more 
hardware- or OS-related than anything to do with AMANDA.  AMANDA is just 
reporting the symptoms or tripping the problem; it isn't the problem itself.

Check all your SCSI cables and connections, double check your terminations. 
Therein may lie your answer.


Re: Amcheck and amdump port usage?

2004-09-16 Thread Michael Loftis
Sorry for the long delay in response, I've been busy
--On Monday, September 13, 2004 16:54 -0400 KEVIN ZEMBOWER 
[EMAIL PROTECTED] wrote:

Michael, thank you for taking the time to try to help me. Please see my
further questions below.
Michael Loftis [EMAIL PROTECTED] 09/13/04 03:04PM 
--On Monday, September 13, 2004 14:24 -0400 KEVIN ZEMBOWER
[EMAIL PROTECTED] wrote:

1. The tapehost makes a 'start backup' request of the client, originating
on port 850-854 to port 10080-10083 using UDP. The contents of the packet
contain a port number in the range 850-854 which is open on the tapehost,
listening for TCP connections.
Your steps are pretty wrong, so let's start over...
1. The tapehost makes the 'start backup, estimate/etc' call to amandad over 
UDP on the remote client (usually port 10080); the client sends back 
response(s) to a UDP port on the server (udpportrange).
that it always goes to port 10080 on the client, no matter what is
defined in the compilation of amanda with --with-portrange
--with-udpportrange or --with-tcpportrange?

From an ephemeral port, i.e. it varies depending on your OS.  And it always 
goes to the 'amanda' service, which is usually defined as 10080/udp.  And 
yes, the compile-time options do not control the source port for these.

Which port on the client does the response come from?
Same: ephemeral, but it goes TO one of the UDP ports specified by the 
udpportrange parameter.  Later TCP data connections may be attempted on the 
tcpportrange, as well as to the amandaidx and amidxtape services (though 
those are for restores).

2. after response/receipt of estimates (Assuming backup run) at some
point  later the server sends start backup, this packet contains a tcp
port to  connect to on the server in the tcpportrange/portrange (these
are the  same).  the client may also connect to amandaidx on the tape
server as well  to transmit indices at this time (I can't remember, and
it does depend on  the index option in the dumptype config).  Once
connected the client begins  transmitting backup data to the server.
Is it correct that the packet of 'start backup' from the tapehost is sent
UDP?  From which port on tapehost? What port on the client is it
addressed to? Is it the same ports on both tapehost and client as the
ports in step 1?
Yes, correct: sent UDP from an ephemeral port to 10080/udp on the client. 
All operations are basically two-step: a request goes out from the tape 
server to amanda (usually 10080/udp) on the client, which responds back to 
a port selected by the server (one inside --with-udpportrange) when it 
completes the operation.


Is the amandaidx port on the tapehost always 10082/tcp, regardless of the
--with-???portrange switches?
Yes, all of the services are, regardless; I've pasted the entries below. 
Your installation won't use the kamanda ports.

amanda      10080/udp   # amanda backup services
kamanda     10081/tcp   # amanda backup services (Kerberos)
kamanda     10081/udp   # amanda backup services (Kerberos)
amandaidx   10082/tcp   # amanda backup services
amidxtape   10083/tcp   # amanda backup services

Sorry it took so long to get back to you.



Re: Amcheck and amdump port usage?

2004-09-13 Thread Michael Loftis

--On Monday, September 13, 2004 14:24 -0400 KEVIN ZEMBOWER 
[EMAIL PROTECTED] wrote:

I'm still trying to troubleshoot my problem getting Amanda to work though
a firewall. I've read John Jackson's  port usage document and the FAQ at
http://amanda.sourceforge.net/fom-serve/cache/139.html. I'd like someone
to comment on whether or not I have the overall communication sequence
correct below. Then, I'd like information on how this is different if
amcheck rather than amdump is run.
In compiling amanda, I used these options: --with-portrange=10080,10083
--with-tcpportrange=10080,10083 --with-udpportrange=850,854.
This is what I understand concerning the sequence of port usage in making
an amanda backup:
1. The tapehost makes a 'start backup' request of the client, originating
on port 850-854 to port 10080-10083 using UDP. The contents of the packet
contain a port number in the range 850-854 which is open on the tapehost,
listening for TCP connections.
Your steps are pretty wrong, so let's start over...
1. The tapehost makes the 'start backup, estimate/etc' call to amandad over 
UDP on the remote client (usually port 10080); the client sends back 
response(s) to a UDP port on the server (udpportrange).

2. After the response/receipt of estimates (assuming a backup run), at some 
point later the server sends start backup; this packet contains a tcp port 
to connect to on the server in the tcpportrange/portrange (these are the 
same).  The client may also connect to amandaidx on the tape server at this 
time to transmit indices (I can't remember, and it does depend on the index 
option in the dumptype config).  Once connected, the client begins 
transmitting backup data to the server.

That's it, a two(ish)-step process.  If it's a check request, it just does 
a test to see if it can get an estimate or backup by dispatching the 
appropriate commands on the client side, then responding back to the 
tapehost on the indicated UDP port (udpportrange).  If it's going to be a 
backup, then further TCP connections will be made to the ports indicated 
when the backup starts.  Estimates come back via UDP packets.  No TCP 
connections are made to the udpportrange, and the server never connects to 
the client.

The server doesn't tell the client to start backup until it's ready for 
data to flow to it.


Re: Amcheck and amdump port usage?

2004-09-13 Thread Michael Loftis
Err, I should mention: your portrange statements shouldn't overlap the 
actual services.  I.e. a port range of 10080-10083 is not a good idea, 
since amanda uses those udp ports.  This will limit you effectively to one 
or two clients; the same goes for tcpportrange.  And we recommend you pick 
something outside of the reserved range (< 1024).
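A sketch of the overlap check being described, with amanda's fixed service ports (10080-10083, per the /etc/services entries quoted elsewhere in this thread) hard-coded:

```shell
# True when the proposed port range [lo, hi] avoids amanda's own
# service ports 10080-10083 entirely.
range_clear_of_services() {
  lo=$1; hi=$2
  [ "$hi" -lt 10080 ] || [ "$lo" -gt 10083 ]
}
range_clear_of_services 850 854     && echo "850-854: ok"
range_clear_of_services 10080 10083 || echo "10080-10083: overlaps amanda's own ports"
```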


--On Monday, September 13, 2004 14:24 -0400 KEVIN ZEMBOWER 
[EMAIL PROTECTED] wrote:

I'm still trying to troubleshoot my problem getting Amanda to work though
a firewall. I've read John Jackson's  port usage document and the FAQ at
http://amanda.sourceforge.net/fom-serve/cache/139.html. I'd like someone
to comment on whether or not I have the overall communication sequence
correct below. Then, I'd like information on how this is different if
amcheck rather than amdump is run.
In compiling amanda, I used these options: --with-portrange=10080,10083
--with-tcpportrange=10080,10083 --with-udpportrange=850,854.
This is what I understand concerning the sequence of port usage in making
an amanda backup:
1. The tapehost makes a 'start backup' request of the client, originating
on port 850-854 to port 10080-10083 using UDP. The contents of the packet
contain a port number in the range 850-854 which is open on the tapehost,
listening for TCP connections.
2. The client responds by sending a UDP packet from any (?) port to port
850-854 on the tapehost. [Q: Can ports 850-854 on the tapehost be open to
receive both UDP and TCP packets at the same time?] The contents of the
packet are port numbers in the range 10080-10083 on the client which are
listening for TCP packets from the tapehost.
3. The tapehost responds by sending a packet from port 10080-10083 using
TCP to port 10080-10083 on the client. This packet starts the
transmission of the backup data from the client to the tapehost, using
the same port numbers just used.
Thanks for reviewing this and letting me know whether I've got it right.
I appreciate your patience and help.
-Kevin Zembower
-
E. Kevin Zembower
Internet Systems Group manager
Johns Hopkins University
Bloomberg School of Public Health
Center for Communications Programs
111 Market Place, Suite 310
Baltimore, MD  21202
410-659-6139


--
GPG/PGP -- 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E 


Re: Why am I client constrained?

2004-09-01 Thread Michael Loftis
maxdumps n in your dumptypes (it defaults to one) sets the number of 
parallel dumps per client.

--On Wednesday, September 01, 2004 12:29 -0700 Mike Fedyk 
[EMAIL PROTECTED] wrote:

Hi,
Right now, most of everything is on one file server, but I'd like to have
some DLEs SW compress on the server, and some compress on the client.
The problem is that Amanda 2.4.4p3-1 (Debian) is still only backing up
one DLE at a time, and amstatus reports that it is client constrained.
Let me know if you need any more info.
Mike
srv-lnx2600  /share/letter_art  comp-tar-high     0
srv-lnx2600  /share/accounting  srvcomp-tar-high  1
srv-lnx2600  /share/sales       comp-tar-high     0

define dumptype comp-tar {
default
program GNUTAR
comment partitions dumped with tar and compressed
compress client best
index
}
define dumptype srvcomp {
compress server best
}
define dumptype comp-tar-high {
comp-tar
priority 4
}
define dumptype srvcomp-tar-high {
comp-tar
comp-tar-high
srvcomp
}
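Applying the maxdumps advice to the dumptypes quoted above might look like the following amanda.conf sketch (the dumptype name comp-tar-high-par is invented for illustration; the syntax follows the 2.4.x config format shown in this thread):

```
define dumptype comp-tar-high-par {
    comp-tar-high
    maxdumps 2    # allow two simultaneous dumps from this client
}
```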


--
Undocumented Features quote of the moment...
It's not the one bullet with your name on it that you
have to worry about; it's the twenty thousand-odd rounds
labeled `occupant.'
  --Murphy's Laws of Combat


Re: Trying to figure out why unchanged file gets backed up in level 1 backup

2004-09-01 Thread Michael Loftis
Decisions about which individual files and directories to back up are not 
made by Amanda; they are under the sole control of the dump program, in 
this case I'm assuming GNU tar.  Now, fair warning: RedHat modifies pretty 
much everything that goes into its offering, so the problem you're having 
may not have a darn thing to do with GNU tar, but more to do with RedHat's 
GNU tar.

That said, with backups taken on the same day with tar, I've seen it 
include unchanged files before, but I've not seen it skip files it hasn't 
been specifically told to skip.

--On Wednesday, September 01, 2004 18:42 -0600 Bret Mckee 
[EMAIL PROTECTED] wrote:

[ Sorry if this message shows up twice.  I sent it this morning, and
realized when it didn't show up that I was no longer a subscriber of
this list...]
Greetings:
I installed amanda and believe I have everything running.  I am using
the disk based virtual tape driver and everything seemed fine.
I did the first backup, and of course it did a level 0. I then did a
second backup and it did a level 1 backup.  Because I have a large
/home disk, I back up all the users directories separately.
My home directory (/home/mckee) is about 5GB, and the level 0 seemed
large enough to have gotten to it all.  The level 1 backup was run
almost immediately (as part of testing the new install), and virtually
nothing changed. I was really surprised to see that the level 1 was
about 500MB, or 10% of the level 0 (I had expected it to be much
smaller).
First, a bit of version information:
$ uname -a # I'm running RH Enterprise ES
Linux hostname 2.4.21-15.ELsmp #1 SMP Thu Apr 22 00:18:24 EDT 2004
i686 i686 i386 GNU/Linux
$ tar --version # And because it might matter:
tar (GNU tar) 1.13.25
amanda, unpacked from:
-rw-rw-r--1 mckeemckee 1383528 Jun 22 06:50
amanda-2.4.4p3.tar.gz
I used amrestore | tar -tv to get a list of the files in the level 0
and level 1 archives, and discovered that several things that almost
certainly had not changed were backed up. I then went and read all
about gtar's --listed-incremental mode and *think* I understand it
(famous last words :-), and I still can't explain why these files were
backed up.
Picked as an example, one file that didn't change but that was backed
up was: /home/mckee/proj/proj-2.3/client/pubring.gpg
which was backed up relative to /home/mckee as:
./proj/proj-2.3/client/pubring.gpg
Trying to figure out why it got backed up, I looked in the gnutar-list
files for the path to both files:
hostname_home_mckee_0:26641 31851568 ./proj
hostname_home_mckee_1:26641 31851568 ./proj
hostname_home_mckee_0:26641 4637093 ./proj/proj-2.3
hostname_home_mckee_1:26641 4637093 ./proj/proj-2.3
hostname_home_mckee_0:26641 37028138 ./proj/proj-2.3/client
hostname_home_mckee_1:26641 37028138 ./proj/proj-2.3/client
Note that device/inode didn't change for any of the directories in the
path (which would have triggered tar to back up the files)
Here are the entries for the tar -tv output, and again the dates/sizes
didn't change:
mckee.list.0:-rw--- root/root   1692 2004-03-23 10:04:03
./proj/proj-2.3/client/pubring.gpg
mckee.list.1:-rw--- root/root  1692 2004-03-23 10:04:03
./proj/proj-2.3/client/pubring.gpg
I'm looking to understand this behavior, both because it is a waste of
tape (even virtual) to back up unchanged bits and because I can't help
wondering if some files are not being backed up at all (i.e. if it
doesn't correctly decide what to back up, it could easily be missing
files too).
If anyone can explain this behavior, or if you need additional
information to try and explain it, please let me know.  I have also
submitted this to the GNU tar list, since it seems fairly likely it is
really a tar problem...
Many thanks in advance,
Bret
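
As a self-contained illustration of the --listed-incremental mechanism discussed above, the sketch below (all paths are temporary and hypothetical; GNU tar assumed) runs a level 0 and then a level 1 against an unchanged tree; the level 1 archive should list only directory entries, not the unchanged file:

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/data/proj"
echo "unchanged" > "$work/data/proj/pubring.gpg"
# Make sure the file's mtime is strictly older than the level 0 dump,
# avoiding the same-second ambiguity that can force a re-dump.
sleep 1

# Level 0: no snapshot exists yet, so everything is archived.
tar -C "$work/data" -cf "$work/level0.tar" \
    --listed-incremental="$work/snap.0" .

# Base the level 1 on a copy of the level 0 snapshot (tar updates the
# snapshot file in place, just as Amanda's gnutar-lists copies do).
cp "$work/snap.0" "$work/snap.1"

# Level 1: nothing changed, so the file's data should not be stored.
tar -C "$work/data" -cf "$work/level1.tar" \
    --listed-incremental="$work/snap.1" .

tar -tf "$work/level1.tar"
```

If the unchanged file does show up in the level 1 archive here, that points at the tar build itself rather than at Amanda's driving of it.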



--
GPG/PGP -- 0xE736BD7E 5144 6A2D 977A 6651 DFBE 1462 E351 88B9 E736 BD7E 


Re: Restore buffer?

2004-08-25 Thread Michael Loftis
amrecover and amrestore can be used to recover or restore arbitrary parts
of the backup tape(s) to arbitrary places, assuming the machine you're
running them from has the appropriate tools available (i.e. GNU tar and/or
the particular 'dump' programs needed) -- so the generic answer is 'yes':
restores can happen on any Amanda client that is given access to the server.
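
A minimal sketch of that staging workflow, with plain tar standing in for the amrestore pipeline (on a real server it would be something along the lines of `amrestore -p <tape-device> <host> <disk> | tar -xpf - -C /staging`; every name and path below is hypothetical):

```shell
set -e
stage=$(mktemp -d)
src=$(mktemp -d)
echo "payload" > "$src/report.txt"
# Stand-in for the dump image amrestore would read off tape.
tar -C "$src" -cf "$stage/dump.tar" .

# Extract into a staging area instead of the live filesystem, so the
# user can inspect the file before anything on the server is replaced.
mkdir -p "$stage/restore"
tar -xpf "$stage/dump.tar" -C "$stage/restore"

cat "$stage/restore/report.txt"
```

Once the user confirms the staged copy is the right one, moving it into place is an ordinary cp/mv step outside Amanda.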

--On Wednesday, August 25, 2004 11:32 -0600 Darren Landrum 
[EMAIL PROTECTED] wrote:

I am in charge of setting up a new backup server for the County of
Montrose, CO. My boss has asked me to put together a system that will
allow us to buffer a restore job (say, grabbing a file from tape) before
sending the file back to its proper place on a server.
The reason he feels this is necessary is so we can look at a file
restored from tape before we destroy anything that might be on the
server proper. That way, the user can confirm that this is indeed the
file(s) they need restored before we commit to the final action.
Is Amanda capable of this kind of operation?
Thank you very much for your time.
--
Regards,
Darren Landrum
Montrose County IT




Re: Compression apparently ALWAYS happening as --best *AND* on tape host/server!!!!

2004-08-20 Thread Michael Loftis
Ah my bad, thanks.  Hadn't realised the indexes were compressed.
--On Friday, August 20, 2004 02:12 -0400 Joshua Baker-LePain 
[EMAIL PROTECTED] wrote:

On Thu, 19 Aug 2004 at 11:51pm, Michael Loftis wrote
just getting amanda going on a new install and from what I can tell,
the dang thing is ALWAYS running compression server side, and in
--best mode DESPITE the dumptypes defining compression to be done on
the clients, and in --fast!
It's compressing the index files.  That's not configurable, but also
shouldn't hit the CPU too hard.  In short, don't worry about it.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
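
For anyone still weighing the client-side --fast versus --best choice in the dumptypes, a quick experiment on synthetic data shows the tradeoff (the file here is artificially repetitive, so real-world sizes and CPU cost will differ):

```shell
set -e
tmp=$(mktemp -d)
# Roughly 1 MB of highly repetitive text; gzip levels diverge clearly
# on input like this.
yes "amanda index line" | head -n 50000 > "$tmp/sample"

gzip --fast -c "$tmp/sample" > "$tmp/sample.fast.gz"
gzip --best -c "$tmp/sample" > "$tmp/sample.best.gz"

# Compare the original against both compression levels.
wc -c "$tmp/sample" "$tmp/sample.fast.gz" "$tmp/sample.best.gz"
```

--best trades noticeably more CPU for a usually modest size win, which is why a loaded tape host is better off with client-side --fast for the dump data itself.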



Compression apparently ALWAYS happening as --best *AND* on tape host/server!!!!

2004-08-19 Thread Michael Loftis
just getting amanda going on a new install and from what I can tell,
the dang thing is ALWAYS running compression server side, and in
--best mode DESPITE the dumptypes defining compression to be done on
the clients, and in --fast!

The tape host really does not have the CPU to do this.
I'll include relevant amanda.conf snippets if wanted.
(Output of pstree below shows the gzip children...)
 |   |   `-amdump,26032 /usr/sbin/amdump MWDaily1
 |   |   `-driver,26041 MWDaily1
 |   |   |-dumper,26043 MWDaily1
 |   |   |-dumper,26044 MWDaily1
 |   |   |   `-gzip,26826 --best
 |   |   |-dumper,26045 MWDaily1
 |   |   |-dumper,26046 MWDaily1
 |   |   |   `-gzip,26686 --best
 |   |   |-dumper,26047 MWDaily1
 |   |   |-dumper,26048 MWDaily1
 |   |   |-dumper,26049 MWDaily1
 |   |   |-dumper,26050 MWDaily1
 |   |   `-taper,26042 MWDaily1
 |   |   `-taper,26051 MWDaily1

