Separate backup to the server and sending to tapes

2011-08-21 Thread Matt Burkhardt
We use Amanda to back everything up - excellent software.  But we're
finding a problem with our Internet connection after the backups start
sending it up to Amazon S3.  Is it possible to do the backups to the
server (we want it to happen during the day since that's when the
machines are on and people are here), then send it up to S3 at night?

Thanks,

-- 
Matt Burkhardt, MS Technology Management
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Separate backup to the server and sending to tapes

2011-08-21 Thread Matt Burkhardt
Perfect!  Thank you, Dennis.

On Sun, 2011-08-21 at 16:45 +0200, Dennis Benndorf wrote:

 Hello Matt,
 
 the simplest way to do so is to run 
 amdump --no-taper daily when you want to do the backup, and run another 
 cronjob at night with amflush daily.
 
 Regards,
 Dennis
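 As a concrete sketch, the two phases could be scheduled from cron like
 this (the times, user, paths, and the config name "daily" are
 illustrative assumptions, not details from the thread):

```shell
# /etc/cron.d/amanda-split-run -- illustrative schedule only
# Daytime: dump all clients to the holding disk, but skip the taper
# (nothing is sent to the S3 "tapes" yet)
0 10 * * 1-5  amandabackup  /usr/sbin/amdump --no-taper daily
# Night: flush the held dumps out to S3; -b runs amflush in batch
# (non-interactive) mode so it works unattended from cron
0  1 * * *    amandabackup  /usr/sbin/amflush -b daily
```

 Since dumps sit on the holding disk until amflush runs, the holding
 disk needs room for a full day's worth of dumps.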
 
 
  Original Message 
  Date: Sun, 21 Aug 2011 07:09:40 -0400
  From: Matt Burkhardt m...@imparisystems.com
  To: amanda-users@amanda.org
  Subject: Separate backup to the server and sending to tapes
 
  We use Amanda to back everything up - excellent software.  But we're
  finding a problem with our Internet connection after the backups start
  sending it up to Amazon S3.  Is it possible to do the backups to the
  server (we want it to happen during the day since that's when the
  machines are on and people are here), then send it up to S3 at night?
  
  Thanks,
  
  -- 
  Matt Burkhardt, MS Technology Management
  Impari Systems, Inc.
  
  m...@imparisystems.com
  http://www.imparisystems.com 
  http://www.linkedin.com/in/mlburkhardt 
  http://www.twitter.com/matthewboh
  502 Fairview Avenue
  Frederick, MD  21701
  work (301) 682-7901
  cell   (301) 802-3235
  
  
 


-- 
Matt Burkhardt, MS Technology Management
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Fare Thee Well

2010-10-17 Thread Matt Burkhardt
I've got to wish Dustin all the best.  I was having some problems with
Amanda and Dustin met me at a Panini and fixed me up.

Thanks, Dustin - and are you on LinkedIn?

On Fri, 2010-10-15 at 22:26 -0700, Christ Schlacta wrote:

 On 10/14/2010 10:03 AM, Jon LaBadie wrote:
  Dustin,
 
  I've been a member of the Amanda community sufficiently
  long to have seen the departure of a handful of star
  contributors.  I count you among them, your impact on
  the project has been enormous.
 
  As you leave for Mozilla, I wish you well and am
  confident you will flourish in any future endeavour.
  We will be diminished by your absence.
 
  My best wishes go out to you my friend,
  Jon
 
 I have to second this emotion.



Matt Burkhardt
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Gotcha's

2010-07-26 Thread Matt Burkhardt
Will do - think it's going to take another few days to get everything
off the system just in case

On Sun, 2010-07-25 at 16:02 -0400, Dustin J. Mitchell wrote:
 On Sun, Jul 25, 2010 at 2:36 PM, Matt Burkhardt
 m...@imparisystems.com wrote:
 
 Right now, I'm copying over files to a Toshiba 1TB drive -
 it's taking forever!  But any directions greatly appreciated!
 
 
 
 Well, if your backups are encrypted, then clearly you're going to need
 your key.  You'll also need your S3 credentials.  Having all of the
 Amanda configuration, catalog and index information available will
 make the recovery much easier - I assume that's what you're working on
 now?
 
 
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com
 


Matt Burkhardt
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Gotcha's

2010-07-25 Thread Matt Burkhardt
Right now, I'm copying over files to a Toshiba 1TB drive - it's taking
forever!  But any directions greatly appreciated!

It's Ubuntu 10.04

On Sun, 2010-07-25 at 12:43 -0400, Dustin J. Mitchell wrote:
 On Sun, Jul 18, 2010 at 4:48 PM, Matt Burkhardt
 m...@imparisystems.com wrote:
 
 I'm thinking it might be best to just go ahead and do a clean
 install, but I'm unsure of what I'll lose.  If I copied over
 my Amanda directories and files to the secondary drive, will
 that help with restoration?  My files are encrypted and I'm
 also backing up several different PC's to this server.
 
 
 
 Sorry, this somehow sorted several days back in my INBOX, so I never
 saw it.  Did you figure out what to do here?
 
 
 
 Dustin
 
 -- 
 Open Source Storage Engineer
 http://www.zmanda.com
 


Matt Burkhardt
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Gotcha's

2010-07-19 Thread Matt Burkhardt
Okay, I upgraded my server to Ubuntu 10.04 and tried to add eBox network
module capabilities.  It completely broke my server.  I'm now looking at
my options.

I have two drives: one contains my music collection, the other holds all
my important documents and several applications with databases that I
need for my business.

I back up this server every night using Amanda to Amazon S3.

I'm thinking it might be best to just go ahead and do a clean install,
but I'm unsure of what I'll lose.  If I copied over my Amanda
directories and files to the secondary drive, will that help with
restoration?  My files are encrypted and I'm also backing up several
different PC's to this server.

Any help greatly appreciated.

Thanks,

Matt Burkhardt
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
http://www.twitter.com/matthewboh
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Amrecover problem - tar: This does not look like a tar archive

2010-03-19 Thread Matt Burkhardt
On Tue, 2010-03-16 at 14:22 -0500, Dustin J. Mitchell wrote:

 On Tue, Mar 16, 2010 at 1:55 PM, Dustin J. Mitchell dus...@zmanda.com wrote:
  I'm sorry we don't have any good solid answers for you..
 
 Well, I can give a little more detail.  I encrypted a backup with
 amcryptsimple, then changed the passphrase and tried to decrypt it.  I
 got:
 
 gpg: decryption failed: Bad session key
 
 Incidentally:
 Tue Mar 16 14:12:47 2010: amfetchdump: xferfilterproc...@0x13a7030:
 process exited with status 0
 which makes it hard for amfetchdump (or amidxtaped, behind the scenes
 for amrecover) to know anything has gone wrong.  I suppose we should
 special-case a zero-length output as failed..
 
 Some scribbling on the dumpfile on my vtape got gpg to say:
 
 gpg: [don't know]: invalid packet (ctb=39)
 gpg: [don't know]: invalid packet (ctb=5d)
 gpg: [don't know]: invalid packet (ctb=2a)
 
 which also doesn't match the error you saw.
 
 dus...@euclid ~/code/amanda/t/amanda [master] $ gpg --version
 gpg (GnuPG) 2.0.11
 libgcrypt 1.4.4
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.
 
 Dustin
 

I tried using the -l option on amfetchdump, but it still wants to
decrypt the file before it dumps it:


amfetchdump: slot 16: time 20100302090001 label laptops-0016 (exact label match)
Scanning volume laptops-0016 (slot 16)
amfetchdump: 1: restoring FILE: date 20100302090001 host 
mlb-laptop.imparisystems.local disk /home/mlb/ImpariSystems lev 1 comp .gz 
program /bin/tar crypt enc client_encrypt /usr/sbin/amcryptsimple 
client_decrypt_option -d
gpg: decryption failed: bad key


I'm working back through the files just to see if one of these hits - I
can't imagine having used a separate passphrase on this set than on any
of the other groups.

Thanks for all the help

Matt Burkhardt
Impari Systems, Inc.

m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Amrecover problem - tar: This does not look like a tar archive

2010-03-17 Thread Matt Burkhardt
On Mon, 2010-03-15 at 14:10 -0500, Dustin J. Mitchell wrote: 

 On Fri, Mar 12, 2010 at 5:57 AM, Matt Burkhardt m...@imparisystems.com 
 wrote:
  The bad key error doesn't mean the passphrase is wrong (that would be 
  invalid passphrase).  It often means that the file you are decrypting is 
  corrupt.  Was the file you are decrypting encrypted with a passphrase only 
  or with a public key?
 
 What was the answer to this question?
 
 Dustin
 
 --
 Open Source Storage Engineer
 http://www.zmanda.com
 

Dustin - thanks for checking up on me!

I've been trying to get an older version by getting the history and
selecting a full backup - so I do 


history
200- Dump history for config laptops host mlb-laptop.imparisystems.local 
disk /home/mlb/ImpariSystems
201- 2010-03-10-09-15-33 1 laptops-0004:5
201- 2010-03-09-10-40-06 1 laptops-0003:1
201- 2010-03-08-17-27-03 0 laptops-0002:2
201- 2010-03-06-12-03-37 0 laptops-0001:3
201- 2010-03-05-11-46-08 1 laptops-0020:2
201- 2010-03-05-09-57-36 0 laptops-0019:3
201- 2010-03-04-10-23-20 0 laptops-0018:4
201- 2010-03-03-09-00-02 1 laptops-0017:4
201- 2010-03-02-09-00-01 1 laptops-0016:1
201- 2010-02-28-12-13-07 1 laptops-0015:2
201- 2010-02-28-09-00-01 0 laptops-0014:4
201- 2010-02-25-09-00-01 1 laptops-0013:2
201- 2010-02-24-13-25-46 1 laptops-0012:2
201- 2010-02-24-09-00-02 0 laptops-0011:3
201- 2010-02-23-09-00-02 1 laptops-0007:2
201- 2010-02-11-09-00-02 0 laptops-0005:4
201- 2010-02-08-12-02-50 1 laptops-0006:4
201- 2010-02-07-09-00-02 1 laptops-0008:4
201- 2010-02-05-15-02-31 1 laptops-0010:4


In this example, I set the date to 2010-03-06 because I'm assuming that
0 means a full backup - and then I try to extract it and I get:


amrecover setdate 2010-03-06
200 Working date set to 2010-03-06.
amrecover ls
2010-03-06-12-03-37 YorkInvoice.txt.2008022237.log
2010-03-06-12-03-37 YorkInvoice.txt.2008022236.log
2010-03-06-12-03-37 YorkInvoice
2010-03-06-12-03-37 Q3IncomeStatement.20090222151410.log
2010-03-06-12-03-37 Q3IncomeStatement.20090222151409.xac
2010-03-06-12-03-37 Q3IncomeStatement.20090222150138.log
2010-03-06-12-03-37 Q3IncomeStatement.20090220101929.log
2010-03-06-12-03-37 Q3IncomeStatement.20090220101928.log
2010-03-06-12-03-37 Q3IncomeStatement
2010-03-06-12-03-37 OC_Beta_V4.0.pdf
2010-03-06-12-03-37 ImpariSystems.20100228163356.log
2010-03-06-12-03-37 ImpariSystems.20100228163355.xac
2010-03-06-12-03-37 ImpariSystems.20100228163244.log
2010-03-06-12-03-37 ImpariSystems.20100223123458.log
2010-03-06-12-03-37 ImpariSystems.20100223123457.xac
2010-03-06-12-03-37 ImpariSystems.20100223122643.log
2010-03-06-12-03-37 ImpariSystems.20100223122642.xac
2010-03-06-12-03-37 ImpariSystems.20100223121705.xac
2010-03-06-12-03-37 ImpariSystems.20100223121705.log
2010-03-06-12-03-37 ImpariSystems.20100223121146.log
2010-03-06-12-03-37 ImpariSystems.20100223121145.xac
2010-03-06-12-03-37 ImpariSystems.20100223120132.xac
2010-03-06-12-03-37 ImpariSystems.20100223120132.log
2010-03-06-12-03-37 ImpariSystems.20100223115324.log
2010-03-06-12-03-37 ImpariSystems.20100208164001.log
2010-03-06-12-03-37 ImpariSystems.20100208120210.log
2010-03-06-12-03-37 ImpariSystems.20100208120209.xac
2010-03-06-12-03-37 ImpariSystems.20100208115934.log
2010-03-06-12-03-37 ImpariSystems.20100207143430.log
2010-03-06-12-03-37 ImpariSystems.20100207143429.xac
2010-03-06-12-03-37 ImpariSystems.20100207142854.log
2010-03-06-12-03-37 ImpariSystems.20100207142853.xac
2010-03-06-12-03-37 ImpariSystems.20100207142327.log
2010-03-06-12-03-37 ImpariSystems.20100202141328.xac
2010-03-06-12-03-37 ImpariSystems.20100202141328.log
2010-03-06-12-03-37 ImpariSystems.20100202140825.log
2010-03-06-12-03-37 ImpariSystems.20100202140824.xac
2010-03-06-12-03-37 ImpariSystems.20100202135209.log
2010-03-06-12-03-37 ImpariSystems.20100202135208.xac
2010-03-06-12-03-37 ImpariSystems.20100202134629.log
2010-03-06-12-03-37 ImpariSystems
2010-03-06-12-03-37 Financials.20070602144258.log
2010-03-06-12-03-37 Financials.20070524123205.log
2010-03-06-12-03-37 Financials.20070524123203.log
2010-03-06-12-03-37 Financials
2010-03-06-12-03-37 EasyCom.pdf
2010-03-06-12-03-37 Demo Site/
2010-03-06-12-03-37 Backup.20090222151812.log
2010-03-06-12-03-37 Backup.20090222151811.log
2010-03-06-12-03-37 Backup
2010-03-06-12-03-37 2007msrsmfinal1.pdf
2010-03-06-12-03-37 .
amrecover add ImpariSystems
Added file /ImpariSystems
amrecover extract

Extracting files using tape drive changer on host ubuntu.imparisystems.local.
The following tapes are needed: laptops-0001

Restoring files into directory /home/mlb
Continue [?/Y/n]? y

Extracting files using tape drive changer on host ubuntu.imparisystems.local.
Load tape laptops-0001 now
Continue [?/Y/n/s/d]? y
tar: This does not look like a tar archive
tar: ./ImpariSystems: Not found in archive
tar: Exiting with failure status due to previous errors
Extractor child exited with status 2
amrecover 


So I tried to retrieve an older version using amfetchdump - so 


amfetchdump laptops

Re: Amrecover problem - tar: This does not look like a tar archive

2010-03-12 Thread Matt Burkhardt
On Fri, 2010-03-12 at 08:21 +0100, muessi wrote:

 Dustin J. Mitchell schrieb:
  I'm not terribly familiar with crypto, but presumably you need the
  secret key to decrypt.  Hopefully that was stored somewhere other than
  on the old computer?
  
  BTW, once you get amfetchdump working, amrecover will work fine.
  Amrecover just doesn't show error messages very well.  For example,
  the message regarding am_passphrase was buried in the amidxtaped
  logfile.
  
 
 Well, I'm not quite sure we're talking bout the same thing, but when I use
 encryption, I get something like this in the dumpfiles:
 
  To restore, position tape at start of file and run:
    dd if=<tape> bs=32k skip=1 | /usr/sbin/amcrypt-ossl-asym -d | /bin/gzip -dc | /bin/tar -xpGf - ...
 
 
 So far, having to restore something encrypted, I did use the above commands 
 and
 it worked just fine.
 
 /Michael
 

I'm using amcryptsimple instead of the encryption methodology that
you're using - but thanks for the feedback.
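For what it's worth, the analogous manual restore pipeline for
amcryptsimple would presumably look like the sketch below - the device
name and skip count are placeholders taken from a dump header, and the
tar options simply mirror the amcrypt-ossl-asym example quoted earlier.
This is an assumption on my part, not a command from the thread:

```shell
# Sketch only: manual restore of a dump encrypted with amcryptsimple,
# mirroring the amcrypt-ossl-asym pipeline with the decryptor swapped.
dd if=<tape> bs=32k skip=1 \
  | /usr/sbin/amcryptsimple -d \
  | /bin/gzip -dc \
  | /bin/tar -xpGf - ...
```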

I'm also working with the gnupg folks and they've given me this feedback
- just also wanted to check and make sure that I'm reading the code
correctly.  I'll keep everyone in the loop



  Long story short, I use Amanda with amcryptsimple for my backups.
 My PC died completely, and I'm trying
 to get the backups onto another machine.  I've stepped through the
 programs and have found that it's calling gpg with 
  
   gpg --batch --quiet --no-mdc-warning --decrypt --passphrase-fd 3 3</var/lib/amanda/.am_passphrase
  
  I was under the impression that the passphrase (.am_passphrase) was
 just a clear text secret phrase.  However, the gpg call errors out
 with:
  
  gpg: decryption failed: bad key
 
 The bad key error doesn't mean the passphrase is wrong (that would
 be invalid passphrase).  It often means that the file you are
 decrypting is corrupt.  Was the file you are decrypting encrypted with
 a passphrase only or with a public key? 


Here's the code that calls gpg for the encryption:


gpg --batch --no-secmem-warning --disable-mdc --symmetric --cipher-algo AES256 --passphrase-fd 3 3</var/lib/amanda/.am_passphrase


The man page says not to use --cipher-algo, but it doesn't mention
whether that option is needed in order to decrypt the files.  Would
that have to happen?
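For reference, a minimal symmetric round-trip with the same flags can
be sketched as follows (the filenames are illustrative, not from the
thread).  OpenPGP stores the symmetric cipher algorithm inside the
encrypted-session-key packet, so gpg should not need --cipher-algo
again at decryption time:

```shell
# Encrypt: mirrors the Amanda call, choosing AES256 explicitly
gpg --batch --no-secmem-warning --disable-mdc --symmetric \
    --cipher-algo AES256 --passphrase-fd 3 \
    -o backup.tar.gz.gpg backup.tar.gz 3</var/lib/amanda/.am_passphrase

# Decrypt: no --cipher-algo; gpg reads the algorithm from the packet
gpg --batch --quiet --no-mdc-warning --decrypt --passphrase-fd 3 \
    backup.tar.gz.gpg 3</var/lib/amanda/.am_passphrase > restored.tar.gz
```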

Thanks!

Matt Burkhardt
Impari Systems, Inc.

Customer Relationship Management Systems
We help you find and keep your best customers
m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Amrecover problem - tar: This does not look like a tar archive

2010-03-11 Thread Matt Burkhardt
tar: ./Q3IncomeStatement.20090222151410.log: Not found in archive
tar: ./YorkInvoice: Not found in archive
tar: ./YorkInvoice.txt.2008022236.log: Not found in archive
tar: ./YorkInvoice.txt.2008022237.log: Not found in archive
tar: ./Demo Site: Not found in archive
tar: Exiting with failure status due to previous errors
Extractor child exited with status 2

Any ideas?

Thanks,


Matt Burkhardt
Impari Systems, Inc.

Customer Relationship Management Systems
We help you find and keep your best customers
m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Amrecover problem - tar: This does not look like a tar archive

2010-03-11 Thread Matt Burkhardt
Thanks for being so quick Dustin...

I ran amfetchdump, and on the first run through it complained
that /var/lib/amanda/.am_passphrase was missing, so I created it and put
in the correct passphrase.  Now I'm getting this:


amfetchdump: slot 10: time 20100205150231 label laptops-0010 (exact label match)
Scanning volume laptops-0010 (slot 10)
amfetchdump: 1: restoring FILE: date 20100205150231 host 
mlb-laptop.imparisystems.local disk /home/mlb/Documents lev 1 comp .gz program 
/bin/tar crypt enc client_encrypt /usr/sbin/amcryptsimple client_decrypt_option 
-d
gpg: decryption failed: bad key


So I'm hunting around for the next steps - can I create a new key file
without the old computer?

On Thu, 2010-03-11 at 13:22 -0600, Dustin J. Mitchell wrote:

 Amrecover tends to hide the actual error messages, since they occur on
 the server.
 
 Try doing a recovery using amfetchdump, instead.  I suspect that
 there's a missing encryption key or something preventing the recovery.
 
 Dustin
 
 --
 Open Source Storage Engineer
 http://www.zmanda.com



Matt Burkhardt
Impari Systems, Inc.

Customer Relationship Management Systems
We help you find and keep your best customers
m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: New look for amanda.org

2010-02-19 Thread Matt Burkhardt
A million times better!

The only thing that I would change is the 'sponsored by' message that's at
the bottom of the page - maybe have a second one over the top right
menu where it says Download | FAQ | Wiki


On Thu, 2010-02-18 at 14:34 -0800, Tatjana wrote:

 Hello All,
 
 I am a web developer working at Zmanda. We wanted to give a facelift to 
 amanda.org to make it more user-friendly as well as visually appealing. 
 Our objective was to do so without losing the site's simplicity. Here is 
 a draft of new version of amanda.org:
 
 http://www.amanda.org/new_site_preview/
 
 Currently I have only two pages coded in order to get feedback - the 
 main page and downloads page. Rest of the links point to the main page 
 itself.
 
 Please let me know if you have any feedback on this new design.
 
 
 
 Regards,
 
 Tatjana Schlothauer
 
 



Matt Burkhardt
Impari Systems, Inc.

Customer Relationship Management Systems
We help you find and keep your best customers
m...@imparisystems.com
http://www.imparisystems.com 
http://www.linkedin.com/in/mlburkhardt 
502 Fairview Avenue
Frederick, MD  21701
work (301) 682-7901
cell   (301) 802-3235




Re: Problems with changing from IP address to named host

2009-06-12 Thread Matt Burkhardt
D'oh!  I was pinging just the machine name and not the fully qualified
name and it turns out it was a problem with the name service
configuration file.

A line in the /etc/nsswitch.conf file needed to be changed from 

hosts:  files mdns4_minimal [NOTFOUND=return] dns mdns4

to 

hosts:  files dns mdns4

Thanks again for pointing me in the right direction - I'm slowly getting
better!

On Thu, 2009-06-11 at 20:26 -0400, Dustin J. Mitchell wrote:

 On Thu, Jun 11, 2009 at 7:43 PM, Matt Burkhardtm...@imparisystems.com wrote:
  amandad: check_name_give_sockaddr:
  resolve_hostname('ubuntu.imparisystems.local'): Name or service not known
 
 
  What else do I need to change?
 
 I think this is the place to start -- why is that domain name not
 resolving on the client?
 
 Dustin
 

-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 



Problems with changing from IP address to named host

2009-06-11 Thread Matt Burkhardt
I finally got DNS running on my main server and now I'm getting errors
from my backups that run across the network.  Here's part of the output
in my planner.datestring.debug file


1244557453.125414: planner: make_socket opening socket with family 2
1244557453.125473: planner: connect_port: Try  port 516: available - Success
1244557453.131565: planner: connected to 192.168.1.105.10080
1244557453.131624: planner: our side is 0.0.0.0.516
1244557453.131646: planner: try_socksize: send buffer size is 65536
1244557453.131659: planner: try_socksize: receive buffer size is 65536
1244557458.155442: planner: security_stream_seterr(0x80719b0, recv error: 
Connection reset by peer)
1244557458.155511: planner: security_seterror(handle=0x80713d0, 
driver=0xb7e6be80 (BSDTCP) error=recv error: Connection reset by peer)
1244557458.155632: planner: security_close(handle=0x80713d0, driver=0xb7e6be80 
(BSDTCP))
1244557458.155644: planner: security_stream_close(0x80719b0)
1244557458.155874: planner: pid 8332 finish time Tue Jun  9 10:24:18 2009

So, I go over to the client and check /var/lib/amanda/.amandahosts file and 
change the IP addresses to the new domain name

ubuntu.imparisystems.local amandabackup amdump

and check /etc/xinetd.d/amandaclient on the client machine to make sure I don't 
have an only_from restriction 

service amanda
{
disable = no
flags   = IPv4
socket_type = stream
protocol= tcp
wait= no
user= amandabackup
group   = disk
groups  = yes
server  = /usr/libexec/amanda/amandad
server_args = -auth=bsdtcp amdump
}

and every time I run amcheck, I get

Amanda Backup Client Hosts Check

WARNING: mlb-laptop.imparisystems.local: selfcheck request failed: recv error: 
Connection reset by peer
Client check: 1 host checked in 5.133 seconds.  1 problem found.

(brought to you by Amanda 2.6.1p1)

The amandad debug file from the client says 

amandad: check_name_give_sockaddr: 
resolve_hostname('ubuntu.imparisystems.local'): Name or service not known


What else do I need to change?

Thanks - as always!


-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 



Re: [Amanda-users] Cloud Backup...but to my own Data Center

2009-06-03 Thread Matt Burkhardt
In the meantime, can't he just backup across his WAN using a server at
his central office?

On Tue, 2009-06-02 at 16:58 -0700, Dustin J. Mitchell wrote:

 On Tue, Jun 2, 2009 at 1:36 PM, Hopifan amanda-fo...@backupcentral.com 
 wrote:
  Now the question- I am looking for solution, something like Zmanda, but 
  instead of backing up to Amazon S3 I want to backup data from these 30 
  offices to my Data Center. I would appreciate any help. I was looking at 
  Data Domain solution but it was too expensive. Ideally Cloud Backup but to 
  my own location would be the best.
 
 If you're interested in putting some development work into this, we
 are working on a project called libzcloud
 (http://github.com/zmanda/libzcloud/tree/) which Amanda will use to
 talk to arbitrary clouds.  If you write an interface from libzcloud to
 your cloud system, then Amanda will be able to use it.
 
 Dustin
 

-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 



xinetd and netstats

2009-05-14 Thread Matt Burkhardt
Okay - I'm getting an amanda server up and running for the local Boys
and Girls Club on an Ubuntu 8.04 LTS server - If I run amcheck daily, I
get

Amanda Tape Server Host Check
-
Holding disk /media/raid5/amandabackup/daily/dumps: 266228440 KB disk
space available, using 255988440 KB
slot 10: Found a non-amanda tape, will label it `daily-001'.
NOTE: skipping tape-writable test
Found a brand new tape, will label it daily-001.
NOTE: conf info dir /etc/amanda/daily/curinfo does not exist
NOTE: it will be created on the next run.
NOTE: index dir /etc/amanda/daily/index does not exist
NOTE: it will be created on the next run.
Server check took 0.063 seconds

Amanda Backup Client Hosts Check

WARNING: Usage of fully qualified hostname recommended for Client
localhost.
WARNING: localhost: selfcheck request failed: timeout waiting for ACK
Client check: 1 host checked in 29.994 seconds, 1 problem found

(brought to you by Amanda 2.5.2p1)

So I run netstat -a | grep amanda to see what services I have running
and I get

tcp0  0 *:amandaidx *:*
LISTEN

So I figure I must have something wrong with my /etc/xinetd.d/amanda
file - but I can't find the issue.  I have restarted xinetd so often I
think I'm going crazy!


Contents of /etc/xinetd.d/amanda

# default: on
# description: The amanda service
service amanda
{
only_from   = 192.168.10.100
socket_type = dgram
protocol= udp
wait= yes
user= backup
group   = backup
groups  = yes
server  = /usr/lib/amanda/amandad
server_args = -auth=bsd amdump amindexd amidxtaped
disable = no
}
# default: on
# description: The amanda index service
service amandaidx
{
only_from   = 192.168.10.100
socket_type = stream
protocol= tcp
wait= no
user= backup
group   = backup
groups  = yes
server  = /usr/lib/amanda/amindexd
server_args = -auth=bsd amdump amindexd amidxtaped
disable = no
}
#default: on
# description: The amanda tape service
service amidxtape
{
only_from   = 192.168.10.0/24
socket_type = stream
protocol= tcp
wait= no
user= backup
group   = backup
groups  = yes
server  = /usr/lib/amanda/amidxtaped
server_args = -auth=bsd amdump amindexd amidxtaped
disable = no
}




-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 



Re: xinetd and netstats - solved!

2009-05-14 Thread Matt Burkhardt
Just got a little further - 

I realized that the Ubuntu apt-get package makes modifications to your
inetd.conf file, so that's why amandaidx was running.  Also, the
amanda-server package does not install the client portion (amandad), so
I needed to run sudo apt-get install amanda-client.

Once I commented out the lines in inetd.conf and restarted the machine,
I can now run netstat -a | grep am and get

tcp0  0 *:amandaidx *:* LISTEN 
tcp0  0 *:amidxtape *:* LISTEN 
udp0  0 *:amanda*:*

and everything checks out fine.



On Thu, 2009-05-14 at 14:33 -0400, Matt Burkhardt wrote:
 [...]
-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 




Backing up to CDR

2009-05-11 Thread Matt Burkhardt
I'm doing some pro bono work for our local Boys and Girls Club and
helping them set up a server.  We're using Amanda for a backup and
trying to back everything up to a CDR.  I thought there was a device for
it or a cdrtaper utility or something.  I've tried googling it, but no
luck and was hoping someone might know...

Thanks again!
-- 
Matt Burkhardt, M.Sci. Technology Management
m...@imparisystems.com
(301) 682-7901
502 Fairview Avenue
Frederick, MD  21701
http://www.imparisystems.com 




Re: Two Questions - Clients and MySQL

2009-03-03 Thread Matt Burkhardt
On Mon, 2009-03-02 at 17:18 -0500, Dustin J. Mitchell wrote:
 The must-be-logged-in problem sounds like it might be an Ubuntu security 
 thing??

I think I may have figured this out - basically it's all wireless and
running WPA off a Linksys router - so would someone have to be logged
onto the router for the server to find the laptop?

Would it work with a wired network?
 
 On Mon, Mar 2, 2009 at 5:05 PM, Matt Burkhardt m...@imparisystems.com wrote:
  I know that Zmanda does MySQL backups - but does Amanda as well?  Is that
  one of the differentiations for the products?
 
 ZRM is also open-source:
   http://www.zmanda.com/backup-mysql.html
 
 Dustin
 
Thanks - both to Dustin and Paul!



Two Questions - Clients and MySQL

2009-03-02 Thread Matt Burkhardt
You know, the more that I get to know and understand Amanda, the more
I'm impressed.  Thanks guys!

First off, I have one laptop - but it appears that someone has to be
logged on in order for the backup to work properly.  I thought with the
additional user, the machine just had to be on.  Did I do something
wrong?  Is it possible to just have it sign on and back up without a
real user logged on?

I know that Zmanda does MySQL backups - but does Amanda as well?  Is
that one of the differentiations for the products?

As always - thanks for your quick answers and thoughts...

Matt Burkhardt, MS Technology Management
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
m...@imparisystems.com
www.imparisystems.com
(301) 682-7901




Re: S3 Backup using 2.6.1b2

2009-01-13 Thread Matt Burkhardt
On Mon, 2009-01-12 at 12:52 -0500, Dustin J. Mitchell wrote:

 On Mon, Jan 12, 2009 at 12:23 PM, Graham Wooden gra...@g-rock.net wrote:
  Also keep in mind the TCP sliding window when going over the Internet.  You
  can have a fat pipe (on both ends), but with the increased hop count and the
  TCP overhead you will only get a percentage of your pipe.  In my experience,
  it has been less than half of the pipe.  Unless the backup is
  streamed over UDP (more efficient but no error checking).
 
 That's a good point.  The broader point, though, is that neither
 Amazon nor most of the ISPs between you and Amazon are too keen to
 squeeze out every last ounce of upload speed for you, so anyone
 backing up more than a few gigs nightly to Amazon is going to be
 unhappy - regardless of pipe size.
 
 Note that you can use Amanda's planner to good effect, by specifying,
 say, a 2G tape size, and lots of small DLEs, thereby backing up much
 more than 2G of data over the course of your dumpcycle.  I use this
 technique to keep my nightly backups to about 800M.

Thanks guys!  I'm parsing out the directories so that they're around
800M apiece and creating an exclude list.  Since they're music files,
I'm just running the full backup once; the exclude files will then just
pick up the new stuff.  I'll keep an eye on it to make sure it doesn't
get too big.
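For anyone finding this thread later, the planner trick Dustin describes might look like this; the tapetype name, paths, and DLE split below are hypothetical, and gui-base is the dumptype from the config posted elsewhere in this archive:

```
# amanda.conf: cap a single run at ~2G so the planner schedules
# full dumps of the small DLEs across the dumpcycle
define tapetype S3SMALL {
    comment "hypothetical capped virtual tape"
    length 2 gbytes
}
tapetype S3SMALL

# disklist: several small DLEs instead of one 54GB Music entry
localhost /samba/bigdrive/Music/A-F gui-base
localhost /samba/bigdrive/Music/G-M gui-base
localhost /samba/bigdrive/Music/N-Z gui-base
```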


 
 Dustin
 

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
m...@imparisystems.com
www.imparisystems.com
(301) 682-7901




S3 Backup using 2.6.1b2

2009-01-08 Thread Matt Burkhardt
# backups performed at the beginning of the previous cycle
runtapes 1              # number of tapes to be used in a single run of amdump
tpchanger "chg-multi"   # the tape-changer glue script
tapedev "S3:"           # the no-rewind tape device to be used
device_property "S3_ACCESS_KEY" "My Access Key"
device_property "S3_SECRET_KEY" "My Secret Key"
device_property "S3_SSL" "false"
changerfile "changer.conf"
tapetype HARDDISK       # what kind of tape it is (see tapetypes below)

holdingdisk hd1 {
    directory "/samba/smalldrive/dumps/music"
    use -1000 Mb
}
holdingdisk hd2 {
    directory "/samba/bigdrive/dumps/music"
    use -1000 Mb
}

label_new_tapes "musicSet1-"        # Enable auto labeling
labelstr "^musicSet1-[0-9][0-9]*$"  # label constraint regex: all tapes mus$

dtimeout 1800   # number of idle seconds before a dump is aborted
ctimeout 30     # maximum number of seconds that amcheck waits for each client host
etimeout 300    # number of seconds per filesystem for estimates

define dumptype global {
    comment "Global definitions"
    auth "bsdtcp"
}

define dumptype gui-base {
    global
    program "GNUTAR"
    comment "gui base dumptype dumped with tar"
    compress none
    index yes
}

define tapetype HARDDISK {
    comment "Virtual Tapes"
    length 10 mbytes
}

includefile "advanced.conf"
includefile "/var/lib/amanda/template.d/dumptypes"
includefile "tapetypes"





Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
m...@imparisystems.com
www.imparisystems.com
(301) 682-7901






Re: Time outs for S3 Backups

2008-12-08 Thread Matt Burkhardt
On Sun, 2008-12-07 at 21:40 -0500, Nikolas Coukouma wrote:

 On Sun, 2008-12-07 at 13:25 -0500, Dustin J. Mitchell wrote:
  On Sun, Dec 7, 2008 at 12:22 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
   Now that I've got my holding disk set up, I've been running into issues 
   with
   timeouts from S3.
 
  ... These errors
  are only reported to you after 14 retries, which means Amazon has
  been given ample opportunity to resolve any network issues.
  ...
 
 Actually, the 2.6.0p2 release (and earlier) uses significantly different
 values.
 The maximum number of retries is only 5 and the resulting time is fairly
 short (the backoff increases exponentially). I believe that 2.6.0p2 only
 waits a couple of seconds.
 
 If the bucket you're trying to back up to isn't created yet, that's
 probably the problem. If the problem persists, I'd recommend either
 trying the beta or compiling a copy from source, tweaking the values in
 device-src/s3.c .

I'm running 2.6.1b1 right now - I'm using the Ubuntu Hardy package for
i386.  

 
 Specifically, you'd want to change the values
 #define EXPONENTIAL_BACKOFF_START_USEC 1
 #define EXPONENTIAL_BACKOFF_MAX_RETRIES 5
 to
 #define EXPONENTIAL_BACKOFF_START_USEC G_USEC_PER_SEC/100
 #define EXPONENTIAL_BACKOFF_MAX_RETRIES 14
 

I'll give it a try, but it's a bear to uninstall with the package and
install from source - wish me luck!
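For a sense of scale - assuming the delay simply doubles on each retry, which is what these names suggest (I haven't traced s3.c beyond the quoted defines) - the two sets of values differ by orders of magnitude:

```shell
# Worst-case cumulative backoff is start * (2^retries - 1) microseconds.

# 2.6.0p2 defaults: start 1 usec, 5 retries
echo $(( 1 * ((1 << 5) - 1) ))        # 31 usec: effectively no waiting at all

# Proposed values: start G_USEC_PER_SEC/100 = 10000 usec, 14 retries
echo $(( 10000 * ((1 << 14) - 1) ))   # 163830000 usec, roughly 164 seconds
```

With the old values the waiting between the five retries is negligible, so the whole sequence gives up within a couple of seconds of request time; fourteen retries give Amazon minutes to recover.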

 Regards,

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Time outs for S3 Backups

2008-12-08 Thread Matt Burkhardt
On Sun, 2008-12-07 at 21:40 -0500, Nikolas Coukouma wrote:

 On Sun, 2008-12-07 at 13:25 -0500, Dustin J. Mitchell wrote:
  On Sun, Dec 7, 2008 at 12:22 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
   Now that I've got my holding disk set up, I've been running into issues 
   with
   timeouts from S3.
 
  ... These errors
  are only reported to you after 14 retries, which means Amazon has
  been given ample opportunity to resolve any network issues.
  ...
 
 Actually, the 2.6.0p2 release (and earlier) uses significantly different
 values.
 The maximum number of retries is only 5 and the resulting time is fairly
 short (the backoff increases exponentially). I believe that 2.6.0p2 only
 waits a couple of seconds.
 
 If the bucket you're trying to back up to isn't created yet, that's
 probably the problem. If the problem persists, I'd recommend either
 trying the beta or compiling a copy from source, tweaking the values in
 device-src/s3.c .
 
 Specifically, you'd want to change the values
 #define EXPONENTIAL_BACKOFF_START_USEC 1
 #define EXPONENTIAL_BACKOFF_MAX_RETRIES 5
 to
 #define EXPONENTIAL_BACKOFF_START_USEC G_USEC_PER_SEC/100
 #define EXPONENTIAL_BACKOFF_MAX_RETRIES 14
 
 Regards,

I'm sure you probably already know, but I'm running the Ubuntu Hardy
2.6.1b1 package.  I just downloaded the source for 2.6.1b1 and those
definitions are already set in the s3.c file - would it be worth it to
try and uninstall the package and reinstall from source?

Thanks!

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Time outs for S3 Backups

2008-12-08 Thread Matt Burkhardt
On Mon, 2008-12-08 at 12:30 -0500, Nikolas Coukouma wrote:

 On Mon, 2008-12-08 at 07:02 -0500, Matt Burkhardt wrote:
  On Sun, 2008-12-07 at 21:40 -0500, Nikolas Coukouma wrote: 
   On Sun, 2008-12-07 at 13:25 -0500, Dustin J. Mitchell wrote:
On Sun, Dec 7, 2008 at 12:22 PM, Matt Burkhardt [EMAIL PROTECTED] 
wrote:
 Now that I've got my holding disk set up, I've been running into 
 issues with
 timeouts from S3.
   
... These errors
 are only reported to you after 14 retries, which means Amazon has
been given ample opportunity to resolve any network issues.
...
   ...
   If the bucket you're trying to back up to isn't created yet, that's
   probably the problem. If the problem persists, I'd recommend either
   trying the beta or compiling a copy from source, tweaking the values in
   device-src/s3.c 
  I'm sure you probably already know, but I'm running the Ubuntu Hardy
  2.6.1b1 package.  I just downloaded the source for 2.6.1b1 and those
  definitions are already set in the s3.c file - would it be worth it to
  try and uninstall the package and reinstall from source?
 
 Nope.
 
 Since you're running 2.6.1b1, you can try setting the property S3_SSL to
 false temporarily and using a packet capture tool like Wireshark to
 take a closer look. This disables the encryption that's usually used to
 transfer your data to Amazon, so it's not a good idea to leave it off
 during normal operation.
 
 If you want to try that and send the packet capture file (from having
 SSL disabled) to me off-list, I'd be happy to take a look. If you're not
 planning to look yourself, you might save yourself some installation by
 using tcpdump to create the capture:
 tcpdump -i interface -s 1500 -w some-file

Okay - dumb question time.

I don't have a GUI installed on the server, so I can't use Wireshark -
I'm using the tcpdump command instead.  I just ran it on the command
line in another terminal session after I started the amflush command.
How do I know when it's done, and how do I stop it?
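For what it's worth, tcpdump has no notion of "done" here - it captures until you stop it. Two common ways to handle that (the interface name and filename below are guesses; substitute your own):

```shell
# Option 1: run tcpdump in the foreground; once amflush has hit its
# timeout error, press Ctrl-C and tcpdump flushes the file and exits.
sudo tcpdump -i eth0 -s 1500 -w s3-capture.pcap

# Option 2: give it a fixed window with coreutils timeout (10 minutes
# here); timeout sends SIGTERM when the time is up.
sudo timeout 600 tcpdump -i eth0 -s 1500 -w s3-capture.pcap
```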


 
 Best of luck,

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Time outs for S3 Backups

2008-12-08 Thread Matt Burkhardt
It's about a 720 MB file before it died - do you still want it off-list?

On Mon, 2008-12-08 at 12:30 -0500, Nikolas Coukouma wrote:

 On Mon, 2008-12-08 at 07:02 -0500, Matt Burkhardt wrote:
  On Sun, 2008-12-07 at 21:40 -0500, Nikolas Coukouma wrote: 
   On Sun, 2008-12-07 at 13:25 -0500, Dustin J. Mitchell wrote:
On Sun, Dec 7, 2008 at 12:22 PM, Matt Burkhardt [EMAIL PROTECTED] 
wrote:
 Now that I've got my holding disk set up, I've been running into 
 issues with
 timeouts from S3.
   
... These errors
 are only reported to you after 14 retries, which means Amazon has
been given ample opportunity to resolve any network issues.
...
   ...
   If the bucket you're trying to back up to isn't created yet, that's
   probably the problem. If the problem persists, I'd recommend either
   trying the beta or compiling a copy from source, tweaking the values in
   device-src/s3.c 
  I'm sure you probably already know, but I'm running the Ubuntu Hardy
  2.6.1b1 package.  I just downloaded the source for 2.6.1b1 and those
  definitions are already set in the s3.c file - would it be worth it to
  try and uninstall the package and reinstall from source?
 
 Nope.
 
 Since you're running 2.6.1b1, you can try setting the property S3_SSL to
 false temporarily and using a packet capture tool like Wireshark to
 take a closer look. This disables the encryption that's usually used to
 transfer your data to Amazon, so it's not a good idea to leave it off
 during normal operation.
 
 If you want to try that and send the packet capture file (from having
 SSL disabled) to me off-list, I'd be happy to take a look. If you're not
 planning to look yourself, you might save yourself some installation by
 using tcpdump to create the capture:
 tcpdump -i interface -s 1500 -w some-file
 
 Best of luck,

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Time outs for S3 Backups

2008-12-07 Thread Matt Burkhardt
Now that I've got my holding disk set up, I've been running into issues
with timeouts from S3.  I've increased the dtimeout from 1800 seconds to
3600 seconds - the files that I'm trying to back up are rather large -
about 56GB.  Does anyone have experience with this?

Thanks again!

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Time outs for S3 Backups

2008-12-07 Thread Matt Burkhardt
On Sun, 2008-12-07 at 14:26 -0500, Dustin J. Mitchell wrote:

 On Sun, Dec 7, 2008 at 1:37 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
 localhost /samba/bigdrive/Music lev 0: partial taper:  While writing data
  block to S3: Too many retries; last message was 'Your socket connection to
  the server was not read from or written to within the timeout period. Idle
  connections will be closed.' (RequestTimeout) (HTTP 400) (after 14 retries)
 
 Hmm, I recall having this kind of trouble way back when S3 was still
 in beta, but haven't seen it since.  I assumed it was due to a bug on
 the Amazon side.
 
 The S3 device does not begin sending data until it has a full block
 available, and then sends that block as quickly as the network will
 permit, so this sort of timeout cannot be caused by slowness on the
 client or anything like that.
 
 Have you modified your blocksize from the default 10M?  

No, I've left it based on the examples from Amanda - here's what's in my
amanda.conf

define tapetype HARDDISK {
    comment "Virtual Tapes"
    length 10 mbytes
}


 Is there some
 reason that a TCP connection to Amazon would stall out for more than
 60 seconds?  

I've got Verizon DSL - seems to be working fine - so not likely

 Can you upload multi-megabyte files to S3 using other
 utilities (e.g., JetS3t)?

I've been able to back up the document portion of my server on a regular
basis - it's uploading and storing a little over 1GB on S3 and it seems
to work fine.  For the music backup, it's gotten up to file 1c4 - about
4.7 GB.

 
 Dustin
 

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Holding Disk Size

2008-12-06 Thread Matt Burkhardt
I'm running out of holding disk space during the backup and I was
wondering what I can do.  Here's my scenario:

I've got a directory called Music that has 54GB on it.  My disk has 36GB
free.  I have a second disk with 34GB free on it.  Can I have multiple
holding disks?
If I give the disk list as each subdirectory under Music, will it need
less holding space?  Should I break up the backup jobs into several
ones?  Should I just buy a bigger disk?
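For reference, amanda.conf does accept more than one holdingdisk block, so both drives could contribute; a sketch with hypothetical paths and sizes (use caps how much of each disk Amanda may take):

```
holdingdisk hd1 {
    directory "/samba/smalldrive/dumps"
    use 30 Gb    # leave some headroom on the 36GB-free disk
}
holdingdisk hd2 {
    directory "/samba/bigdrive/dumps"
    use 30 Gb
}
```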

Thanks!

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Error - tcpm_recv_token: invalid size: amandad:

2008-12-04 Thread Matt Burkhardt
Thanks Dustin!

Turns out that there was a permissions problem with /var/lib/amanda - I
had at first installed from the Ubuntu repos, then uninstalled and used
the packages from Zmanda - so I just changed the permissions to
amandabackup:disk and amcheck runs fine.


On Wed, 2008-12-03 at 21:18 -0500, Dustin J. Mitchell wrote:

 On Wed, Dec 3, 2008 at 5:32 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
  WARNING: 192.168.1.102: selfcheck request failed: tcpm_recv_token: invalid
  size: amandad:
 
 What do you see if you telnet to that IP on port 10080?
 
 Dustin
 

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




I accidentally ran chown and need help

2008-12-03 Thread Matt Burkhardt
I'm trying to set up a backup for a client machine running 8.10 Ubuntu
using an Ubuntu 8.04 server running 2.6.1b1 - but the version on the
client machine is an earlier one.

Anyway, I fat-fingered something and changed the ownership of all the
files in /usr/libexec/amanda, and I'm now getting an error message that
says 

Amanda Backup Client Hosts Check

ERROR: 192.168.1.102: [/usr/libexec/amanda/runtar is not SUID root]
Client check: 1 host checked in 0.091 seconds.  1 problem found.

Right now, I have that set to root:disk - I basically copied the same
permissions from the /usr/libexec on the server.  Here's the directory
with the owners/groups

 ls -lh /usr/libexec/amanda
total 988K
-rwxr-xr-x 1 root root  29K 2008-07-30 19:29 amandad
-rw-r--r-- 1 root root  957 2008-07-30 19:28 amanda-sh-lib.sh
-rw-r--r-- 1 root root  135 2008-07-30 19:29 amcat.awk
-rwxr-xr-x 1 root root 6.6K 2008-07-30 19:29 amcleanupdisk
-rwxr-xr-x 1 root root  19K 2008-07-30 19:29 amidxtaped
-rwxr-xr-x 1 root root  43K 2008-07-30 19:29 amindexd
-rwxr-xr-x 1 root root 7.2K 2008-07-30 19:29 amlogroll
-rw-r--r-- 1 root root  18K 2008-07-30 19:29 amplot.awk
-rw-r--r-- 1 root root 3.3K 2008-07-30 19:29 amplot.g
-rw-r--r-- 1 root root 3.3K 2008-07-30 19:29 amplot.gp
-rwxr-xr-x 1 root root  11K 2008-07-30 19:29 amtrmidx
-rwxr-xr-x 1 root root 9.9K 2008-07-30 19:29 amtrmlog
drwxr-xr-x 2 root root 4.0K 2008-08-02 10:18 application
-rwxr-xr-- 1 root disk  15K 2008-07-30 19:29 calcsize
-rwxr-xr-x 1 root root  12K 2008-07-30 19:29 chg-chio
-rwxr-xr-x 1 root root 9.7K 2008-07-30 19:29 chg-chs
-rwxr-xr-x 1 root root 7.9K 2008-07-30 19:29 chg-disk
-rwxr-xr-x 1 root root 7.3K 2008-07-30 19:29 chg-iomega
-rwxr-xr-x 1 root root 5.1K 2008-07-30 19:29 chg-juke
-rw-r--r-- 1 root root 4.4K 2008-07-30 19:29 chg-lib.sh
-rwxr-xr-x 1 root root 6.8K 2008-07-30 19:29 chg-manual
-rwxr-xr-x 1 root root  14K 2008-07-30 19:29 chg-mcutil
-rwxr-xr-x 1 root root 4.8K 2008-07-30 19:29 chg-mtx
-rwxr-xr-x 1 root root  12K 2008-07-30 19:29 chg-multi
-rwxr-xr-x 1 root root 1.7K 2008-07-30 19:29 chg-null
-rwxr-xr-x 1 root root 4.0K 2008-07-30 19:29 chg-rait
-rwxr-xr-x 1 root root 6.8K 2008-07-30 19:29 chg-rth
-rwxr-xr-x 1 root root 196K 2008-07-30 19:29 chg-scsi
-rwxr-xr-x 1 root root  40K 2008-07-30 19:29 chg-zd-mtx
-rwxr-xr-x 1 root root  22K 2008-07-30 19:29 chunker
-rwxr-xr-x 1 root root  71K 2008-07-30 19:29 driver
-rwxr-xr-x 1 root disk  39K 2008-07-30 19:29 dumper
-rwxr-xr-- 1 root disk 6.8K 2008-07-30 19:29 killpgrp
-rwxr-xr-x 1 root root 5.8K 2008-07-30 19:29 noop
-rwxr-xr-x 1 root root 5.1K 2008-07-30 19:28 patch-system
-rwxr-xr-x 1 root disk  60K 2008-07-30 19:29 planner
-rwxr-xr-- 1 root disk 4.9K 2008-07-30 19:29 rundump
-rwxr-xr-- 1 root disk 7.8K 2008-07-30 19:29 runtar
-rwxr-xr-x 1 root root  27K 2008-07-30 19:29 selfcheck
-rwxr-xr-x 1 root root  48K 2008-07-30 19:29 sendbackup
-rwxr-xr-x 1 root root  51K 2008-07-30 19:29 sendsize
-rwxr-xr-x 1 root root  49K 2008-07-30 19:29 taper
-rwxr-xr-x 1 root root 4.1K 2008-07-30 19:29 versionsuffix
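A plausible repair for that error, judging from the listing above (runtar and the other root:disk binaries lack the setuid bit): restore setuid-root on those four. The exact mode is an assumption - compare against a working install before running this:

```shell
cd /usr/libexec/amanda
# calcsize, killpgrp, rundump and runtar are root:disk in the listing;
# setuid root plus group execute gives -rwsr-x---
sudo chown root:disk calcsize killpgrp rundump runtar
sudo chmod 4750 calcsize killpgrp rundump runtar
```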

Thanks!


Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Release of Amanda-2.6.1b1

2008-12-02 Thread Matt Burkhardt
Are the packages on http://www.zmanda.com/download-amanda.php going to
be updated soon?

On Tue, 2008-12-02 at 13:21 -0500, Nikolas Coukouma wrote:

 On Thu, 2008-11-27 at 08:53 -0500, Jean-Louis Martineau wrote:
  Hello,
  
  The Amanda core team is pleased to announce the first beta release of
  Amanda 2.6.1, the 2.6.1b1 release.
  
  Source tarballs are available from
* http://www.amanda.org
* https://sourceforge.net/project/showfiles.php?group_id=120
  Binaries for many systems are available from
* http://www.zmanda.com/download-amanda.php
 
 If you use Amazon S3 you may encounter a segmentation fault. A fix has
 already been committed and is available in the more recent (20081202 or
 later) source tarballs.
 http://www.zmanda.com/community-builds.php
 http://www.zmanda.com/downloads/community/community-builds/amanda-2.6.1b1-20081202.tar.gz
 
 Regards,

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: Installing 2.6.0p2 on Ubuntu

2008-11-29 Thread Matt Burkhardt
Sorry it takes me a while to get back.

Anyway - here's what's happened to me - 

Originally installed
http://www.zmanda.com/downloads/community/Amanda/2.6.0p2/Ubuntu-Hardy/amanda-backup-server_2.6.0p2-1_i386.deb
 but kept getting an error about missing a p1 library whenever I tried to run 
amrecover (so I think there's a problem with that Amanda package).  So I did an 
apt-get remove.

Then I downloaded the Amanda 2.6.0p2 source and installed it from
scratch, but ran into problems with the configuration - it wasn't
finding the configuration files that I had set up originally.  I was
getting ready to rerun ./configure with parameters matching my system,
but then Amanda 2.6.1b1 was announced, so I ran make uninstall on the
2.6.0p2 version, then downloaded
http://www.zmanda.com/downloads/community/Amanda/2.6.1b1/Ubuntu-Hardy/amanda-backup-server_2.6.1b1-1_i386.deb
and everything seems to be as it should!

I'm currently running an amflush; I was getting an error about it when I
tried to run amrecover.  It's backing up to Amazon S3, and I'll let you
know how that works.

Thanks!

On Thu, 2008-11-20 at 18:14 -0600, Dustin J. Mitchell wrote:

 On Wed, Nov 19, 2008 at 3:00 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
  AMRECOVER Version 2.6.0p2. Contacting server on ubuntu ...
  [request failed: timeout waiting for ACK]
 
  Is there another configure directive to find the original files?
 
 This can be because amrecover is using the wrong auth mechanism to
 find the server (look for auth in /etc/amanda/amanda-client.conf),
 or because something is wrong with your xinetd configuration -- is the
 executable that it's pointing to still at that location?
 
 Dustin
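The client-side setting Dustin refers to lives in /etc/amanda/amanda-client.conf and looks roughly like this; the hostnames are placeholders, and auth must match what the server's dumptypes use ("bsdtcp" in the configs earlier in this archive):

```
# /etc/amanda/amanda-client.conf
conf "DailySet1"        # config to request from the server
index_server "ubuntu"   # host running amindexd
tape_server "ubuntu"    # host running amidxtaped
auth "bsdtcp"           # must match the server's dumptype auth
```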
 

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Installing 2.6.0p2 on Ubuntu

2008-11-19 Thread Matt Burkhardt
I used the Ubuntu Hardy Heron package to install the Amanda server on my
server.  It ran the backups correctly, but now I need to recover a file.
When I ran amrecover, it was asking for one of the 2.6.0p1 files - so I
uninstalled Amanda through apt-get, pulled down the 2.6.0p2 package,
and compiled it on the machine.

I changed the user and groups when running ./configure to match up with
the users and groups defined with the Ubuntu package - and now I'm
trying to run amrecover by typing

 sudo amrecover /etc/amanda/DailySet1

but I get - 

AMRECOVER Version 2.6.0p2. Contacting server on ubuntu ...
[request failed: timeout waiting for ACK]

Is there another configure directive to find the original files?  

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
502 Fairview Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 682-7901




Re: amrecover with 2.6.0p2

2008-09-17 Thread Matt Burkhardt
Could it be that I don't have the amanda client on my server?

On Tue, 2008-09-16 at 21:47 -0500, Dan Locks wrote:

 Matt Burkhardt wrote:
  This is an upgrade from p1 to p2
 
  I used the i386 server version for Hardy
 
 
 That package checks out, so I suspect this is a problem with the 
 behavior of our package during upgrade.  The easiest solution is 
 probably to remove the p2 package, then loosely make sure everything is 
 gone (search for /usr/sbin/am* /usr/lib/amanda/ /usr/libexec/amanda/).  
 Then install the package again. 
 
 This is just a workaround.  It's going to take me awhile to figure out 
 what's wrong with our debian upgrade process.
 
 Dan
 
 

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
401 Rosemont Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 644-3911




Re: amrecover with 2.6.0p2

2008-09-16 Thread Matt Burkhardt

On Mon, 2008-09-15 at 19:34 -0400, Dustin J. Mitchell wrote:

 On Mon, Sep 15, 2008 at 4:55 PM, Matt Burkhardt [EMAIL PROTECTED] wrote:
  I'm trying to run amrecover to get back some files, and I'm getting
 
  amrecover: error while loading shared libraries: libamclient-2.6.0p1.so:
  cannot open shared object file: No such file or directory
 
  and the only files I have on my box are the p2.so files - so do I need to
  recompile, reuse, something?
 
 Sounds like a recompile -- at least of amrecover.
 
 Dustin
 

I used the Ubuntu deb package at 

http://www.zmanda.com/download-amanda.php 

Not knowing a lot about packages, do I have to remove it with the
package manager, then download source and compile?

Thanks,


Matt Burkhardt, MSTM
President
Impari Systems, Inc.
401 Rosemont Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 644-3911




amrecover with 2.6.0p2

2008-09-15 Thread Matt Burkhardt
I'm trying to run amrecover to get back some files, and I'm getting

amrecover: error while loading shared libraries: libamclient-2.6.0p1.so:
cannot open shared object file: No such file or directory

and the only files I have on my box are the p2.so files - so do I need
to recompile, reuse, something?

Thanks,

Matt Burkhardt, MSTM
President
Impari Systems, Inc.
401 Rosemont Avenue
Frederick, MD  21701
[EMAIL PROTECTED]
www.imparisystems.com
(301) 644-3911