Re: [Amanda-users] amzfs-sendrecv - restore how?

2015-08-07 Thread Gary Cowell
Doing

add .
extract

worked!

Had to rename my zfs filesystem in the non-global zone first, but it did
work. Thank you for the pointer.



On 5 August 2015 at 12:55, Jean-Louis Martineau martin...@zmanda.com
wrote:

 On 31/07/15 02:51 AM, harpingon wrote:

 Hello

 This is probably a stupid question, but I'm not finding the answer by
 searching

 I'm backing up a SPARC/Solaris 11 server with amzfs-sendrecv via pfexec ,
 and I do get my dumps okay. They show up on the amanda server tape files as
 ZFS shapshot (big-endian machine), version 145, type: ZFS,  when I run
 'file' on them.

 Testing restores though, how do you restore one of these?

 amrecover seems not helpful, there are no indexes, and you can't do file
 based restore anyway.

 The index must contain a single /.


 Searching seems to reveal a 'restore' command in amrecover for this, but
 I don't have that option.

 In the 3.3.0 release notes, I see:

 implement restore command in amzfs-sendrecv, it can be use with
 amrecover

 Like any restore using amrecover:
 add .
 extract

 Jean-Louis



Re: [Amanda-users] amzfs-sendrecv - restore how?

2015-08-05 Thread Jean-Louis Martineau

On 31/07/15 02:51 AM, harpingon wrote:

Hello

This is probably a stupid question, but I'm not finding the answer by searching

I'm backing up a SPARC/Solaris 11 server with amzfs-sendrecv via pfexec , and I do get my 
dumps okay. They show up on the amanda server tape files as ZFS shapshot 
(big-endian machine), version 145, type: ZFS,  when I run 'file' on them.

Testing restores though, how do you restore one of these?

amrecover seems not helpful, there are no indexes, and you can't do file based 
restore anyway.

The index must contain a single /.


Searching seems to reveal a 'restore' command in amrecover for this, but I 
don't have that option.

In the 3.3.0 release notes, I see:

implement restore command in amzfs-sendrecv, it can be use with amrecover

Like any restore using amrecover:
add .
extract

Jean-Louis
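
A complete amrecover session for one of these dumps, following the commands
above, might look like the sketch below. The config, server and disk names are
placeholders; the actual restore of the extracted stream is handled by the
amzfs-sendrecv application's restore command (the feature mentioned in the
3.3.0 release notes):

    # amrecover CONFIG -s amanda.example.com -t amanda.example.com
    amrecover> sethost sparchost.example.com
    amrecover> setdisk rpool/export/home
    amrecover> add .
    amrecover> extract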


[Amanda-users] amzfs-sendrecv - restore how?

2015-08-04 Thread harpingon
Hello

This is probably a stupid question, but I'm not finding the answer by searching 

I'm backing up a SPARC/Solaris 11 server with amzfs-sendrecv via pfexec , and I 
do get my dumps okay. They show up on the amanda server tape files as ZFS 
shapshot (big-endian machine), version 145, type: ZFS,  when I run 'file' on 
them.

Testing restores though, how do you restore one of these?

amrecover seems not helpful, there are no indexes, and you can't do file based 
restore anyway.

Searching seems to reveal a 'restore' command in amrecover for this, but I 
don't have that option. 

In the 3.3.0 release notes, I see:

implement restore command in amzfs-sendrecv, it can be use with amrecover

But how?

I'm using Amanda 3.3.6 on the server side (linux) , and on the SPARC/Solaris 11 
side.

Thanks for any guidance





Re: [Amanda-users] amzfs-sendrecv - restore how?

2015-08-04 Thread Jon LaBadie
On Thu, Jul 30, 2015 at 11:51:29PM -0700, harpingon wrote:
 Hello
 
 This is probably a stupid question, but I'm not finding the answer by 
 searching 
 
 I'm backing up a SPARC/Solaris 11 server with amzfs-sendrecv via pfexec , and 
 I do get my dumps okay. They show up on the amanda server tape files as ZFS 
 shapshot (big-endian machine), version 145, type: ZFS,  when I run 'file' on 
 them.
 
 Testing restores though, how do you restore one of these?
 
 amrecover seems not helpful, there are no indexes, and you can't do file 
 based restore anyway.
 
 Searching seems to reveal a 'restore' command in amrecover for this, but I 
 don't have that option. 
 
 In the 3.3.0 release notes, I see:
 
 implement restore command in amzfs-sendrecv, it can be use with amrecover
 
 But how?

This is not an Amanda question, but can ZFS snapshots be mounted separately from
their source?  If so, could you mount one somewhere and use standard
tools (cp, etc.) to get the files you want/need?

jl
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)
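
For what it's worth, ZFS snapshots do not normally need separate mounting: on
Solaris they are reachable read-only under the dataset's .zfs/snapshot
directory. A rough sketch, with made-up dataset, mountpoint and file names:

    # zfs snapshot rpool/export/home@before
    # ls /export/home/.zfs/snapshot/before/
    # cp /export/home/.zfs/snapshot/before/lost.file /export/home/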


Re: [Amanda-users] Backup incremental files

2015-06-08 Thread Gerrit A. Smit :: TI

Cuttler, Brian (HEALTH) schreef op 08-06-15 om 17:18:


I see what you are saying, and while the filesystem may support a method of 
backing up the file change, gtar isn't the tool to use for recording that delta 
(zfs is not amenable to DUMP, nor would that meet the request).
I am told that with the right ZFS-mounts you can generate a binary 
stream comprising a delta.

You can dump that, with ... whatever.
The other way around: ZFS can read that binary stream.

Gerrit
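
The binary delta stream mentioned above is what zfs send -i produces, and zfs
receive reads it back in. A rough sketch, with made-up pool and snapshot names,
assuming the receiving side already holds the older snapshot:

    # zfs snapshot tank/data@mon
      ... a day later ...
    # zfs snapshot tank/data@tue
    # zfs send -i tank/data@mon tank/data@tue > /backup/data.mon-tue.zstream
    # zfs receive tank/restore < /backup/data.mon-tue.zstream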


[Amanda-users] Backup incremental files/ Barcode band-autoloader

2015-06-08 Thread Taz_Tasmania
Hello,

I have two questions about Amanda.

I have installed Debian 7. The packaged version of Amanda is 3.3.1.

1.)
Can Amanda work with a barcode tape autoloader?

2.)
I have a big file (25 GB), and each day the file grows a bit longer at the end.
Can Amanda back up only the difference in this file, rather than the complete
file, on an incremental run?

Timo





Re: [Amanda-users] Backup incremental files

2015-06-08 Thread Gerrit A. Smit :: TI

Op 08-06-15 om 16:20 schreef Taz_Tasmania:


2.)
I have a big file (25 GB), and each day the file grows a bit longer at the end.
Can Amanda back up only the difference in this file, rather than the complete
file, on an incremental run?


AFAIK Amanda does things at the file-system level, not at the file level.

Gerrit


Re: [Amanda-users] Backup incremental files/ Barcode band-autoloader

2015-06-08 Thread Gerrit A. Smit :: TI

Op 08-06-15 om 16:20 schreef Taz_Tasmania:


1.)
Can Amanda work with a barcode tape autoloader?

https://forums.zmanda.com/showthread.php?2224-How-amanda-interact-with-barcodes



Re: [Amanda-users] Backup incremental files

2015-06-08 Thread Gerrit A. Smit :: TI

Op 08-06-15 om 17:01 schreef Cuttler, Brian (HEALTH):


Amanda manages DLEs (DiskList Entries) at the file system level; the tools used
to back up the file systems are all native to the OS (of the amanda client)
where the file system lives. The only tools I've seen used for backup are GTAR,
STAR and Dump, which is not to say that you couldn't possibly roll your own, but
it isn't something that I've seen discussed on the list before.



I think the only thing that would help is using ZFS or Btrfs or anything else
that supports snapshots.


Gerrit


AW: [Amanda-users] Attempt to use amrecover over ssh from client fails

2015-01-29 Thread Schoepflin, Markus
Comparing your setup with my notes, I noticed that I also do the
following on the *server*:

# sudo -u backup-user ssh client

And then abort after accepting the host key. Did you do this as well?

Otherwise try to run amrecover -o debug_auth=1 on the client and check
the log files. You should now see the SSH call being performed.

HTH, Markus



[Amanda-users] Attempt to use amrecover over ssh from client fails

2015-01-28 Thread jscarville
Amanda has been happily backing up a half dozen Linux machines for a week now 
and I can even restore files using amrecover on the server. That part was easy.

Using the instructions I found on zmanda wiki -- with appropriate adjustments 
for my environment -- I tried to get amrecover over ssh working on the clients 
with no success.

I created a key pair on the client.

I added the public key of the above pair to /var/amanda/.ssh/authorized_keys on 
the server

As a test, I can log onto the server from the client as amandabackup:

$ ssh -i /var/amanda/.ssh/amrecover_key scadev02.lereta.com

and as root:

$ sudo ssh -i /var/amanda/.ssh/amrecover_key amandabac...@scadev02.lereta.com

but running amrecover as root I get a Host key verification failed:

$ sudo amrecover
Host key verification failed.
AMRECOVER Version 2.6.1p2. Contacting server on scadev02.lereta.com ...
[request failed: EOF on read from scadev02.lereta.net]

Just to be certain this is not just an incompatibility between 2.6.1 (on 
client) and 3.3.6 (on the server), I tried the same steps on a machine running 
client version 3.3.6 with the same results.

$ sudo amrecover
Host key verification failed.
AMRECOVER Version 3.3.6. Contacting server on scadev02.lereta.com ...
[request failed: EOF on read from scadev02.lereta.net]

Server version: 3.3.6

Client version: 2.6.1p2

/etc/amanda/amanda-client.conf

conf lereta
index_server scadev02.lereta.com
tape_server scadev02.lereta.com
tapedev 
auth ssh
ssh_keys /var/amanda/.ssh/amrecover_key
client_username amandabackup

(I also noticed that the underscore was replaced by a dash between 2.6.1 and 
3.3.6)

/var/log/amanda/client/amrecover.20150116145809.debug contains:

1421449089.660567: amrecover: pid 7195 ruid 0 euid 0 version 2.6.1p2: start at 
Fri Jan 16 14:58:09 2015
1421449089.661031: amrecover: pid 7195 ruid 0 euid 0 version 2.6.1p2: rename at 
Fri Jan 16 14:58:09 2015
1421449089.661075: amrecover: security_getdriver(name=ssh) returns 
0x7ff5c720f2e0
1421449089.661094: amrecover: security_handleinit(handle=0x7ff5c9929a30, 
driver=0x7ff5c720f2e0 (SSH))
1421449089.661685: amrecover: security_streaminit(stream=0x7ff5c99315a0, 
driver=0x7ff5c720f2e0 (SSH))
1421449089.719622: amrecover: security_stream_seterr(0x7ff5c99315a0, SOCKET_EOF)
1421449089.719657: amrecover: security_seterror(handle=0x7ff5c9929a30, 
driver=0x7ff5c720f2e0 (SSH) error=EOF on read from scadev02.lereta.net)
1421449089.719671: amrecover: security_close(handle=0x7ff5c9929a30, 
driver=0x7ff5c720f2e0 (SSH))
1421449089.719677: amrecover: security_stream_close(0x7ff5c99315a0)

I do not see what I am overlooking. It makes no sense that I can connect as 
amandabackup but, assuming that amrecover connects as the user in 
client-username, amrecover cannot.

Any suggestions as to where to go from here?





[Amanda-users] any way to Track amvault progress?

2015-01-28 Thread frank chow
I am testing running amvault to external storage. Is there any way to show
the vaulting progress?





[Amanda-users] Strange Problem: Failures if one Windows system down

2014-07-18 Thread nschlia
Hi,

I am using 3.3.5, but this does not help. PC1 was down, but this caused PC2's
backup to fail partially. In other cases, both PC2 and PC3 failed. I don't see a
reason why, as they are up and running.


FAILURE DUMP SUMMARY:
  planner: ERROR Request to PC1.mynet failed: No route to host
  PC1.mynet WakeOnLan_C lev 0  FAILED [Request to PC1.mynet failed: No route to 
host]
  PC2.mynet WakeOnLan_C lev 1  FAILED [data timeout]
  PC2.mynet WakeOnLan_C lev 1  partial taper: successfully taped a partial dump
  PC2.mynet WakeOnLan_C lev 1  FAILED [data timeout]
  PC2.mynet WakeOnLan_C lev 1  partial taper: successfully taped a partial dump

   
                                    DUMPER STATS                TAPER STATS
HOSTNAME   DISK         L  ORIG-GB  OUT-GB  COMP%  MMM:SS    KB/s   MMM:SS    KB/s
---------- -----------  -  -------  ------  -----  ------  ------   ------  ------
PC1.mynet  WakeOnLan_C     FAILED
PC2.mynet  WakeOnLan_C  1        0      --  PARTIAL  33:58   45.7   PARTIAL
PC3.mynet  WakeOnLan    1        5       3   62.4    12:07  4744.9   12:06  4753.2

(brought to you by Amanda version 3.3.5)





[Amanda-users] Online PDF generator for LTO Ultrium barcode tape labels

2014-03-24 Thread Donna111
Hi there
You said that this one is something like a barcode add-in for PDF
(http://www.rasteredge.com/how-to/csharp-imaging/pdf-barcode-creating/); I don't
think so. There is an online PDF generator for LTO Ultrium barcode tape
labels. The barcode Excel plug-in
(http://www.rasteredge.com/how-to/csharp-imaging/excel-barcode-creating/) you
have mentioned is not a related one.





Re: [Amanda-users] Amanda Backup ssh_security could not find canonical name for

2013-10-09 Thread Chris Hoogendyk
We don't use wins or winbind on our network and recommend that windows users not try to use it. So, 
I'm not familiar with any details. However, google turns up 
http://www.samba.org/samba/docs/man/manpages-3/winbindd.8.html, which indicates the use of both in 
nsswitch.conf.


We run the smbd portion of samba, but not the nmbd portion on our Sun servers. However, as we move 
into Ubuntu, I've let the Ubuntu packagers have a bit more say so that I can rely on aptitude 
upgrades and not worry any more than necessary about custom local builds or configurations. I've 
managed to figure out a few interesting things about upstart and apparmor so that I can, for 
example, run multiple instances of mysql on different ports; but, with very few exceptions, I have 
taken package software rather than building my own. I did build Amanda, because that is what I am 
used to doing, and I want the freedom of choosing the latest Amanda. But, I'm straying from the 
point - on Ubuntu I have nmbd running. I should look at the local config and see what I can dial 
back to cut/control noise on the network.



On 10/8/13 6:18 PM, Jon LaBadie wrote:

On Tue, Oct 08, 2013 at 03:53:18PM -0400, Chris Hoogendyk wrote:

So, two machines that want to talk to one another (e.g. amanda
server and amanda client) need to know how to address one another.
If you don't have DNS within your private network, and you don't
have fixed IPs assigned (that you know) within the DHCP server, then
it seems to me you are really creating difficulties. Even on my home
network, I set fixed IPs so that I can do things like ssh and rsync
from one Mac to another. Without that, you're dependent on network
chatter and some vendor's auto discovery mechanism. But that's not a
protocol that's going to work with Amanda. You might jerry-rig a
complicated method for auto-discovery and transmitting information
to the Amanda server that gets put into /etc/hosts; but, then, why
not just implement fixed IPs and/or DNS. It seems like the more
traditional and well documented solution.

I use static IP's at home also.  Also my internet router is my DHCP
server and I associate each static IP with its MAC address to ensure
the server does not give out that IP to another host.  (that also
lets me use DHCP and get the same IP)

Samba on the Ubuntu amanda server could act as a WINS server.  But
I don't see anything in docs for the name service switch (nsswitch.conf)
that say it can use a WINS server.  Did I miss it?

Jon


On 10/8/13 3:04 PM, Jon LaBadie wrote:

On Tue, Oct 08, 2013 at 01:22:49PM -0400, Chris Hoogendyk wrote:

Just for reference, if you are running in a private network without
dns lookup, then you should put all the machines you want to backup
into /etc/hosts. That's what I had to do when my Amanda server was
on our private net and had no public address.

Can /etc/hosts be automatically updated if clients get their
IP addresses dynamically with DHCP?

Jon

That doesn't mean you don't have other issues that have to be dealt
with. There is a general trouble shooting page for possible issues
that result in selfcheck request failed --
http://wiki.zmanda.com/index.php/Selfcheck_request_failed.


On 10/8/13 5:08 AM, jefflau wrote:

Dear All,
I was learning on Amanda backup and facing issue for below. I planning using it 
in the workgroup without dns server.

I was Using Ubuntu 12 and installed it by using apt-get, by searching many of 
the issue resolved. Till this stage I do a month can't resolved it.

Hope someone able help me as soon as can


Amanda Backup Client Hosts Check

WARNING: backup: selfcheck request failed: ssh_security could not find 
canonical name for 'backup': Name or service not known
Client check: 1 host checked in 20.581 seconds.  1 problem found.





--
---

Chris Hoogendyk

-
O__   Systems Administrator
   c/ /'_ --- Biology  Geology Departments
  (*) \(*) -- 347 Morrill Science Center
~~ - University of Massachusetts, Amherst

hoogen...@bio.umass.edu

---

Erdös 4

End of included message 


--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology  Geology Departments
 (*) \(*) -- 347 Morrill Science Center
~~ - University of Massachusetts, Amherst

hoogen...@bio.umass.edu

---

Erdös 4



[Amanda-users] Amanda Backup ssh_security could not find canonical name for

2013-10-08 Thread jefflau
Dear All,
I am learning Amanda backup and am facing the issue below. I plan to use it
in a workgroup without a DNS server.

I am using Ubuntu 12 and installed Amanda with apt-get; by searching I resolved
many of the issues, but after a month I still cannot resolve this one.

I hope someone is able to help me as soon as possible.


Amanda Backup Client Hosts Check

WARNING: backup: selfcheck request failed: ssh_security could not find 
canonical name for 'backup': Name or service not known
Client check: 1 host checked in 20.581 seconds.  1 problem found.





Re: [Amanda-users] Amanda Backup ssh_security could not find canonical name for

2013-10-08 Thread Debra S Baddorf
Can you send the amanda.conf file? We need a little more information here.
[ If you are already discussing this with somebody, then never mind! ]
Deb


On Oct 8, 2013, at 4:08 AM, jefflau amanda-for...@backupcentral.com
 wrote:

 Dear All,
 I was learning on Amanda backup and facing issue for below. I planning using 
 it in the workgroup without dns server. 
 
 I was Using Ubuntu 12 and installed it by using apt-get, by searching many of 
 the issue resolved. Till this stage I do a month can't resolved it.
 
 Hope someone able help me as soon as can
 
 
 Amanda Backup Client Hosts Check
 
 WARNING: backup: selfcheck request failed: ssh_security could not find 
 canonical name for 'backup': Name or service not known
 Client check: 1 host checked in 20.581 seconds.  1 problem found.
 
 
 




Re: [Amanda-users] Amanda Backup ssh_security could not find canonical name for

2013-10-08 Thread Jon LaBadie
On Tue, Oct 08, 2013 at 01:22:49PM -0400, Chris Hoogendyk wrote:
 Just for reference, if you are running in a private network without
 dns lookup, then you should put all the machines you want to backup
 into /etc/hosts. That's what I had to do when my Amanda server was
 on our private net and had no public address.

Can /etc/hosts be automatically updated if clients get their
IP addresses dynamically with DHCP?

Jon
 
 That doesn't mean you don't have other issues that have to be dealt
 with. There is a general trouble shooting page for possible issues
 that result in selfcheck request failed --
 http://wiki.zmanda.com/index.php/Selfcheck_request_failed.
 
 
 On 10/8/13 5:08 AM, jefflau wrote:
 Dear All,
 I was learning on Amanda backup and facing issue for below. I planning using 
 it in the workgroup without dns server.
 
 I was Using Ubuntu 12 and installed it by using apt-get, by searching many 
 of the issue resolved. Till this stage I do a month can't resolved it.
 
 Hope someone able help me as soon as can
 
 
 Amanda Backup Client Hosts Check
 
 WARNING: backup: selfcheck request failed: ssh_security could not find 
 canonical name for 'backup': Name or service not known
 Client check: 1 host checked in 20.581 seconds.  1 problem found.
 
 
 
 
 -- 
 ---
 
 Chris Hoogendyk
 
 -
O__   Systems Administrator
   c/ /'_ --- Biology  Geology Departments
  (*) \(*) -- 347 Morrill Science Center
 ~~ - University of Massachusetts, Amherst
 
 hoogen...@bio.umass.edu
 
 ---
 
 Erdös 4
 End of included message 

-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (609) 477-8330 (C)


Re: [Amanda-users] Amanda Backup ssh_security could not find canonical name for

2013-10-08 Thread Jon LaBadie
On Tue, Oct 08, 2013 at 03:53:18PM -0400, Chris Hoogendyk wrote:
 So, two machines that want to talk to one another (e.g. amanda
 server and amanda client) need to know how to address one another.
 If you don't have DNS within your private network, and you don't
 have fixed IPs assigned (that you know) within the DHCP server, then
 it seems to me you are really creating difficulties. Even on my home
 network, I set fixed IPs so that I can do things like ssh and rsync
 from one Mac to another. Without that, you're dependent on network
 chatter and some vendor's auto discovery mechanism. But that's not a
 protocol that's going to work with Amanda. You might jerry-rig a
 complicated method for auto-discovery and transmitting information
 to the Amanda server that gets put into /etc/hosts; but, then, why
 not just implement fixed IPs and/or DNS. It seems like the more
 traditional and well documented solution.

I use static IP's at home also.  Also my internet router is my DHCP
server and I associate each static IP with its MAC address to ensure
the server does not give out that IP to another host.  (that also
lets me use DHCP and get the same IP)

Samba on the Ubuntu amanda server could act as a WINS server.  But
I don't see anything in docs for the name service switch (nsswitch.conf)
that say it can use a WINS server.  Did I miss it?

Jon
 
 
 On 10/8/13 3:04 PM, Jon LaBadie wrote:
 On Tue, Oct 08, 2013 at 01:22:49PM -0400, Chris Hoogendyk wrote:
 Just for reference, if you are running in a private network without
 dns lookup, then you should put all the machines you want to backup
 into /etc/hosts. That's what I had to do when my Amanda server was
 on our private net and had no public address.
 Can /etc/hosts be automatically updated if clients get their
 IP addresses dynamically with DHCP?
 
 Jon
 That doesn't mean you don't have other issues that have to be dealt
 with. There is a general trouble shooting page for possible issues
 that result in selfcheck request failed --
 http://wiki.zmanda.com/index.php/Selfcheck_request_failed.
 
 
 On 10/8/13 5:08 AM, jefflau wrote:
 Dear All,
 I was learning on Amanda backup and facing issue for below. I planning 
 using it in the workgroup without dns server.
 
 I was Using Ubuntu 12 and installed it by using apt-get, by searching many 
 of the issue resolved. Till this stage I do a month can't resolved it.
 
 Hope someone able help me as soon as can
 
 
 Amanda Backup Client Hosts Check
 
 WARNING: backup: selfcheck request failed: ssh_security could not find 
 canonical name for 'backup': Name or service not known
 Client check: 1 host checked in 20.581 seconds.  1 problem found.
 
 
 
 
 
 -- 
 ---
 
 Chris Hoogendyk
 
 -
O__   Systems Administrator
   c/ /'_ --- Biology  Geology Departments
  (*) \(*) -- 347 Morrill Science Center
 ~~ - University of Massachusetts, Amherst
 
 hoogen...@bio.umass.edu
 
 ---
 
 Erdös 4
 End of included message 

-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (609) 477-8330 (C)


Re: [Amanda-users] Backup between restricted networks [server and client not in

2013-03-01 Thread Charles Curley
On Thu, 28 Feb 2013 21:21:49 -0800
mohit amanda-for...@backupcentral.com wrote:

 We have only an ssh connection to the clients, and after reading through the
 documentation and the forum, I understand that amanda uses a port range
 and port 10080 to take backups, which is not possible in this case.

You can run Amanda over SSH.
http://wiki.zmanda.com/index.php/How_To:Set_up_transport_encryption_with_SSH

You didn't say why you can't use the normal Amanda port range. If
necessary you can change Amanda's ports by recompiling. However, Amanda
over SSH should bypass the whole issue. SSH operates over its normal
port, 22, and Amanda tunnels through that.

-- 

Charles Curley  /\ASCII Ribbon Campaign
Looking for fine software   \ /Respect for open standards
and/or writing?  X No HTML/RTF in email
http://www.charlescurley.com/ \No M$ Word docs in email

Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB
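
A sketch of what the SSH side can look like in amanda.conf, per the wiki page
above; the dumptype name and key path are placeholders, and a base dumptype
called global is assumed, as in the example config:

    define dumptype ssh-client-tar {
        global
        program "GNUTAR"
        auth "ssh"
        ssh-keys "/var/lib/amanda/.ssh/id_rsa_amdump"
        compress client fast
    }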


[Amanda-users] Help with a custom LVM snapshot script

2013-02-08 Thread neal lawson
Here is what I have so far, but I welcome any suggestions or help:

https://github.com/neallawson/Amanda-LVM-snapshot





Re: [Amanda-users] Help with a custom LVM snapshot script

2013-02-08 Thread Jon LaBadie
On Fri, Feb 08, 2013 at 11:30:45AM -0800, neal lawson wrote:
 here is what i have so far, but i welcome any suggestions or help..
 
 https://github.com/neallawson/Amanda-LVM-snapshot
 
Glad to see someone working on this important piece.

I question if ruby is a good choice of languages as none of
the rest of the amanda code uses it.  Users would have an
additional requirement of installing ruby support and the
code maintainers would have to be conversant in another
language.

jl
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (609) 477-8330 (C)


[Amanda-users] Help with a custom LVM snapshot script

2013-02-08 Thread neal lawson
I chose Ruby since it's a stable, modern language I know, and most Linux
distributions already ship with 1.8.7.





[Amanda-users] Help with a custom LVM snapshot script

2013-02-08 Thread neal lawson
I chose Ruby since it's a stable, modern language I know, and most Linux
distributions already ship with 1.8.7. Furthermore, Amanda should be agnostic
to the language that the helper scripts use. If Amanda requires me to use Perl,
it might be time to find another backup solution.





Re: [Amanda-users] Help with a custom LVM snapshot script

2013-02-08 Thread Markus Iturriaga Woelfel

On Feb 8, 2013, at 5:21 PM, neal lawson amanda-for...@backupcentral.com wrote:

 i chose ruby since its a stable modern language i know, and most linux 
 distributions already ship with 1.8.7, further more Amanda should be agnostic 
 to the language that the helper scripts use. If Amanda requires i use perl, 
 it might be time to find another backup solution.



Amanda is certainly agnostic as far as helper scripts/Applications are 
concerned. The only advantage to using Perl would be that you can use the 
included Amanda API, avoiding re-coding some functions and having an easier 
interaction with the rest of the amanda system. If that's not an issue, then 
Ruby should do fine. 

 Markus Iturriaga Woelfel, it seems the repo you referenced no longer exists.


https://github.com/marxarelli/amlvm-snapshot still works for me. 

I'm all for a ground-up rewrite of the LVM script though. I don't know if I can 
contribute much in Ruby, but I'd be interested in seeing design decisions 
discussed. I don't know if this is the right forum - depends on the Amanda.org 
folks, I guess. I wrote the snapshot scripts we use for BackupPC (which we also 
use for backups) in Perl, so I have some experience writing scripts that 
interact with LVM. I can't promise they're elegant though. I've never made them 
publicly available mostly because they are hacks, but they do work. :)

Markus

---
Markus A. Iturriaga Woelfel, IT Administrator
Department of Electrical Engineering and Computer Science
University of Tennessee
Min H. Kao Building, Suite 424 / 1520 Middle Drive
Knoxville, TN 37996-2250
mitur...@eecs.utk.edu / (865) 974-3837
http://twitter.com/UTKEECSIT










Re: [Amanda-users] Help with a custom LVM snapshot script

2013-02-01 Thread Bob Vickers

On Thu, 31 Jan 2013, Stefan G. Weichinger wrote:


I would also like to see improved scripts, I have the issue at one
client that the LVM-snapshots aren't correctly removed sometimes ...
that leads to non-working backups and dozens of lurking snapshots ...



I had a lot of problems with LVM snapshots that would not go away, so I 
wrote a script, deleteLV, that repeatedly waits a little while and then tries 
again. This has worked fine, though I have a suspicion that SuSE fixed the 
bug I was working around at exactly the same time as I installed my script!


The script deleteLV is appended to this message in case anyone finds it 
useful.


Bob Vickers

#! /bin/bash

# Delete an LVM logical volume (typically used for a snapshot that won't go
# away). See
# https://bugzilla.novell.com/show_bug.cgi?id=642296

# Author: Bob Vickers, Royal Holloway University of London

set -o nounset
export PATH=/sbin:/usr/sbin:/bin:/usr/bin

usage="\
 deleteLV  [-n maxTries] [-t timetowait] LVdevice...
-n max number of tries (default 10)
-t time to wait after sync (default 10 seconds)"

#
#   Parse the command options
declare -i maxtries=10 delay=10
ERRFLAG=OK
OPTIND=1
while getopts n:t: OPTION
do
    case $OPTION in
    n)  maxtries=$OPTARG;;
    t)  delay=$OPTARG;;
    \?) ERRFLAG=BAD;;
    esac
done
shift `expr $OPTIND - 1`
#
# Exit with a usage message if the syntax is wrong.
#
if [ "$ERRFLAG" != OK -o $# -eq 0 ]
then
    echo 1>&2 "$0: usage:"
    echo 1>&2 "$usage"
    exit 1
fi

declare -i ntries

for lv in "$@"
do
    if lvs $lv >/dev/null
    then
        # We have a valid LVM logical volume, so repeatedly try to delete it.
        ntries=0
        while (( $ntries < $maxtries ))
        do
            ntries=$(( $ntries + 1 ))
            echo 1>&2 "$0: Removing $lv attempt $ntries"
            udevadm settle
            sync
            sleep $delay
            lvremove -f $lv && break
        done
        if lvs $lv >/dev/null 2>/dev/null
        then
            echo 1>&2 "$0: FAILED to remove $lv after $ntries attempts"
        else
            echo 1>&2 "$0: Successfully removed $lv after $ntries attempt(s)"
        fi
    else
        echo 1>&2 "$0: $lv is not an LVM logical volume"
    fi
done
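
A typical invocation, with a made-up snapshot device path, would be:

    deleteLV -n 20 -t 5 /dev/vg00/amanda-snap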



Re: [Amanda-users] Help with a custom LVM snapshot script

2013-02-01 Thread Stefan G. Weichinger
Am 2013-02-01 10:56, schrieb Bob Vickers:
 On Thu, 31 Jan 2013, Stefan G. Weichinger wrote:

 I would also like to see improved scripts, I have the issue at one
 client that the LVM-snapshots aren't correctly removed sometimes ...
 that leads to non-working backups and dozens of lurking snapshots ...

 
 I had a lot of problems with LVM snapshots that would not go away, so I
 wrote a script deleteLV that repeatedly waits a little while then tries
 again. This has worked fine, though i have a suspicion that SuSE fixed
 the bug I was working round at exactly the same time as I installed my
 script!
 
 The script deleteLV is appended to this message in case anyone finds it
 useful.

Thanks for sharing, Bob!

I saw these issues with Gentoo Linux as well, so it might not be
Suse-specific. The server in question has recently been upgraded to a
more recent kernel and newer lvm2-userspace tools, I haven't seen the
non-deleted LV-snapshots since then.

I will take a look at your script and how I could fit it into my scheme,
at first glance I think I have to figure out how to select the snapshot
as it changes name every day. But that shouldn't be that hard ;-)

Thanks, Stefan



Re: [Amanda-users] Help with a custom LVM snapshot script

2013-01-31 Thread Stefan G. Weichinger
Am 28.01.2013 17:48, schrieb Markus Iturriaga Woelfel:

 I would, however, be interested in seeing an
 effort to rewrite this from the ground up as there are certainly a
 lot of things that could be improved/fixed.

I would also like to see improved scripts, I have the issue at one
client that the LVM-snapshots aren't correctly removed sometimes ...
that leads to non-working backups and dozens of lurking snapshots ...

Stefan


Re: [Amanda-users] Help with a custom LVM snapshot script

2013-01-28 Thread Jean-Louis Martineau

On 01/26/2013 01:33 PM, neal lawson wrote:

I've been working on a custom LVM snapshot script. I have the bulk of the
script working, but I'm not sure how to tell amanda to change the backup
location to the snapshot location. Any thoughts would be helpful.

The script must print the directory property on stdout:
PROPERTY directory /path/to/mount-point


(FYI: I did try the existing scripts that can be found on the internet, but
they seem to be abandoned and no longer functional.)

I would prefer if you can enhance that script instead of writing a new one.

Jean-Louis
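
A minimal sketch of that part of such a script, with a hypothetical volume
group and mount point; after creating and mounting the snapshot it only has to
print the property line described above on stdout:

    # create and mount the snapshot (names are made up)
    lvcreate -s -L 1G -n amsnap /dev/vg00/data
    mount -o ro /dev/vg00/amsnap /mnt/amsnap
    # tell amanda to back up from the snapshot mount point
    echo "PROPERTY directory /mnt/amsnap"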


[Amanda-users] Help with a custom LVM snapshot script

2013-01-28 Thread neal lawson
Markus Iturriaga Woelfel, it seems the repo you referenced no longer exists.

Jean-Louis Martineau, the current scripts seem to be a bit of a disaster, and I
did try to fix them. Currently I have a Ruby-based script that is 99% there;
I'm just missing the last bit to get amgtar to back up the new location.





[Amanda-users] Help with a custom LVM snapshot script

2013-01-28 Thread neal lawson
So the correct link to the repo is https://github.com/marxarelli/amlvm-snapshot;
the one posted had a trailing period. There are several versions of this on GitHub,
but this project seems to be abandoned, as it was last updated over 2 years ago.





Re: [Amanda-users] Help with a custom LVM snapshot script

2013-01-28 Thread Markus Iturriaga Woelfel

On Jan 28, 2013, at 11:23 AM, neal lawson amanda-for...@backupcentral.com 
wrote:

 so the correct link to the repo is 
 https://github.com/marxarelli/amlvm-snapshot  the one posted had a trailing 
 ., there are sever versions of this on github, this project seems to be 
 abandoned as it was last updated over 2 years go


Sorry, I think that was me ending a sentence out of sheer automatism. We 
patched the script to work with CentOS/RedHat 6, our main Linux distribution, 
and it works fine if what you're backing up is the entire logical volume. If 
you're only backing up a subdirectory, there may be problems. I would, however, 
be interested in seeing an effort to rewrite this from the ground up as there 
are certainly a lot of things that could be improved/fixed. We also wound up 
creating our own amanda RPMs since the ones distributed by ZManda place 
files in /usr/local, which conflicts with the way we manage our systems. 
However, it all seems to work together well. I back up 103 DLEs using the 
amlvm-snapshot script.

Markus
---
Markus A. Iturriaga Woelfel, IT Administrator
Department of Electrical Engineering and Computer Science
University of Tennessee
Min H. Kao Building, Suite 424 / 1520 Middle Drive
Knoxville, TN 37996-2250
mitur...@eecs.utk.edu / (865) 974-3837
http://twitter.com/UTKEECSIT










[Amanda-users] Error message: configuration keyword expected

2013-01-10 Thread bulut
Hi everyone, I resolved the problem.

I reinstalled the system without Turkish characters etc., and then reinstalled
Amanda 3.3.x.
Now everything works well.

Note: I use the same amanda.conf config file without changing it.





[Amanda-users] Error message: configuration keyword expected

2013-01-08 Thread bulut
hi everyone,

I installed Amanda-3.3.2 on debian (32 bit) from source code. 

./configure --with-user=amanda --with-group=backup --with-rsh-security 
--with-ssh-security --with-smbclient --with-bsdtcp-security --disable-nls

Everything goes fine, but I can't run amlabel, amdump or amcheck with the -t 
parameter. 

amanda@bulut:/usr/local/etc/amanda/SERVERS$ amcheck  -t SERVERS
Amanda Tape Server Host Check
-
 Current slot not loaded
Taper scan algorithm did not find an acceptable volume.
(expecting a new volume)
ERROR: No acceptable volumes found
Server check took 2.141 seconds

(brought to you by Amanda 3.3.2)


amanda@bulut:/usr/local/etc/amanda/SERVERS$ amlabel -f SERVERS Tape-02
/usr/local/etc/amanda/SERVERS/amanda.conf, line 2: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 2: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 6: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 6: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 9: FIRST, FIRSTFIT, LARGEST, 
LARGESTFIT, SMALLEST or LAST expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 11: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 11: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 21: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 21: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 22: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 27: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 27: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 29: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 29: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 31: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 31: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 35: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 35: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 36: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 38: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 38: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 52: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 52: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 60: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 60: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 61: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 62: configuration keyword 
expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 63: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 63: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 65: configuration keyword 
expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 66: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 66: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 67: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 68: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 70: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 70: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 72: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 72: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 74: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 74: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 77: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 77: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 78: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 78: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 80: configuration keyword 
expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 81: configuration keyword 
expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 82: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 82: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 83: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 84: configuration keyword 
expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 85: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 87: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 87: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 88: error: not a keyword.
/usr/local/etc/amanda/SERVERS/amanda.conf, line 88: end of line is expected
/usr/local/etc/amanda/SERVERS/amanda.conf, line 90: configuration keyword 

Re: [Amanda-users] Error message: configuration keyword expected

2013-01-08 Thread Jon LaBadie
Might your files have passed through a windows system?
I'm thinking that each line ends with the M$ style of CR/LF
but your OS is expecting only LF.
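
A quick way to check for, and strip, such line endings (the path is the one
from your output; the in-place edit assumes GNU sed):

    file /usr/local/etc/amanda/SERVERS/amanda.conf   # reports "CRLF line terminators" if affected
    sed -i 's/\r$//' /usr/local/etc/amanda/SERVERS/amanda.conf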


On Tue, Jan 08, 2013 at 02:59:11PM -0800, bulut wrote:
 hi everyone,
 
 I installed Amanda-3.3.2 on debian (32 bit) from source code. 
 
 ./configure --with-user=amanda --with-group=backup --with-rsh-security 
 --with-ssh-security --with-smbclient --with-bsdtcp-security --disable-nls
 
 Everytihnks goes fine but I can't run amlabel, amdump or amcheck with -t 
 parameter. 
 
 amanda@bulut:/usr/local/etc/amanda/SERVERS$ amcheck  -t SERVERS
 Amanda Tape Server Host Check
 -
  Current slot not loaded
 Taper scan algorithm did not find an acceptable volume.
 (expecting a new volume)
 ERROR: No acceptable volumes found
 Server check took 2.141 seconds
 
 (brought to you by Amanda 3.3.2)
 
 
 amanda@bulut:/usr/local/etc/amanda/SERVERS$ amlabel -f SERVERS Tape-02
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 2: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 2: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 6: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 6: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 9: FIRST, FIRSTFIT, 
 LARGEST, LARGESTFIT, SMALLEST or LAST expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 11: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 11: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 21: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 21: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 22: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 27: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 27: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 29: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 29: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 31: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 31: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 35: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 35: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 36: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 38: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 38: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 52: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 52: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 60: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 60: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 61: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 62: configuration keyword 
 expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 63: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 63: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 65: configuration keyword 
 expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 66: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 66: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 67: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 68: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 70: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 70: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 72: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 72: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 74: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 74: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 77: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 77: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 78: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 78: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 80: configuration keyword 
 expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 81: configuration keyword 
 expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 82: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 82: end of line is expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 83: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 84: configuration keyword 
 expected
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 85: error: not a keyword.
 /usr/local/etc/amanda/SERVERS/amanda.conf, line 87: error: not a keyword.
 

Re: [Amanda-users] Help in installing Amanda Client in FreeBSD 8.0/9.0

2012-11-22 Thread Olivier Nicole
Jose,

 I'm new to Amanda. I have installed the Amanda server on CentOS, but in
 trying to install it on the FreeBSD clients I have, I am experiencing
 hitches. Any help you can accord will be highly appreciated.

I am not sure what your problems are in installing the amanda client on
FreeBSD.

Go to /usr/ports/misc/amanda-client
make
make install

That should do it.

Otherwise you need to be specific about the type of problems you are facing.

Best regards,

Olivier


[Amanda-users] Sun StoreEdge L700 with MTX

2011-03-28 Thread therm
Hello @all,

Yesterday we got a Sun StorEdge L700 with 3 LTO4 drives and 678 slots.
I can use mtx for moving the tapes from any slot to any slot, no
problem. My problem is that it complains when I want to load these tapes into the
drives.
But I see people using this autochanger on the internet, so what am I doing wrong?

Here is what I am doing:


root@amanda:~# lsscsi -g
[0:0:0:0]cd/dvd  hp   DVD RW AD-7586H  KP03  /dev/sr0  /dev/sg0
[2:0:0:0]storage HP   HSV200   6220  - /dev/sg3
[2:0:1:0]storage HP   HSV200   6220  - /dev/sg4
[2:0:3:0]mediumx STK  L700 0318  /dev/sch0 /dev/sg7 
   -
[2:0:5:0]tapeIBM  ULTRIUM-TD4  A232  /dev/st0  /dev/sg8
[2:0:6:0]tapeIBM  ULTRIUM-TD4  A232  /dev/st1  /dev/sg9
[2:0:7:0]tapeIBM  ULTRIUM-TD4  A232  /dev/st2  /dev/sg10
[3:0:0:0]storage HP   P410i3.66  - /dev/sg1
[3:0:0:1]diskHP   LOGICAL VOLUME   3.66  /dev/sda  /dev/sg2
[4:0:0:0]storage HP   HSV200   6220  - /dev/sg5
[4:0:1:0]storage HP   HSV200   6220  - /dev/sg6

root@amanda:~# mtx -f /dev/sg7 status
  Storage Changer /dev/sg7:3 Drives, 698 Slots ( 20 Import/Export )
Data Transfer Element 0:Empty
Data Transfer Element 1:Empty
Data Transfer Element 2:Empty
  Storage Element 1:Full :VolumeTag=DAY$02 
  Storage Element 2:Empty
  Storage Element 3:Empty
  Storage Element 4:Full :VolumeTag=DAY$07 
  Storage Element 5:Full :VolumeTag=DAY$02 
  Storage Element 6:Full :VolumeTag=DAY$07 
  Storage Element 7:Full :VolumeTag=DAY$00 
  Storage Element 8:Empty
  Storage Element 9:Empty
  Storage Element 10:Full :VolumeTag=DAY$07 
  Storage Element 11:Empty
  Storage Element 12:Empty
  Storage Element 13:Empty
  Storage Element 14:Empty
  Storage Element 15:Empty
  Storage Element 16:Empty
  Storage Element 17:Empty
  Storage Element 18:Empty
  Storage Element 19:Empty
  Storage Element 20:Empty
...
  Storage Element 679 IMPORT/EXPORT:Empty
  Storage Element 680 IMPORT/EXPORT:Empty
  Storage Element 681 IMPORT/EXPORT:Empty
  Storage Element 682 IMPORT/EXPORT:Empty
  Storage Element 683 IMPORT/EXPORT:Empty
  Storage Element 684 IMPORT/EXPORT:Empty
  Storage Element 685 IMPORT/EXPORT:Empty
  Storage Element 686 IMPORT/EXPORT:Empty
  Storage Element 687 IMPORT/EXPORT:Empty
  Storage Element 688 IMPORT/EXPORT:Empty
  Storage Element 689 IMPORT/EXPORT:Empty
  Storage Element 690 IMPORT/EXPORT:Empty
  Storage Element 691 IMPORT/EXPORT:Empty
  Storage Element 692 IMPORT/EXPORT:Empty
  Storage Element 693 IMPORT/EXPORT:Empty
  Storage Element 694 IMPORT/EXPORT:Empty
  Storage Element 695 IMPORT/EXPORT:Empty
  Storage Element 696 IMPORT/EXPORT:Empty
  Storage Element 697 IMPORT/EXPORT:Empty
  Storage Element 698 IMPORT/EXPORT:Empty

root@amanda:~# mtx -f /dev/sg7 transfer 4 3 (runs perfectly)

root@amanda:~# mtx -f /dev/sg7 load 3
or root@amanda:~# mtx -f /dev/sg7 load 3 0
instead will fail with the following error message:
Loading media from Storage Element 3 into drive 0...mtx: Request Sense:
Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 30
mtx: Request Sense: Additional Sense Qualifier = 00
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
MOVE MEDIUM from Element Address 1002 to 500 Failed

I phoned the seller and they said that they checked the library
with their tool and everything was OK.
Am I doing something wrong, or do you have any recommendations?

-- 
Best Regards, 
Dennis Benndorf





Re: [Amanda-users] backup analysis

2011-03-15 Thread Jon LaBadie
On Mon, Mar 14, 2011 at 06:17:41PM -0700, upengan78 wrote:
 Couple of questions:
 
...
 Does amoverview work on weeklyfull but not on monthlyfull?
 
 no amoverview doesn't work on any of above. Always E.
 
 The amoverview output you show indicated errors for 3 weeks of daily runs.
 What did the associated daily reports show? Were there errors?
 
 No errors in daily reports. Daily report was OK for all DLEs. Sometimes it 
 was STRANGE when part was retried successfully.

No, that should not cause problems.  There were some problems with amoverview
when the C source was recoded to perl.  But those problems were accompanied
by many error messages about uninitialized variables.

I assume you are using a recent release of amanda?

Jon
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] backup analysis

2011-03-15 Thread upengan78
I assume you are using a recent release of amanda? 

I am sure this is not the latest, because I went with the OpenCSW repository
available for the Solaris platform instead of compiling the amanda package myself
from source.

3.1.1 is the version of amanda on this system, as shown below.

/opt/csw/bin/pkgutil -a | grep amanda
gpg: Signature made Mon Mar 14 20:32:35 2011 CDT using DSA key ID E12E9D2F
gpg: Good signature from CSW Distribution Manager d...@opencsw.org
amanda   CSWamanda3.1.1,REV=2010.07.20 8.2 MB

Thanks for continuing the discussion and help!





[Amanda-users] backup analysis

2011-03-15 Thread upengan78
Could it be because of trying to fill tapes(vtapes in my case) to 100%?

following is configured in monthlyfull and weeklyfull amanda configuration,


flush-threshold-dumped100 
flush-threshold-scheduled 100 
taperflush100
autoflush yes





Re: [Amanda-users] backup analysis

2011-03-15 Thread Jon LaBadie
On Tue, Mar 15, 2011 at 07:23:38AM -0700, upengan78 wrote:
 Could it be because of trying to fill tapes(vtapes in my case) to 100%?
 
 following is configured in monthlyfull and weeklyfull amanda configuration,
 
 
 flush-threshold-dumped100 
 flush-threshold-scheduled 100 
 taperflush100
 autoflush yes

I wouldn't expect those settings to affect amoverview.
However, I've not played with them, leaving the first
three at the default 0.  I do set autoflush to yes.

jl
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] backup analysis

2011-03-14 Thread upengan78
Hello ,

I am wondering if there is anything in Amanda that can tell me how old the
existing backups of a DLE are, and similar useful information. I do receive the
daily AMANDA MAIL REPORT when a job finishes, but that doesn't tell me how old
the backups of a DLE available for me to recover/restore are.

How can I find this information ?





Re: [Amanda-users] backup analysis

2011-03-14 Thread Brian Cuttler

Upendra,

I use the amadmin subcommands "due" and "find" for
this type of information.
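
For example, something along these lines, using the config and DLE names that
appear later in this thread:

    # amadmin monthlyfull due client.domain.com
    # amadmin monthlyfull find client.domain.com /location/EL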

On Mon, Mar 14, 2011 at 07:11:39AM -0700, upengan78 wrote:
 Hello ,
 
 I am wondering if there is anything in Amada that can give me information on 
 how old the backup of a DLE exists in the system and similar useful 
 information. I do receive daily AMANDA MAIL REPORT when a job is finished but 
 that doesn't tell me how old the backup of DLE is available for me to 
 recover/restore.
 
 How can I find this information ?
 
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







Re: [Amanda-users] backup analysis

2011-03-14 Thread Jon LaBadie
On Mon, Mar 14, 2011 at 07:11:39AM -0700, upengan78 wrote:
 Hello ,
 
 I am wondering if there is anything in Amada that can give me information on 
 how old the backup of a DLE exists in the system and similar useful 
 information. I do receive daily AMANDA MAIL REPORT when a job is finished but 
 that doesn't tell me how old the backup of DLE is available for me to 
 recover/restore.
 
 How can I find this information ?

amoverview, or perhaps some amadmin options


-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] backup analysis

2011-03-14 Thread upengan78
Thanks Jon and Brian

I will try amadmin; I did, however, try amoverview. Do you know how to interpret
the output from amoverview?

/opt/csw/sbin/amoverview monthlyfull

 date 02 02 02 02 02 02 02 03 03 03 03 03 03 03 03 03 
03 03 03 03 03
host disk 22 23 24 25 26 27 28 01 02 03 04 05 06 07 08 09 
10 11 12 13 14

client.domain.com /location/EL  E  E  E  E  E  E  E  E  E  E  E  E  
E  E  E  E  E  E  E  E
client.domain.com /location/am  E  E  E  E  E  E  E  E  E  E  E  E  
E  E  E  E  E  E  E  E
client.domain.com /location/ageE  E  E  E  E  E  E  E  E  E  E  E  
E  E  E  E  E  E  E  E
client.domain.com /location/other/[a-m]* E  E  E  E  E  E  E EE  E  E  E  E 
EE  E  E  E  E  E  E  E
client.domain.com /location/other/[n-z]* E  E  E  E  E  E  E EE  E  E  E  E 
 E  E  E  E  E  E  E  E
client.domain.com /location/other/_rest_ E  E  E  E  E  E  E  E  E  E  E  E 
 E  E  E  E  E  E  E  E
client.domain.com /location/werE  E  E  E  E  E  E  E  E  E  E  E  
E  E  E  E  E  E  E  E
client.domain.com /location/st  E  E  E  E  E  E  E  E  E  E  E  E  
E  E  E  E  E  E  E  E
client.domain.com /location/ida   E  E  E  E  E  E  E  E  E  E  E  E  E 
 E  E  E  E  E  E  E
client.domain.com /location/soc   E EE EE EE EE EE EE EE  E EE EE EE  E  E 
EE EE EE EE EE EE  E





Re: [Amanda-users] backup analysis

2011-03-14 Thread Brian Cuttler

Upendra,

I found the amoverview man page on the web. It shows the date headers
and the DLEs down the left side, as your email indicates, but its example
shows dump levels in the grid rather than the E markings; the man page
also explains the codes for errors and for running into end of tape.

It's more complicated than that; rather than repeat it, or risk getting
it wrong in a summary, I recommend you read the man page yourself.

On Mon, Mar 14, 2011 at 08:00:48AM -0700, upengan78 wrote:
 Thanks Jon and Brian
 
 I will try amadmin, i did however try, amoverview. Do you know how to 
 interpret the o/p from amoverview?
 
 /opt/csw/sbin/amoverview monthlyfull
 
  date 02 02 02 02 02 02 02 03 03 03 03 03 03 03 03 03 
 03 03 03 03 03
 host disk 22 23 24 25 26 27 28 01 02 03 04 05 06 07 08 09 
 10 11 12 13 14
 
 client.domain.com /location/EL  E  E  E  E  E  E  E  E  E  E  E  
 E  E  E  E  E  E  E  E  E
 client.domain.com /location/am  E  E  E  E  E  E  E  E  E  E  E  
 E  E  E  E  E  E  E  E  E
 client.domain.com /location/ageE  E  E  E  E  E  E  E  E  E  E  E 
  E  E  E  E  E  E  E  E
 client.domain.com /location/other/[a-m]* E  E  E  E  E  E  E EE  E  E  E  
 E EE  E  E  E  E  E  E  E
 client.domain.com /location/other/[n-z]* E  E  E  E  E  E  E EE  E  E  E  
 E  E  E  E  E  E  E  E  E
 client.domain.com /location/other/_rest_ E  E  E  E  E  E  E  E  E  E  E  
 E  E  E  E  E  E  E  E  E
 client.domain.com /location/werE  E  E  E  E  E  E  E  E  E  E  E 
  E  E  E  E  E  E  E  E
 client.domain.com /location/st  E  E  E  E  E  E  E  E  E  E  E  
 E  E  E  E  E  E  E  E  E
 client.domain.com /location/ida   E  E  E  E  E  E  E  E  E  E  E  E  
 E  E  E  E  E  E  E  E
 client.domain.com /location/soc   E EE EE EE EE EE EE EE  E EE EE EE  E  
 E EE EE EE EE EE EE  E
 
 +--
 |This was sent by upendra.gan...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







[Amanda-users] backup analysis

2011-03-14 Thread upengan78
Thanks, I took a look at that man page just now. I tried some other 
options/switches for amoverview, and I always got E or EE, while my Amanda 
backup reports were OK most of the time for all DLEs.

Just for the sake of trying, I ran 'amadmin weeklyfull find', and guess what: 
it works a lot better. I wonder why, because amoverview is actually supposed 
to run the same command in the background, yet it shows all E or EEs while 
amadmin works great, so I will continue using amadmin.

/opt/csw/sbin/amadmin weeklyfull find 

date                host              disk lv tape or file file part status
2011-02-10 00:01:02 client.domain.com /etc  0 WF-2             2  1/1  OK
2011-02-11 00:01:02 client.domain.com /etc  1 WF-2             9  1/1  OK
2011-02-14 00:01:02 client.domain.com /etc  0 WF-3             7  1/1  OK
2011-02-15 00:01:02 client.domain.com /etc  0 WF-3            12  1/1  OK
2011-02-16 00:01:02 client.domain.com /etc  0 WF-3            21  1/1  OK
2011-02-17 00:01:02 client.domain.com /etc  0 WF-4             2  1/1  OK


So I can say the issue is resolved. Thanks for your quick reply and help - I 
appreciate it!
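If you mainly want the most recent full (level 0) dump of each DLE, a small filter over that find output works too. A rough sketch only, assuming the column order shown above (date, time, host, disk, level, ...) and the weeklyfull config:

/opt/csw/sbin/amadmin weeklyfull find | awk '$5 == "0" { print $1, $3, $4 }' | sort
# prints the date, host and disk of every level-0 dump, oldest first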

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] backup analysis

2011-03-14 Thread upengan78
Yes, it works with monthlyfull as well. It so happened that I ran amadmin on 
weeklyfull first, so I posted the weeklyfull output in my last post when I 
should actually have posted the output for monthlyfull. Sorry for the confusion.

/opt/csw/sbin/amadmin monthlyfull find

date                host              disk          lv tape or file                                                                       file part  status
2011-03-14 00:01:17 client.domain.com /etc           0 /location/apps_raid5/amanda/holdingdisk2/20110314000117/client.domain.com._etc.0      0 -1/-1 OK
2011-02-23 01:15:02 client.domain.com /location/EL   1 MF-15                                                                                 6  1/1  OK
2011-02-24 01:15:02 client.domain.com /location/EL   1 MF-16                                                                                 6  1/1  OK
2011-02-25 01:15:01 client.domain.com /location/EL   1 MF-17                                                                                 6  1/1  OK
2011-02-26 01:15:01 client.domain.com /location/EL   1 MF-18                                                                                 6  1/1  OK
2011-02-27 01:15:02 client.domain.com /location/EL   1 MF-19                                                                                 6  1/1  OK

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] backup analysis

2011-03-14 Thread Jon LaBadie
On Mon, Mar 14, 2011 at 09:10:07AM -0700, upengan78 wrote:
 Yes it works with monthlyfull as well. It so happened that I just ran amadmin 
 on weeklyfull first so I posted o/p of weeklyfull in my last post while I 
 should have actually posted o/p for monthlyfull. Sorry for creating confusion.
 
 /opt/csw/sbin/amadmin monthlyfull find
 
 datehost  disk lv tape or file
  file 
 part  status
 2011-03-14 00:01:17 client.domain.com /etc  0 
 /location/apps_raid5/amanda/holdingdisk2/20110314000117/client.domain.com._etc.0
   0 -1/-1 OK 
 2011-02-23 01:15:02 client.domain.com /location/EL   1 MF-15  
  
 6   1/1 OK 
 2011-02-24 01:15:02 client.domain.com /location/EL   1 MF-16  
  
 6   1/1 OK 
 2011-02-25 01:15:01 client.domain.com /location/EL   1 MF-17  
  
 6   1/1 OK 
 2011-02-26 01:15:01 client.domain.com /location/EL   1 MF-18  
  
 6   1/1 OK 
 2011-02-27 01:15:02 client.domain.com /location/EL   1 MF-19  
  
 6   1/1 OK
Couple of questions:

I assume that, as a test, the monthlyfull config is being run daily?

Does amoverview work on weeklyfull but not on monthlyfull?

The amoverview output you show indicated errors for 3 weeks of daily runs.
What did the associated daily reports show?  Were there errors?

Jon
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] backup analysis

2011-03-14 Thread upengan78
Couple of questions:

I assume, that as a test, the monthlyfull config is being run daily?

Yes, the monthlyfull config is being run daily.

Does amoverview work on weeklyfull but not on monthlyfull?

No, amoverview doesn't work on either of them. It always shows E.

The amoverview output you show indicated errors for 3 weeks of daily runs.
What did the associated daily reports show? Were there errors?

No errors in the daily reports. The daily report was OK for all DLEs. Sometimes 
it showed STRANGE when a part was retried successfully.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




RE: [Amanda-users] LTO5 and LTFS

2011-02-26 Thread Anthony Bouch
Hi Shaun,

I went through a similar process - initially trying to use VMWare's ESXi
server. The main issue appeared to be in correctly detecting both the
Ultrium 3000 LTO drive as well as internal SCSI disks on the P212/256 SAS
RAID controller.

Like you I also decided to revert to a native Ubuntu server install (10.10)
and the good news is that Ultrium 3000 via a P212/256 and LTFS all work
fine.

There are two things you need to do first however.

1) HP have released a new SCSI driver. It's been included in the Linux kernel
since 2.6.34, I think - but it's not enabled by default (I suspect this is
what was causing VMWare ESXi to balk as well).

For any HP server using any of the newer HP SAS RAID controllers (like the
HP P212 SmartArray) - you need to use the new HPSA SCSI driver and NOT the
previous CCISS block level driver.

If you install Ubuntu and are using Grub2 to load - then you can create a
new Grub2 menu item (which I've made the default) which contains the
following line (with your own root device).

linux /vmlinuz-2.6.35-24-server root=/dev/mapper/media01-root ro quiet
cciss.cciss_allow_hpsa=1

This will turn on the new HPSA SCSI driver, and disable the old block level
CCISS driver.

If you download and use lsscsi - you should see all of your devices
including the LTO tape drive listed with SCSI LUNs.
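A couple of quick sanity checks along those lines (generic suggestions on my part, not something from the HP document):

lsmod | grep -E 'hpsa|cciss'   # see which SmartArray driver is actually loaded
lsscsi                         # the Ultrium drive should appear with a /dev/st* node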

If you Google "A SCSI-based Linux device driver for HP Smart Array
Controllers" you'll find HP's how-to document on HPSA.

2) The next step is to download the LTFS source from HP here at
http://bit.ly/h9gXZo. Although HP and IBM state that they only officially
support Redhat and openSuse installations - it will compile fine on Ubuntu
(with all of the dependencies present of course).

The best place for documentation on LTFS, including commands for formatting
and mounting tapes, is at IBM (since LTFS is an IBM-led project, I
believe):

http://publib.boulder.ibm.com/infocenter/ltfs/cust/index.jsp

Under the 'Managing' section you'll find information for Linux and Mac OSX.

I was able to backup a large volume from a Mac OSX machine, and mount and
restore it fine to a Linux volume using LTFS.

Here are a few of the Linux LTFS commands for convenience...

#Format Media
mkltfs -d /dev/st0

#Force ReFormat
mkltfs -f -d /dev/st0

#Mounting media
$ mkdir /mnt/lto5
$ ltfs -o devname=/dev/st0 /mnt/lto5

#Checking media (ltfsck needs the drive device)
$ ltfsck /dev/st0

After that you can use the mounted volume just like any other (well almost)
via cp. So far it's been very fast - and I'm pretty close to recommending
this as our preferred format for a video archive we're creating. I'm able to
back up about 500GB per hour - which is not bad and is about what the drive
is rated for. More interestingly, I'm able to restore a file from anywhere
in the backup set in just a few minutes. 

To remove the tape, just unmount the volume (don't use the ltfs command to
offline the tape) and then eject the tape.
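In other words, roughly this (mount point taken from the example above):

umount /mnt/lto5
# then eject the cartridge from the drive itself, with whatever tape tooling you normally use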

Hope this helps...

Best,

Tony


-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org]
On Behalf Of skolo
Sent: Saturday, February 26, 2011 1:17 PM
To: amanda-users@amanda.org
Subject: [Amanda-users] LTO5 and LTFS

Anthony,

I'm starting to get into HP LTFS after getting an Ultrium 3000 Ext Drive
connected to a HP DL 365 G5 Server using a HP P411/512 card.

I actually tried first off using a Citrix Xen Server 5.6 host with the idea
of sharing the LTFS to a dedicated Virtual Machine client. However I was
unable to get the necessary Tape Driver software (cpq_cciss) for the version
of CentOS that Xen uses working. I can see the tape drive (/dev/st0) quite
happily, but no matter what I do I can't get LTFS to work :-(
I'm going back to the idea of using a dedicated (physical) machine running
Ubuntu 10.04.1 LTS connected to the Ultrium 3000 via a P212/256 and was
going to ask you what the process was for you to get yours up and going ?
One of the issues I struck with Xen was that I was not able to get fuse
loaded into the kernel (even compiling it from source). I am not sure if its
because it is loaded in by default and just not listed or whether something
else could be wrong. I am yet to tackle the physical server so I may be OK,
but thought I'd double check with you as you indicate that you had no
problems.

Any advice appreciated.

Kind regards,
Shaun

+--
|This was sent by kolo...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] LTO5 and LTFS

2011-02-25 Thread skolo
Anthony,

I'm starting to get into HP LTFS after getting an Ultrium 3000 Ext Drive 
connected to a HP DL 365 G5 Server using a HP P411/512 card.

I actually tried first off using a Citrix Xen Server 5.6 host, with the idea of 
sharing the LTFS volume to a dedicated virtual machine client. However, I was 
unable to get the necessary tape driver software (cpq_cciss) working for the 
version of CentOS that Xen uses. I can see the tape drive (/dev/st0) quite 
happily, but no matter what I do I can't get LTFS to work :-(
I'm going back to the idea of using a dedicated (physical) machine running 
Ubuntu 10.04.1 LTS connected to the Ultrium 3000 via a P212/256 and was going 
to ask you what the process was for you to get yours up and going ? One of the 
issues I struck with Xen was that I was not able to get fuse loaded into the 
kernel (even compiling it from source). I am not sure if its because it is 
loaded in by default and just not listed or whether something else could be 
wrong. I am yet to tackle the physical server so I may be OK, but thought I'd 
double check with you as you indicate that you had no problems.

Any advice appreciated.

Kind regards,
Shaun

+--
|This was sent by kolo...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda 3.2.1 on Solaris x86 -- open file limit

2011-02-09 Thread Jean-Louis Martineau

Hi Sebastian,

Thanks for that useful information.

The correct fix is probably to also change all u_long variables to u_int32?

Jean-Louis

shesselbarth wrote:

IXDR_GET_U_LONG is defined in /usr/include/rpc/xdr.h (on Linux).
This file is included from /usr/include/rpc/rpc.h, which is included in
ndmp-src/ndmp?.h



Hi,

I just stumbled across the same build issues with 64-bit (open)solaris.
After some research on linux and solaris rpc/xdr.h headers I think
IXDR_GET/PUT_U_LONG got removed from solaris 64-bit for whatever
reason (I guess LONG is somehow ambiguous in size while INT32 isn't)

Moreover, I have a warning in my xdr.h from linux:

/* WARNING: The IXDR_*_LONG defines are removed by Sun for new platforms
 * and shouldn't be used any longer. Code which use this defines or longs
 * in the RPC code will not work on 64bit Solaris platforms !
 */

For a first try I replaced all occurrences of GET/PUT_U_LONG with the 
corresponding U_INT32 macro. At least it compiles cleanly now.


I DID NOT ACTUALLY RUN THE COMPILED AMANDA, YET!

Can you please comment on the changes I have made and if they are
reasonable, i.e. don't break cross-platform compatibility.

Regards,
  Sebastian

+--
|This was sent by hesselba...@ims.uni-hannover.de via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


  




Re: [Amanda-users] amanda 3.2.1 on Solaris x86 -- open file limit

2011-02-09 Thread Brian Cuttler
On Tue, Feb 08, 2011 at 07:55:47PM -0500, shesselbarth wrote:
  IXDR_GET_U_LONG is defined in /usr/include/rpc/xdr.h (on Linux).
  This file is included from /usr/include/rpc/rpc.h, which is included in
  ndmp-src/ndmp?.h

I have a successful 64-bit Amanda build on Solaris 10 x86, but I
had a secondary issue with the calls to Perl, so I'm not running
it (maybe do the 64-bit install and then install the 32-bit Amanda
Perl modules on top of it...).


from my config.log for amanda 3.2.1

$ ./configure --with-user=amanda --with-group=sys
   --with-udpportrange=932,948  --with-tcpportrange=10084,10100
   --with-gnutar=/usr/sfw/bin/gtar --with-gnuplot=/opt/sfw/bin/gnuplot
   --without-libiconv-prefix CC=/opt/SUNWspro/bin/cc
   EGREP=/usr/sfw/bin/gegrep
  CFLAGS=-I/usr/sfw/include -I/opt/csw/include -fast -xtarget=woodcrest-m64
  CPPFLAGS=-D__amd64 -I/usr/sfw/include -I/opt/csw/include
  LDFLAGS=-L/opt/csw/lib/amd64 -L/usr/lib/amd64 -L/usr/sfw/lib/amd64
   -R/opt/csw/lib/amd64 -R/usr/lib/amd64 -R/usr/sfw/lib/amd64
--without-ndmp


PATH: /usr/local/bin
PATH: /usr/bin
PATH: /usr/sbin
PATH: /usr/dt/bin
PATH: /usr/openwin/bin
PATH: /usr/ccs/bin
PATH: /usr/sfw/bin
PATH: /opt/sfw/gcc-3/bin
PATH: /opt/sfw/bin
PATH: /usr/local/bin
PATH: /opt/SUNWspro/bin
PATH: /usr/ucb
PATH: /local/molbio/phylip/i96pc





 Hi,
 
 I just stumbled across the same build issues with 64-bit (open)solaris.
 After some research on linux and solaris rpc/xdr.h headers I think
 IXDR_GET/PUT_U_LONG got removed from solaris 64-bit for whatever
 reason (I guess LONG is somehow ambiguous in size while INT32 isn't)
 
 Moreover, I have a warning in my xdr.h from linux:
 
 /* WARNING: The IXDR_*_LONG defines are removed by Sun for new platforms
  * and shouldn't be used any longer. Code which use this defines or longs
  * in the RPC code will not work on 64bit Solaris platforms !
  */
 
 For a first try I replaced all occurrences of GET/PUT_U_LONG with the 
 corresponding U_INT32 macro. At least it compiles cleanly now.
 
 I DID NOT ACTUALLY RUN THE COMPILED AMANDA, YET!
 
 Can you please comment on the changes I have made and if they are
 reasonable, i.e. don't break cross-platform compatibility.
 
 Regards,
   Sebastian
 
 +--
 |This was sent by hesselba...@ims.uni-hannover.de via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







[Amanda-users] amanda 3.2.1 on Solaris x86 -- open file limit

2011-02-08 Thread shesselbarth
 IXDR_GET_U_LONG is defined in /usr/include/rpc/xdr.h (on Linux).
 This file is included from /usr/include/rpc/rpc.h, which is included in
 ndmp-src/ndmp?.h

Hi,

I just stumbled across the same build issues with 64-bit (open)solaris.
After some research on linux and solaris rpc/xdr.h headers I think
IXDR_GET/PUT_U_LONG got removed from solaris 64-bit for whatever
reason (I guess LONG is somehow ambiguous in size while INT32 isn't)

Moreover, I have a warning in my xdr.h from linux:

/* WARNING: The IXDR_*_LONG defines are removed by Sun for new platforms
 * and shouldn't be used any longer. Code which use this defines or longs
 * in the RPC code will not work on 64bit Solaris platforms !
 */

For a first try I replaced all occurrences of GET/PUT_U_LONG with the 
corresponding U_INT32 macro. At least it compiles cleanly now.

I DID NOT ACTUALLY RUN THE COMPILED AMANDA, YET!

Can you please comment on the changes I have made and if they are
reasonable, i.e. don't break cross-platform compatibility.

Regards,
  Sebastian

+--
|This was sent by hesselba...@ims.uni-hannover.de via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] Amanda LIVECD with ztape/ftape suport

2011-02-07 Thread juanato
Hello from Spain. I would appreciate some help finding any Linux distribution 
with 2.x kernel support that includes ftape/ztape and the Amanda package - 
ideally a live CD, for restoring old systems and for evaluation purposes. I 
would also appreciate some help on dumping QIC/Travan tapes with the Linux dd 
command, putting the complete tape cartridge into a file so it can be managed 
via virtual tape devices supported by Bacula, Amanda or any GNU/GPL package 
with Linux binaries. Thanks a lot.
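A minimal sketch of the dd approach being asked about, assuming the drive shows up as a standard non-rewinding tape device (the device node, output path and block size are examples only; ftape drives use different device names):

dd if=/dev/nst0 of=/srv/vtapes/tape01.img bs=64k
# a cartridge holding several tape files needs a loop, with 'mt -f /dev/nst0 fsf' between dd runs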

+--
|This was sent by juan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amrecover error :Not found in archive

2011-01-18 Thread upengan78
Just an update.

The issue was resolved by changing all of the DLEs in the disklist from 
/export/./abc to /export/abc, i.e. I removed the '.' from all DLEs.

Now I can recover any file/directory using amrecover. Hope this is helpful for 
people using '.' inside their DLEs.
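For reference, the change was along these lines (a sketch mirroring the disklist entry shown later in this thread; the exact dumptype options will vary per site):

# old entry - the embedded '.' broke path matching in amrecover:
amclient.domain /export/./abc /export {
  comp-tar
  include ./abc
} -1

# new entry:
amclient.domain /export/abc {
  comp-tar
} -1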

Thanks

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amrecover error :Not found in archive

2011-01-12 Thread upengan78
Well, I have added one more partition to the disklist file: /etc. This one does 
not have a '.' in it, unlike /export/./local/.

I have started to think that either tar/gtar or amrecover does not work well 
with a '.' in the specified DLE.  It may also be a version issue, on Solaris at 
least.

I am going to check tomorrow whether I can recover /etc files using amrecover; 
if so, I will move all DLE entries to the form without the '.'.

amfetchdump worked fine. I can't always use that, though, as I don't have 
enough space for a full restore...

Thanks

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amrecover error :Not found in archive

2011-01-11 Thread upengan78
Hi,

I have been backing up using Amanda for about 1 month, and I agree that I 
should have tested restores earlier.  Now that I am trying to restore 
files/directories using amrecover, I always get this error: 

Not found in archive
tar: Exiting with failure status due to previous errors

This is what I did in amrecover:

amrecover add gcc-3.2-sol8-sparc-local
Added file /local/gcc-3.2-sol8-sparc-local
amrecover list
TAPE WF-3:16,17,18,19,20,21 LEVEL 0 DATE 2011-01-10-00-01-17
/local/gcc-3.2-sol8-sparc-local
amrecover extract

Extracting files using tape drive 
file:/bk/location/amanda/vtapes/weeklyfull/slots on host amandaserver.
The following tapes are needed: WF-3

Extracting files using tape drive 
file:/bk/location/amanda/vtapes/weeklyfull/slots on host amandaserver.
Load tape WF-3 now
Continue [?/Y/n/s/d]? y
Volume 'WF-3' not found
Load tape WF-3 now
Continue [?/Y/n/d]? d
New device name [?]: default
Using default tape from server amandaserver.
Continue [?/Y/n/d]? y
Restoring files into directory /tmp
All existing files in /tmp can be deleted
Continue [?/Y/n]? y

tar: ./local/gcc-3.2-sol8-sparc-local: Not found in archive
tar: Exiting with failure status due to previous errors

That's it, and the session hangs here. I press Ctrl-C to exit to the shell.


DLE for this partition is setup as below,

amclient.domain /export/./local /export {
  comp-tar
  include ./local
}   -1


Two Debug files from server : 
amidxtaped.2011034155.debug - http://dpaste.com/307437/
amindexd.2011033827.debug - http://dpaste.com/307439/

Debug file from client :
amrecover.2011033548.debug - http://pastebin.com/zV20sQXs


If you need more info then let me know.

Appreciate your help! Thank you

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amrecover error :Not found in archive

2011-01-11 Thread upengan78
If I grep manually from the terminal, I can see that the file is in the index:

/opt/csw/bin/gzip -dc /opt/csw/etc/amanda/weeklyfull/index/amclient.domain/_export_._local/2011011117_0.gz | grep gcc-3.2-sol8-sparc-local

/local/gcc-3.2-sol8-sparc-local

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread rory_f
Hey,

So I've noticed that sometimes Amanda will fill a tape with more than 400GB 
(LTO3) - I'm assuming this is down to compression? Is there a way to stop this 
from happening, apart from turning hardware and software compression off?

Thanks,

+--
|This was sent by r...@mrxfx.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread Brian Cuttler

Rory,

When I am using tape compression I often...sorry.

When using SW tape compression amanda will know what the
typical compression is for any given DLE and will (I believe)
use the expected compressed size of the data when estimating
overall tape usage.

When I use HW compression I often lie about the tape length,
extending the actual physical size by the expected compression
amount so that I am able to utilize the full physical tape.
This is very valuable for me in the couple of cases where I
have a non-spanning DLE that is larger than the physical tape
would be without compression; otherwise amanda would report that the
DLE was larger than the tape...
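A minimal sketch of that kind of tapetype entry, assuming an LTO-3 (400GB native) and an expected compression gain of roughly 20%; the name and numbers are illustrative, not Brian's actual configuration:

define tapetype LTO3-HWCOMP {
    comment "400GB native LTO-3, length inflated to allow for hardware compression"
    length 480 gbytes
    filemark 0 kbytes
}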

What goal/outcome are you seeking ?

Brian

On Wed, Jan 05, 2011 at 11:34:47AM -0500, rory_f wrote:
 Hey,
 
 So I've noticed that sometimes Amanda will fill up a tape with more than 
 400gb (LTO3) - I'm assuming this is down to compression? Is there another way 
 to limit this from happening apart from turning hardware and software 
 compression off?
 
 Thanks,
 
 +--
 |This was sent by r...@mrxfx.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







[Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread rory_f
I want to ensure the tapes are filled 100% each time where possible. I've 
written a Python script to look at a directory, figure out its size, and 
create a disklist that targets a round-number size for each disklist file - 
for instance, it will try to create a disklist file whose entries add up to 
groups of about 400GB, the size of a tape. I know Amanda will fill a tape to 
100% where possible, but sometimes, if it is using compression, this doesn't 
work: the first two tapes will take 500GB+ and the last tape will be left with 
200GB. That is a waste of 200GB - I'm trying to make sure all tapes are full 
where possible and not waste any space.

I know I could take the tape that is half full and archive the contents again 
with added content but this is time consuming.

I just want to make sure Amanda is working with my script so that all tapes 
are being filled. 

Do you see what I'm getting at?

Thanks,




Brian Cuttler wrote:
 Rory,
 
 When I am using tape compression I often...sorry.
 
 When using SW tape compression amanda will know what the
 typical compression is for any given DLE and will (I believe)
 use the expected compressed size of the data when estimating
 overall tape usage.
 
 When I use HW compression I often lie about the tape length,
 extending the actually physical size by the expected compression
 amount so that I am able to utilize the full physical tape.
 This is very valuable for me in the couple of cases where I
 have a non-spanning DLE that is larger than the physical tape
 would be without compression, else amanda would report that the
 DLE was larger than the tape...
 
 What goal/outcome are you seeking ?
 
   Brian
 
 On Wed, Jan 05, 2011 at 11:34:47AM -0500, rory_f wrote:
 
  Hey,
  
  So I've noticed that sometimes Amanda will fill up a tape with more than 
  400gb (LTO3) - I'm assuming this is down to compression? Is there another 
  way to limit this from happening apart from turning hardware and software 
  compression off?
  
  Thanks,
  
  +--
  |This was sent by rory  at  mrxfx.com via Backup Central.
  |Forward SPAM to abuse  at  backupcentral.com.
  +--
  
  
  
 ---
 


+--
|This was sent by r...@mrxfx.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread rory_f

choogendyk wrote:
 On 1/5/11 12:00 PM, rory_f wrote:
 
  I want to ensure tapes are filled 100% each time where possible. I've 
  written a script in python to look at directory, figure out size, and 
  create a disklist which will ensure a round about size for each disklist 
  file - so for instance it will try to create a disklist file that contains 
  entries in groups of 400gb's - the size of a tape. I know amanda will fill 
  a tape to 100% where possible but sometimes, if it is using compression, 
  this doesn't work, and the first two tapes will fill 500gb+ and then the 
  last tape will be left with 200gb. This is a waste of 200gb - I'm trying to 
  make sure all tapes are full where possible and not waste any space.
  
 
 Not to be rude, but that's a false economy.
 
 It could just as easily be said that you would be wasting tape capacity by 
 not using compression.
 
 You are asking to not allow more than 400GB per tape, and thus no more than 
 1200GB on the set of 3. 
 Then you are complaining that the 1200GB is unevenly distributed across the 3 
 tapes, because 
 compression allowed more than 400GB on each of the first 2 tapes. So, stated 
 another way, you are 
 asking that the wasted (or unused) 300GB (or so) of space be distributed 
 across all 3 tapes, 
 rather than just being on the last tape, and/or to just not use compression 
 so that you can imagine 
 that you are not wasting tape.
 
 500GB per tape means that you are getting about 20% compression. If that is 
 consistent, have your 
 python script set to queue up somewhere between 1400GB to 1500GB for backup, 
 the choice depending on 
 how close you want to shave it (with a higher risk of over running the last 
 tape). Then you are 
 being economical with your tape usage, getting a couple hundred more GB on 
 the set of tapes than you 
 were originally thinking.
 
 Of course, compressibility varies widely. Huge directories of TIFF and JPEG 
 files can be essentially 
 uncompressible. Typical unix directories of predominantly text based stuff, 
 like log files or 
 configuration files, are highly compressible, and repetitive things like 
 Apache access logs can 
 compress as much as 10:1. So, you have to know your data to efficiently plan 
 what you are trying to do.
 
 


OK, I totally see your point - you are very correct. The majority of the files 
being backed up are images - dpx, exr, cin, etc. - we are a VFX house.

I guess with such a wide range of file types there's no real way to 'predict' 
the compression, is there?

Perhaps what I'm looking for is to guarantee the most economical way of filling 
the tapes. It would be great to count on 20% compression every time, but it 
seems to vary too much to rely on Amanda working that way.

Thanks for your view though - initially I didn't look at it that way.

+--
|This was sent by r...@mrxfx.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread Brian Cuttler

Rory,

On Wed, Jan 05, 2011 at 12:00:40PM -0500, rory_f wrote:
 I want to ensure tapes are filled 100% each time where possible. I've
 written a script in python to look at directory, figure out size, and
 create a disklist which will ensure a round about size for each disklist
 file - so for instance it will try to create a disklist file that contains
 entries in groups of 400gb's - the size of a tape. I know amanda will fill
 a tape to 100% where possible but sometimes, if it is using compression,
 this doesn't work, and the first two tapes will fill 500gb+ and then the
 last tape will be left with 200gb. This is a waste of 200gb - I'm trying
 to make sure all tapes are full where possible and not waste any space.


I'm not certain I understand your example. If you have 1200 GB to
write and you write 500 GB to the first two tapes and 200 GB to the
last tape, you are using the same three tapes you'd use if you
wrote 400 GB to each. You either waste the space equally at the end
of all three tapes or all at once on the last tape.

If a tape will hold 500 gig and you only put 400 gig on it, the tape
isn't full. This mis-estimation is a hazard of HW compression that
you don't experience with SW compression. If you know from
experience that your data will compress by 20% and you have a
400 gig tape and you are using HW compression, you can lie in
the tapetype definition and claim it's a 500 gig tape so amanda
will be able to estimate usage better. But it's not going to be
an exact fit, because even with SW compression, where amanda can
track the data, there is always a little real-life flux in the
compression (unless of course the data is static, in which
case you might want to archive it rather than back it up).

There are settings within amanda that help to fill tapes.

Admittedly I've only used taperalgo, and have been very happy
with the largestfit setting, but there are also the values
flush-threshold-dumped, flush-threshold-scheduled and taperflush,
which have example settings in the amanda.conf file.
These settings should help you fill tapes, and will even
delay flushing of the holding area to tape if the tape
is not expected to be filled.
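Put together, the relevant amanda.conf lines might look something like this (the values are illustrative, taken straight from the parameters named above rather than from a tested configuration):

taperalgo largestfit
flush-threshold-dumped 100
flush-threshold-scheduled 100
taperflush 100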

For my money the more you can delegate to amanda and the
less fiddling with the disklist or other files the better.


 I know I could take the tape that is half full and archive the contents again 
 with added content but this is time consuming.
 
 I just want to make sure amanda is working with my script to make sure all 
 tapes are being filled. 

Yah... but you can't fill them by not filling them.

 Do you see what i'm getting at?

I think so, do you see where I'm going ?

 Thanks,

you're welcome,

Brian


 Brian Cuttler wrote:
  Rory,
  
  When I am using tape compression I often...sorry.
  
  When using SW tape compression amanda will know what the
  typical compression is for any given DLE and will (I believe)
  use the expected compressed size of the data when estimating
  overall tape usage.
  
  When I use HW compression I often lie about the tape length,
  extending the actually physical size by the expected compression
  amount so that I am able to utilize the full physical tape.
  This is very valuable for me in the couple of cases where I
  have a non-spanning DLE that is larger than the physical tape
  would be without compression, else amanda would report that the
  DLE was larger than the tape...
  
  What goal/outcome are you seeking ?
  
  Brian
  
  On Wed, Jan 05, 2011 at 11:34:47AM -0500, rory_f wrote:
  
   Hey,
   
   So I've noticed that sometimes Amanda will fill up a tape with more than 
   400gb (LTO3) - I'm assuming this is down to compression? Is there another 
   way to limit this from happening apart from turning hardware and software 
   compression off?
   
   Thanks,
   
   +--
   |This was sent by rory  at  mrxfx.com via Backup Central.
   |Forward SPAM to abuse  at  backupcentral.com.
   +--
   
   
   
  ---
  
 
 
 +--
 |This was sent by r...@mrxfx.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773




[Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread rory_f

Charles Curley wrote:
 On Wed, 05 Jan 2011 13:19:42 -0500
 rory_f amanda-forum  at  backupcentral.com wrote:
 
 
  Perhaps what i'm looking to do is guarantee the most economical way
  of filling the tapes.
  
 
 Which is more valuable, a few terabytes of tape space, or your time?
 
 -- 
 
 Charles Curley  /\ASCII Ribbon Campaign
 Looking for fine software   \ /Respect for open standards
 and/or writing?  X No HTML/RTF in email
 http://www.charlescurley.com/ \No M$ Word docs in email
 
 Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB


I guess that depends on whether you ask me (as it is my time) or the boss (who 
pays for the tapes ;-])

+--
|This was sent by r...@mrxfx.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread Charles Curley
On Wed, 05 Jan 2011 13:19:42 -0500
rory_f amanda-fo...@backupcentral.com wrote:

 Perhaps what i'm looking to do is guarantee the most economical way
 of filling the tapes.

Which is more valuable, a few terabytes of tape space, or your time?

-- 

Charles Curley  /\ASCII Ribbon Campaign
Looking for fine software   \ /Respect for open standards
and/or writing?  X No HTML/RTF in email
http://www.charlescurley.com/ \No M$ Word docs in email

Key fingerprint = CE5C 6645 A45A 64E4 94C0  809C FFF6 4C48 4ECD DFDB


Re: [Amanda-users] Making sure AMANDA does not run over tapesize

2011-01-05 Thread Chris Hoogendyk



On 1/5/11 12:00 PM, rory_f wrote:

I want to ensure tapes are filled 100% each time where possible. I've written a 
script in python to look at directory, figure out size, and create a disklist 
which will ensure a round about size for each disklist file - so for instance 
it will try to create a disklist file that contains entries in groups of 
400gb's - the size of a tape. I know amanda will fill a tape to 100% where 
possible but sometimes, if it is using compression, this doesn't work, and the 
first two tapes will fill 500gb+ and then the last tape will be left with 
200gb. This is a waste of 200gb - I'm trying to make sure all tapes are full 
where possible and not waste any space.


Not to be rude, but that's a false economy.

It could just as easily be said that you would be wasting tape capacity by not 
using compression.

You are asking to not allow more than 400GB per tape, and thus no more than 1200GB on the set of 3. 
Then you are complaining that the 1200GB is unevenly distributed across the 3 tapes, because 
compression allowed more than 400GB on each of the first 2 tapes. So, stated another way, you are 
asking that the wasted (or unused) 300GB (or so) of space be distributed across all 3 tapes, 
rather than just being on the last tape, and/or to just not use compression so that you can imagine 
that you are not wasting tape.


500GB per tape means that you are getting about 20% compression. If that is consistent, have your 
python script queue up somewhere between 1400GB and 1500GB for backup, the choice depending on 
how close you want to shave it (with a higher risk of overrunning the last tape). Then you are 
being economical with your tape usage, getting a couple hundred more GB on the set of tapes than you 
were originally thinking.
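Spelling out the arithmetic behind that suggestion (figures from this thread; the 20% ratio is an observed estimate, not a guarantee):

400GB of tape / 0.8 (data shrinks to about 80% of its original size) = about 500GB of original data per tape
3 tapes x 500GB = about 1500GB worth of DLEs to queue per run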


Of course, compressibility varies widely. Huge directories of TIFF and JPEG files can be essentially 
uncompressible. Typical unix directories of predominantly text based stuff, like log files or 
configuration files, are highly compressible, and repetitive things like Apache access logs can 
compress as much as 10:1. So, you have to know your data to efficiently plan what you are trying to do.



--
---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

hoogen...@bio.umass.edu

---

Erdös 4





[Amanda-users] Solaris ZFS Amanda *compressed* GNUTAR SLOWWWWWWW

2010-12-21 Thread NetWatchman

# date
Tue Dec 21 20:51:33 EST 2010
# ls -ltr /amandadump/dmp001/20101221004214 | tail
-rw-------   1 amanda   sys   1073741824 Dec 21 20:36 zulu01._rz2pool_mace.0.598.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:37 zulu01._rz2pool_mace.0.599.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:39 zulu01._rz2pool_mace.0.600.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:41 zulu01._rz2pool_mace.0.601.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:43 zulu01._rz2pool_mace.0.602.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:45 zulu01._rz2pool_mace.0.603.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:47 zulu01._rz2pool_mace.0.604.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:48 zulu01._rz2pool_mace.0.605.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:50 zulu01._rz2pool_mace.0.606.tmp
-rw-------   1 amanda   sys    470843392 Dec 21 20:51 zulu01._rz2pool_mace.0.607.tmp
# date
Tue Dec 21 20:51:54 EST 2010
# date
Tue Dec 21 20:52:14 EST 2010
# date
Tue Dec 21 20:52:20 EST 2010
# date
Tue Dec 21 20:52:27 EST 2010
# date
Tue Dec 21 20:52:30 EST 2010
# ls -ltr /amandadump/dmp001/20101221004214 | tail
-rw-------   1 amanda   sys   1073741824 Dec 21 20:36 zulu01._rz2pool_mace.0.598.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:37 zulu01._rz2pool_mace.0.599.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:39 zulu01._rz2pool_mace.0.600.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:41 zulu01._rz2pool_mace.0.601.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:43 zulu01._rz2pool_mace.0.602.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:45 zulu01._rz2pool_mace.0.603.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:47 zulu01._rz2pool_mace.0.604.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:48 zulu01._rz2pool_mace.0.605.tmp
-rw-------   1 amanda   sys   1073741824 Dec 21 20:50 zulu01._rz2pool_mace.0.606.tmp
-rw-------   1 amanda   sys    943620096 Dec 21 20:52 zulu01._rz2pool_mace.0.607.tmp


That equates to 470MB/minute, or < 10MB/s.

CPU is barely utilized:


Total: 98 processes, 806 lwps, load averages: 1.50, 1.44, 1.43
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 25639 amanda   1708K 1012K cpu1     0    0  17:12:46  11% gzip/1
 25640 amanda   7176K 1164K sleep   33    0   2:17:34 1.8% sendbackup/1
   123 root        0K    0K sleep   99  -20   4:57:12 1.1% zpool-rz2pool/51
 25645 amanda   2448K 1324K sleep   52    0   1:01:30 0.8% gtar/1
 25641 root       42M   41M sleep   49    0   1:01:08 0.8% gtar/1
   545 root       12M 9996K sleep   59    0   0:54:49 0.4% syslogd/15
   119 root        0K    0K sleep   99  -20   1:50:23 0.2% zpool-amandadum/51
 25636 amanda   7536K 2872K sleep   59    0   0:22:04 0.2% chunker/1
   868 root     7708K 1596K sleep   59    0   1:13:09 0.2% dhcpd/1
 25457 amanda   7592K 2968K sleep   59    0   0:16:33 0.2% dumper/1
 25588 amanda   7864K 2992K sleep   59    0   0:14:24 0.1% amandad/1
 25969 root     3776K 2944K cpu5    59    0   0:00:00 0.0% prstat/1
   665 root     2560K  772K sleep   59    0   0:06:04 0.0% fbconsole/1
   723 daemon   2512K 1440K sleep   60  -20   1:12:29 0.0% nfsd/12
  1060 noaccess  132M  103M sleep   59    0   0:05:46 0.0% java/18
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    30 amanda





[Amanda-users] Backing up 50TB (ZFS) with Amanda using disk to disk

2010-12-18 Thread NetWatchman
OK...I see that as documented here:

http://wiki.zmanda.com/index.php/FAQ:Why_does_Amanda_not_append_to_a_tape%3F

This would seem to allow full tape utilization without having to resort to 
vtapes...I'm pretty sure I was already doing this and it didn't work 
reliably...will try again.

As I understand it, the holding disk needs to be *slightly* larger than the 
largest tape you are going to write to. It would also seem that setting your 
backup file size to something pretty granular (e.g. 1GB, 5GB or at most 10GB) 
would help ensure you make full use of a tape.
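One way to get that kind of granularity is tape splitting in the dumptype. A sketch only: the parameter names come from the dumptype options of that era, the buffer path is made up, and none of this is a tested config:

define dumptype comp-tar-split {
    comp-tar
    tape_splitsize 5 gbytes
    fallback_splitsize 1 gbytes
    split_diskbuffer "/amandadump/splitbuf"
}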

Now I've got a MASSIVE amount of IO capacity (300MB/s for the holding disk), and 
since each tape is connected via its own dedicated SAS/SATA port, I've got 
about 100MB/s for as many tapes as I load at the same time (e.g. if I load 5 I 
could be writing as much as 500MB/s)...of course this means I have to split all 
of the source disks I'm backing up from across multiple Amanda configs, right?

But then that complicates things: I presume I need a dedicated holding disk for 
each config? Or can multiple configs share the same holding disk?

If the former, then I presume what I really want is holding disks like this:

drive 1 - 2TB - holding1 (DailySet1)
drive 2 - 2TB - holding2 (DailySet2)
drive 3 - 2TB - holding3 (DailySet3)
drive 4 - 2TB - holding4 (DailySet4)

But now I'm confused...if I have all my flush params set to 100% and my holding 
disk EQUALS the size of a backup tape, how am I ever going to reach the 100% 
threshold (would seem I would always be short by a few GBs)???

Perhaps I should go back to a striped holding disk and carve out multiple 
holding disks inside that, essentially oversubscribing the stripe a bit, for 
example:

Stripe (drive1, drive2, drive3) - 6TB total
/hold/hold1 - config in amanda as 2.1TB
/hold/hold2 - 2.1TB
/hold/hold3 - 2.1TB
/hold/hold4 - 2.1TB

...or something like that...not sure if I can get away with oversubscribing the 
stripe that much...if not, I think 4 2TB drives would work fine.

This would allow any holding disk to easily grow to 2.0TB and thus fill a 2TB 
(or 1.5TB) tape fully...yes/no?
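A holding-disk definition along those lines might look like this in amanda.conf (directory names and sizes follow the example above; a sketch only, not a tested config):

holdingdisk hold1 {
    comment "holding disk for DailySet1"
    directory "/hold/hold1"
    use 2100 gbytes
    chunksize 10 gbytes
}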

+--
|This was sent by baldw...@mynetwatchman.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] Backing up 50TB (ZFS) with Amanda using disk to disk

2010-12-16 Thread NetWatchman
I've embraced centralized storage in a big way with the hopes that it would 
facilitate the backup process (e.g. if you have all the data and boot images on 
one system you don't even worry about doing network backups).  In the last six 
months I've deployed 3 Solaris/ZFS storage servers to handle storage needs for 
my home office and 2 remote colos.  

I've been using Amanda for about a year now but have run into a number of 
problems when the volume of data starts scaling into the terabytes.  I think 
Amanda is capable of dealing with much more, but I'm finding that the best 
practices for doing this aren't very well documented.  So I'm going to outline 
what I've been doing and what challenges I've been encountering, in hopes that 
we all might be able to collectively improve the documentation for these kinds 
of scenarios...which is increasingly important as more and more folks take the 
leap, as I have, to centralized storage architectures.


Here's what my infra looks like:

ZFS1: (home office / desktop farm)

Norco 4020 chassis (20 drive capacity)
SuperMicro m/b with onboard LSI1068E SAS
HP SAS expander
SAS expander connects to onboard SAS ports and to Norco SAS backplane
SAS expander has external SAS port allowing future expansion to another SAS 
JBOD chassis (e.g. Norco #2 or whatever)
LSI 9200-8e SAS2 controller
External 8 drive SANS digital SAS enclosure
SANS digital connected to LSI9200-8e via two SAS wide ports

9 Hitachi 2TB in Norco case in RaidZ2, plus 2 spares, 12TB usable
I'll be doubling this to 24TB shortly as almost out of space.

The current zpool usage looks like this:



rz2pool                 12T   900G   1.2T    42%    /rz2pool
rz2pool/bootimages      12T    36G   1.2T     3%    /rz2pool/bootimages
rz2pool/c               12T   2.6T   1.2T    68%    /rz2pool/c
rz2pool/exp             12T   7.3G   1.2T     1%    /rz2pool/exp
rz2pool/mace            12T   7.5T   1.2T    86%    /rz2pool/mace
rz2pool/macedb          12T    19G   1.2T     2%    /rz2pool/macedb
rz2pool/projects        12T   695K   1.2T     1%    /rz2pool/projects
rz2pool/tftpboot        12T   8.7G   1.2T     1%    /rz2pool/tftpboot
rz2pool/vmware          12T    48K   1.2T     1%    /rz2pool/vmware



The bulk of the data to be backed up is from the MACE pool, which contains 
real-time traffic captures from a bank of 100 PCs that are doing generalized 
web surfing.  There is about 100GB/week of data added here.


3 Hitachi 2TB drives in SANS Digital case in RAID0 for Amanda holding disk
20+ Hitachi drives for backup tapes to be swapped in and out of 5 remaining 
SANs digital slots

My overall plan here is to use the Norco case for capacity expansion, and to 
use 3 drives in the SANS Digital for the Amanda holding disk as follows:


# df -h
amandadump            4.0T   3.2T   785G    81%    /amandadump

# zpool status amandadump
  pool: amandadump
 state: ONLINE

        NAME        STATE     READ WRITE CKSUM
        amandadump  ONLINE       0     0     0
          c0t21d0   ONLINE       0     0     0
          c0t22d0   ONLINE       0     0     0
          c0t23d0   ONLINE       0     0     0




And then use the other 5 available slots in the SANs digital chassis for 
hot-swapping disk-based tapes as needed.

I've been formatting individual 1.5TB or 2TB drives to be tapes, e.g.:


# zpool status tap0102
  pool: tap0102
 state: ONLINE

config:

        NAME       STATE     READ WRITE CKSUM
        tap0102    ONLINE       0     0     0
          c0t26d0  ONLINE       0     0     0

# df -h | grep tap
tap0102               1.3T   1.3T     0K   100%    /tap0102



Here are some of the challenges/questions:

My full backups are huge (multi-TB) so filling tapes is not a problem.  
However, daily incrementals might only be 20-30GB.  Since I'm using 1.5 and/or 
2TB drives for tapes I'm able to get 

Re: [Amanda-users] Backing up 50TB (ZFS) with Amanda using disk to disk

2010-12-16 Thread Jean-Louis Martineau

NetWatchman wrote:

Here are some of the challenges/questions:

My full backups are huge (multi-TB) so filling tapes is not a problem.  However, daily incrementals might 
only be 20-30GB.  Since I'm using 1.5 and/or 2TB drives for tapes I'm able to get good 
filling of the tapes when doing full backups, but I'll potentially only fill up 50GB other 
times...this potentially leaves a lot of tapes mostly empty and is kind of a waste of an expensive 
resource.

I can work around this by disabling auto-flush...doing a bunch of backups over the course 
of several days until I have at least 2TB on the holding disk, then mounting a 
tape and doing a flush...the problem is that it then requires me to manually monitor it, 
which I'd like to avoid...
  

set:
flush-threshold-dumped 100
flush-threshold-scheduled 100
taperflush 100
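
In context these usually go together with autoflush (added here as an 
assumption; it is what makes leftover dumps on the holding disk get written out 
on a later run):

autoflush yes
flush-threshold-dumped    100
flush-threshold-scheduled 100
taperflush                100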


I've read all of the discussions on using vtapes, but they all center on using a fixed 
mounted drive (or multiple drives in RAID).  So I certainly could create 200 10GB vtapes 
on a 2TB drive, but I really need a total of 20TB of backup capacity...how do I set up 
vtapes on 20 different drives when I only have the slot capacity to have 5 of 
them physically mounted at any given time?  Bottom line, I'm unclear on how I could use a 
vtape approach with MULTIPLE physical disks.   I don't want to do RAID as I want the 
ability to migrate some of these backups offsite...doing vtapes on RAID would seem to 
significantly complicate that process.
  
amanda-3.3 (not yet released) can use multiple changers; you set a 
chg-disk for each physical disk.
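
Very roughly (3.x 'define changer' syntax assumed here, see amanda-changers(7) 
for the exact form), that means one changer definition per physical disk plus an 
aggregate that ties them together:

define changer vtape_disk1 {
    tpchanger "chg-disk:/tap0101"    # placeholder mount points
}
define changer vtape_disk2 {
    tpchanger "chg-disk:/tap0102"
}
tpchanger "chg-aggregate:{vtape_disk1,vtape_disk2}"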


Jean-Louis


[Amanda-users] amanda.conf config help

2010-12-15 Thread upengan78
Hello again,

I think I am starting to realize that something needs to be configured properly 
in order to fill the tapes to their MAX capacity. I'd like some advice on that. 
I could create another thread for that; however, it may have something to do with 
the existing configuration, so I am continuing the discussion here.

I am doing below cron jobs currently (no manual amdumps)
#Amcheck
59 19 * * 1-5 su - amanda -c /opt/csw/sbin/amcheck -a weeklyfull
5 20 * * * su - amanda -c /opt/csw/sbin/amcheck -a monthlyfull
#amdump
01 00 * * 1-5 su - amanda -c /opt/csw/sbin/amdump weeklyfull
15 01 * * * su - amanda -c /opt/csw/sbin/amdump monthlyfull

weeklyfull- amanda.conf
inparallel 4
maxdumps 4
dumpcycle 7
runspercycle 5
tapecycle 9
runtapes 3
dumpuser amanda
tpchanger chg-disk# a virtual tape changer
tapedev file:/bk/location/amanda/vtapes/weeklyfull/slots
changerfile /opt/csw/etc/amanda/weeklyfull/changerfile
labelstr WF-.*
#label_new_tapes PLUTO-%%
autolabel WF-%%
tapetype DVD_SIZED_DISK
logdir /opt/csw/etc/amanda/weeklyfull
infofile /opt/csw/etc/amanda/weeklyfull/curinfo
indexdir /opt/csw/etc/amanda/weeklyfull/index
tapelist /opt/csw/etc/amanda/weeklyfull/tapelist
#etimeout 600 # number of seconds per filesystem for estimates.
etimeout 3600 # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.
#dtimeout 1800# number of idle seconds before a dump is aborted.
dtimeout 3600# number of idle seconds before a dump is aborted.
ctimeout 30  # maximum number of seconds that amcheck waits
 # for each client host


holdingdisk hd1 {
directory /random/amandahold/hold
}

holdingdisk hd2 {
directory /random1/amanda/holdingdisk2
}

define dumptype comp-tar {
program GNUTAR
compress fast
index yes
record yes  # Important! avoid interfering with production runs
#  tape_splitsize 1 Gb
tape_splitsize 1024 mbytes
#  fallback_splitsize 512 MB 
fallback_splitsize 4096 MB 
split_diskbuffer /random/buffer
}  

define tapetype DVD_SIZED_DISK {
filemark 1 KB
length 10240 MB
}





inparallel 5
maxdumps 5
dumpcycle 30 days
runspercycle 30 
tapecycle 20 
runtapes 9 
dumpuser amanda
tpchanger chg-disk# a virtual tape changer
tapedev file:/bk/location/amanda/vtapes/monthlyfull/slots
changerfile /opt/csw/etc/amanda/monthlyfull/changerfile
labelstr MF-.*
#label_new_tapes PLUTO-%%
autolabel MF-%%
tapetype DVD_SIZED_DISK
logdir /opt/csw/etc/amanda/monthlyfull
infofile /opt/csw/etc/amanda/monthlyfull/curinfo
indexdir /opt/csw/etc/amanda/monthlyfull/index
tapelist /opt/csw/etc/amanda/monthlyfull/tapelist
#etimeout 600 # number of seconds per filesystem for estimates.
etimeout 3600 # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.
#dtimeout 1800# number of idle seconds before a dump is aborted.
dtimeout 3600# number of idle seconds before a dump is aborted.
ctimeout 30  # maximum number of seconds that amcheck waits
 # for each client host


holdingdisk hd1 {
directory /random/amandahold/hold
}

holdingdisk hd2 {
directory /random1/amanda/holdingdisk2
}

define dumptype comp-tar {
program GNUTAR
compress fast
index yes
record yes  # Important! avoid interfering with production runs
#   tape_splitsize 1 Gb
tape_splitsize 1024 mbytes
#fallback_splitsize 512 MB 
   fallback_splitsize 4096 MB 
   split_diskbuffer /random/buffer
}  

define tapetype DVD_SIZED_DISK {
filemark 1 KB
length 10240 MB
}


monthlyfull amanda.conf

inparallel 5
maxdumps 5
dumpcycle 30 days
runspercycle 30 
tapecycle 20 
runtapes 9 
dumpuser amanda
tpchanger chg-disk# a virtual tape changer
tapedev file:/bk/location/amanda/vtapes/monthlyfull/slots
changerfile /opt/csw/etc/amanda/monthlyfull/changerfile
labelstr MF-.*
#label_new_tapes PLUTO-%%
autolabel MF-%%
tapetype DVD_SIZED_DISK
logdir /opt/csw/etc/amanda/monthlyfull
infofile /opt/csw/etc/amanda/monthlyfull/curinfo
indexdir /opt/csw/etc/amanda/monthlyfull/index
tapelist /opt/csw/etc/amanda/monthlyfull/tapelist
#etimeout 600 # number of seconds per filesystem for estimates.
etimeout 3600 # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.
#dtimeout 1800# number 

Re: [Amanda-users] amanda.conf config help

2010-12-15 Thread Brian Cuttler
Upendra,

I don't recall how large your holding area is or if you end up
with multiple DLEs in holding waiting for tape or not.

I've had good success with taperalgo and selecting the largestfit
option. This seems to fill my (non-spanning physical) tapes pretty
close to 100%.
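
In amanda.conf that is just the one line (largestfit being one of the allowed 
taperalgo values):

taperalgo largestfit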

If you have lots of work area and can delay your dumps until there
are multiple DLEs in the work area, so the taper algorithm has more
choices, there are parameters for that as well.

I believe (unless they have been removed in the last iteration)
that these relatively new parameters will help you get where you
want to go.

Note this is one of several examples to look at in the amanda.conf file.

# You want to fill tapes completely even in the case of failed dumps, and
# don't care if some dumps are left on the holding disk after a run:
# flush-threshold-dumped100 # (or more)
# flush-threshold-scheduled 100 # (or more)
# taperflush100
# autoflush yes



On Wed, Dec 15, 2010 at 01:34:31PM -0500, upengan78 wrote:
 Hello again,
 
 I think I am starting to realize that something needs to be configured 
 properly in order to Fill the tapes to their MAX capacity. I'd like some 
 advise on that. I can create another thread for that however, it may be 
 something to do with existing configuration so I am continuing the discussion 
 here.
 
 I am doing below cron jobs currently (no manual amdumps)
 #Amcheck
 59 19 * * 1-5 su - amanda -c /opt/csw/sbin/amcheck -a weeklyfull
 5 20 * * * su - amanda -c /opt/csw/sbin/amcheck -a monthlyfull
 #amdump
 01 00 * * 1-5 su - amanda -c /opt/csw/sbin/amdump weeklyfull
 15 01 * * * su - amanda -c /opt/csw/sbin/amdump monthlyfull
 
 weeklyfull- amanda.conf
 inparallel 4
 maxdumps 4
 dumpcycle 7
 runspercycle 5
 tapecycle 9
 runtapes 3
 dumpuser amanda
 tpchanger chg-disk# a virtual tape changer
 tapedev file:/bk/location/amanda/vtapes/weeklyfull/slots
 changerfile /opt/csw/etc/amanda/weeklyfull/changerfile
 labelstr WF-.*
 #label_new_tapes PLUTO-%%
 autolabel WF-%%
 tapetype DVD_SIZED_DISK
 logdir /opt/csw/etc/amanda/weeklyfull
 infofile /opt/csw/etc/amanda/weeklyfull/curinfo
 indexdir /opt/csw/etc/amanda/weeklyfull/index
 tapelist /opt/csw/etc/amanda/weeklyfull/tapelist
 #etimeout 600 # number of seconds per filesystem for estimates.
 etimeout 3600 # number of seconds per filesystem for estimates.
 #etimeout -600   # total number of seconds for estimates.
 # a positive number will be multiplied by the number of filesystems on
 # each host; a negative number will be taken as an absolute total time-out.
 # The default is 5 minutes per filesystem.
 #dtimeout 1800# number of idle seconds before a dump is aborted.
 dtimeout 3600# number of idle seconds before a dump is aborted.
 ctimeout 30  # maximum number of seconds that amcheck waits
  # for each client host
 
 
 holdingdisk hd1 {
 directory /random/amandahold/hold
 }
 
 holdingdisk hd2 {
 directory /random1/amanda/holdingdisk2
 }
 
 define dumptype comp-tar {
 program GNUTAR
 compress fast
 index yes
 record yes  # Important! avoid interfering with production runs
 #  tape_splitsize 1 Gb
 tape_splitsize 1024 mbytes
 #  fallback_splitsize 512 MB 
 fallback_splitsize 4096 MB 
 split_diskbuffer /random/buffer
 }  
 
 define tapetype DVD_SIZED_DISK {
 filemark 1 KB
 length 10240 MB
 }
 
 
 
 
 
 inparallel 5
 maxdumps 5
 dumpcycle 30 days
 runspercycle 30 
 tapecycle 20 
 runtapes 9 
 dumpuser amanda
 tpchanger chg-disk# a virtual tape changer
 tapedev file:/bk/location/amanda/vtapes/monthlyfull/slots
 changerfile /opt/csw/etc/amanda/monthlyfull/changerfile
 labelstr MF-.*
 #label_new_tapes PLUTO-%%
 autolabel MF-%%
 tapetype DVD_SIZED_DISK
 logdir /opt/csw/etc/amanda/monthlyfull
 infofile /opt/csw/etc/amanda/monthlyfull/curinfo
 indexdir /opt/csw/etc/amanda/monthlyfull/index
 tapelist /opt/csw/etc/amanda/monthlyfull/tapelist
 #etimeout 600 # number of seconds per filesystem for estimates.
 etimeout 3600 # number of seconds per filesystem for estimates.
 #etimeout -600   # total number of seconds for estimates.
 # a positive number will be multiplied by the number of filesystems on
 # each host; a negative number will be taken as an absolute total time-out.
 # The default is 5 minutes per filesystem.
 #dtimeout 1800# number of idle seconds before a dump is aborted.
 dtimeout 3600# number of idle seconds before a dump is aborted.
 ctimeout 30  # maximum number of seconds that amcheck waits
  # for each client host
 
 
 holdingdisk hd1 {
 directory /random/amandahold/hold
 }
 
 holdingdisk hd2 {
 directory /random1/amanda/holdingdisk2
 }
 
 define dumptype comp-tar {
 program GNUTAR
 compress fast
 index yes
 record yes  # Important! avoid interfering with production runs
 #   tape_splitsize 1 

[Amanda-users] amanda.conf config help

2010-12-15 Thread upengan78
Thanks for your quick advice Brian,

I still want to post my DLEs and their sizes; maybe you or someone can throw out 
additional ideas.

weeklyfull  ~ total size about 58GB

pluto.ece.iit.edu /export/./abc /export {12G 
  comp-tar
  include ./abc
}   -1


pluto.ece.iit.edu /export/./def /export {  4.5GB
  comp-tar
  include ./def
}  -1 


pluto.ece.iit.edu /export/./ghi /export { 1.2GB
  comp-tar
  include ./ghi
}   2


pluto.ece.iit.edu /export/./jkl /export {   1GB
  comp-tar
  include ./jkl
}  -1


#pluto.ece.iit.edu /export/./mno /export {   23GB
#  comp-tar
#  include ./mno
#}   2


pluto.ece.iit.edu /export/./pqr /export {  14GB
  comp-tar
  include ./pqr
}   2

pluto.ece.iit.edu /export/./stu /export {  162 MB
  comp-tar
  include ./stu
}   -1



Monthlyfull ~ total size about 90GB


pluto.ece.iit.edu /export/./vwx /export { 3.3 GB
  comp-tar
  include ./vwx
}  -1


pluto.ece.iit.edu /export/./yz /export {   1.0GB
  comp-tar
  include ./yz
}   -1

pluto.ece.iit.edu /export/OTHER/./a-m /export/OTHER {9GB
  comp-tar
  include ./[a-m]*
}   -1

pluto.ece.iit.edu /export/OTHER/./n-z /export/OTHER {   9GB
  comp-tar
  include ./[n-z]*
}   2

pluto.ece.iit.edu /export/OTHER/./_rest_ /export/OTHER {  9GB
  comp-tar
 exclude append ./[a-z]*
}   2


pluto.ece.iit.edu /export/./jack /export { 2.3GB
  comp-tar
  include ./jack
}   2


pluto.ece.iit.edu /export/./users /export { 24 GB
  comp-tar
  include ./users
}   -1


pluto.ece.iit.edu /export/./egg /export {8GB
  comp-tar
  include ./egg
}   -1


pluto.ece.iit.edu /export/./apple /export {21GB
  comp-tar
  include ./apple
}   2


pluto.ece.iit.edu /export/./vls /export {14GB
  comp-tar
  include ./vls
}   -1


Apart from this,

Total holding space from hd1 and hd2 is 17 GB + 41 GB = 58 GB. Maybe I should 
try to use the flush-threshold-* options if there is nothing else that can be 
adjusted in my case.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amanda.conf config help

2010-12-15 Thread upengan78
Thanks for your quick advice Brian,

I still want to post my DLEs and their sizes; maybe you or someone can throw out 
additional ideas.

weeklyfull  ~ total size about 58GB

client.domain.com /export/./abc /export {12G 
  comp-tar
  include ./abc
}   -1


client.domain.com /export/./def /export {  4.5GB
  comp-tar
  include ./def
}  -1 


client.domain.com /export/./ghi /export { 1.2GB
  comp-tar
  include ./ghi
}   2


client.domain.com /export/./jkl /export {   1GB
  comp-tar
  include ./jkl
}  -1


#client.domain.com /export/./mno /export {   23GB
#  comp-tar
#  include ./mno
#}   2


client.domain.com /export/./pqr /export {  14GB
  comp-tar
  include ./pqr
}   2

client.domain.com /export/./stu /export {  162 MB
  comp-tar
  include ./stu
}   -1



Monthlyfull ~ total size about 90GB


client.domain.com /export/./vwx /export { 3.3 GB
  comp-tar
  include ./vwx
}  -1


client.domain.com /export/./yz /export {   1.0GB
  comp-tar
  include ./yz
}   -1

client.domain.com /export/OTHER/./a-m /export/OTHER {9GB
  comp-tar
  include ./[a-m]*
}   -1

client.domain.com /export/OTHER/./n-z /export/OTHER {   9GB
  comp-tar
  include ./[n-z]*
}   2

client.domain.com /export/OTHER/./_rest_ /export/OTHER {  9GB
  comp-tar
 exclude append ./[a-z]*
}   2


client.domain.com /export/./jack /export { 2.3GB
  comp-tar
  include ./jack
}   2


client.domain.com /export/./users /export { 24 GB
  comp-tar
  include ./users
}   -1


client.domain.com /export/./egg /export {8GB
  comp-tar
  include ./egg
}   -1


client.domain.com /export/./apple /export {21GB
  comp-tar
  include ./apple
}   2


client.domain.com /export/./vls /export {14GB
  comp-tar
  include ./vls
}   -1


Apart from this,

Total holding space from hd1 and hd2 is 17 GB + 41 GB = 58 GB. Maybe I should 
try to use the flush-threshold-* options if there is nothing else that can be 
adjusted in my case.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amanda.conf config help

2010-12-14 Thread upengan78
Thank you Jon and everyone who has helped me understand a lot of things in 
Amanda. I have been able to split DLEs into multiple DLEs for the sake of 
distributing them in a proper schedule. I also tried using inparallel and maxdumps 
in the amanda config and was successful only after configuring correct spindles in 
the DLEs.
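
For reference, the spindle is the trailing number on each DLE; a minimal sketch 
(hostname and path as in my disklist, comments describe standard Amanda behaviour):

client.domain.com /export/./abc /export {
  comp-tar
  include ./abc
} 2
# DLEs on the same host with the same positive spindle number are never dumped
# in parallel; -1 (or no spindle at all) means no restriction.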

And yes, split_diskbuffer is correct. I am using it now; I think I had ignored it 
completely earlier. 

Currently, I have scheduled 2 amanda configurations in cron for amcheck/amdump. 
These jobs are going fine. One is weeklyfull (daily incrementals, 5 days) 
and the other is monthlyfull (daily incrementals). I am thinking about buying 
more storage in order to accommodate the additional DLEs that I have enabled; 
300G is not enough. And yes, the virtual tapes option looks cost effective as well.

I will write more ..

Thanks!

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] A question about how ssh auth works

2010-12-14 Thread k anderson3454
Thanks so much for this information.

+--
|This was sent by k.anderson3...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amanda.conf config help

2010-12-03 Thread upengan78
Ok guys, I got busier with other tasks so I could not reply; sorry about that.

I did manage upgrading to Amanda 3.x version on solaris amanda server and 
solaris amanda client.

I tried amcheck and it worked. So I tried amdump. However, it seems to me I 
really need to adjust the numbers, because I got the email below when I ran the 
dump. Maybe using multiple DLEs is my only option here rather than one.

Here is the Amanda email I got. I don't understand what it is trying to tell me 
to do.


These dumps were to tapes TEST-1, TEST-2.
The next 6 tapes Amanda expects to use are: 6 new tapes.
The next 6 new tapes already labelled are: TEST-3, TEST-4, TEST-5, TEST-6, 
TEST-7, TEST-8
STRANGE DUMP SUMMARY:
  TEST.domain.com /bk/location lev 0  STRANGE (see below)



STATISTICS:
  Total   Full  Incr.
      
Estimate Time (hrs:min) 0:29
Run Time (hrs:min)  2:07
Dump Time (hrs:min) 1:38   1:38   0:00
Output Size (meg)16131.316131.30.0
Original Size (meg)  31043.731043.70.0
Avg Compressed Size (%) 52.0   52.0--
Filesystems Dumped 1  1  0
Avg Dump Rate (k/s)   2795.5 2795.5--

Tape Time (hrs:min) 1:38   1:38   0:00
Tape Size (meg)  16131.316131.30.0
Tape Used (%)  157.5  157.50.0
Filesystems Taped  1  1  0
Parts Taped   33 33  0
Avg Tp Write Rate (k/s)   2795.9 2795.9--

USAGE BY TAPE:
  Label   Time Size  %NbNc
  TEST-1 1:1210485088k  100.0 120
  TEST-2 0:27 6556954k   62.5 013

STRANGE DUMP DETAILS:
  /-- TEST.domain.com /bk/location lev 0 STRANGE
  sendbackup: start [TEST.domain.com:/bk/location level 0]
  sendbackup: info BACKUP=/opt/csw/bin/gtar
  sendbackup: info RECOVER_CMD=/opt/csw/bin/gzip -dc |/opt/csw/bin/gtar -xpGf - 
...
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/filenames.log: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.out: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.out.BAK: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sim: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sp: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sp.cache: 
File removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.st0: File 
removed before we read it
  ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/tclIndex: File 
removed before we read it
  | Total bytes written: 32551710720 (31GiB, 5.3MiB/s)
  sendbackup: size 31788780
  sendbackup: end
  \

NOTES:
  planner: Incremental of TEST.domain.com:/bk/location bumped to level 2.
  planner: TEST.domain.com /bk/location 20101202160419 0 
/opt/csw/libexec/amanda/runtar exited with status 1: see 
/tmp/amanda/client/test/sendsize.20101202160241.debug
  taper: Will request retry of failed split part.
  taper: tape TEST-1 kb 9961472 fm 20 [OK]
  taper: tape TEST-2 kb 6556954 fm 13 [OK]
  big estimate: TEST.domain.com /bk/location 0
  est: 20791072kout 16518426k


DUMP SUMMARY:
DUMPER STATSTAPER STATS
HOSTNAME DISKL  ORIG-kB   OUT-kB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
-- --- -
TEST.ece.ii /bk/location 0 31788780 16518426   52.0   98:29 2795.5  98:28 2795.9

(brought to you by Amanda version 3.1.1)


Here is the amanda.conf

dumpcycle 7
runspercycle 7
tapecycle 29 
runtapes 6
dumpuser amanda
tpchanger chg-disk# a virtual tape changer
tapedev file:/random/amanda/vtapes/test/slots
changerfile /opt/csw/etc/amanda/test/changerfile
labelstr TEST-.*
#label_new_tapes TEST-%%
autolabel TEST-%%
tapetype DVD_SIZED_DISK
logdir /opt/csw/etc/amanda/test
infofile /opt/csw/etc/amanda/test/curinfo
indexdir /opt/csw/etc/amanda/test/index
tapelist /opt/csw/etc/amanda/test/tapelist
#etimeout 600 # number of seconds per filesystem for estimates.
etimeout 3600 # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.
#dtimeout 1800# number of idle seconds before a dump is aborted.
dtimeout 3600# number of idle seconds before a dump is aborted.
ctimeout 30  # maximum number of seconds 

Re: [Amanda-users] amanda.conf config help

2010-12-03 Thread Jon LaBadie
On Fri, Dec 03, 2010 at 09:55:14AM -0500, upengan78 wrote:
 Ok guys, I got busier with other tasks so could not reply. sorry about that.
 
 I did manage upgrading to Amanda 3.x version on solaris amanda server and 
 solaris amanda client.
 
 I tried amcheck and it worked. So I tried amdump. However it seems to me I 
 really need to adjust the numbers because I got below email when I ran the 
 dump. May be using multiple DLEs is my only option rather than one here.
 

Congratulations on a successful amdump!

 Here is the Amanda email I got. I don't understand what it is trying to tell 
 me to do.
 
 
 These dumps were to tapes TEST-1, TEST-2.
 The next 6 tapes Amanda expects to use are: 6 new tapes.
 The next 6 new tapes already labelled are: TEST-3, TEST-4, TEST-5, TEST-6, 
 TEST-7, TEST-8

OK, you allow amanda to use up to six tapes, but it only needed two.

 STRANGE DUMP SUMMARY:
   TEST.domain.com /bk/location lev 0  STRANGE (see below)

Lots of strange messages are quite normal, particularly when
dumping an active/live file system.

 
 STATISTICS:
   Total   Full  Incr.
       
 Estimate Time (hrs:min) 0:29
 Run Time (hrs:min)  2:07
 Dump Time (hrs:min) 1:38   1:38   0:00
 Output Size (meg)16131.316131.30.0
 Original Size (meg)  31043.731043.70.0
 Avg Compressed Size (%) 52.0   52.0--

fairly compressible data.

 Filesystems Dumped 1  1  0
 Avg Dump Rate (k/s)   2795.5 2795.5--
 
 Tape Time (hrs:min) 1:38   1:38   0:00
 Tape Size (meg)  16131.316131.30.0
 Tape Used (%)  157.5  157.50.0
 Filesystems Taped  1  1  0
 Parts Taped   33 33  0

Hmm, a 16.1GB dump taped in 33 parts.  Sounds
like your splits are about 0.5GB.  I wouldn't leave it that
low unless you are really concerned with maximizing each
vtape's filling.

 Avg Tp Write Rate (k/s)   2795.9 2795.9--
 
 USAGE BY TAPE:
   Label   Time Size  %NbNc
   TEST-1 1:1210485088k  100.0 120
   TEST-2 0:27 6556954k   62.5 013
 

Hmm, 20 parts successfully taped on the first (TEST-1) vtape.
So you must be sizing them at about 10GB.

 STRANGE DUMP DETAILS:
   /-- TEST.domain.com /bk/location lev 0 STRANGE
   sendbackup: start [TEST.domain.com:/bk/location level 0]
   sendbackup: info BACKUP=/opt/csw/bin/gtar
   sendbackup: info RECOVER_CMD=/opt/csw/bin/gzip -dc |/opt/csw/bin/gtar -xpGf 
 - ...
   sendbackup: info COMPRESS_SUFFIX=.gz
   sendbackup: info end
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/filenames.log: File 
 removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.out: File 
 removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.out.BAK: 
 File removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sim: File 
 removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sp: File 
 removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.sp.cache: 
 File removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/no_name.st0: File 
 removed before we read it
   ? /opt/csw/bin/gtar: ./fall2010/ece429/albert/cpu32/sue/tclIndex: File 
 removed before we read it
   | Total bytes written: 32551710720 (31GiB, 5.3MiB/s)
   sendbackup: size 31788780
   sendbackup: end
   \

These are strange, not failed.  I.e., tar noted something but
continued to run.  Gtar makes a list of files it will back up
then copies them.  If during the interval between noting a file
to back up and the actual backup, some{one|thing} removes the
file, the above strange messages are produced.  I'm not
concerned about missing backups of things that were just removed;
they were probably some type of temporary file.

Strange messages are also generated about files that change while
being backed up (eg. log files) and the backup of these may be
inaccurate.  Gtar does not back up network sockets nor (I think)
device files.  But it generates unexpected, thus strange, messages.

 
 NOTES:
   planner: Incremental of TEST.domain.com:/bk/location bumped to level 2.
   planner: TEST.domain.com /bk/location 20101202160419 0 
 /opt/csw/libexec/amanda/runtar exited with status 1: see 
 /tmp/amanda/client/test/sendsize.20101202160241.debug
   taper: Will request retry of failed split part.
   taper: tape TEST-1 kb 9961472 fm 20 [OK]

While attempting to put part 21 on the vtape, the end of the
tape was reached.  Thus an error.  But from the line below,
it successfully was sent to the next vtape.

   taper: tape TEST-2 kb 6556954 fm 13 [OK]
   big estimate: 

[Amanda-users] amanda.conf config help

2010-12-03 Thread upengan78
#Congratulations on a successful amdump! 

Wow, tell you what I am so happy to see those words! Thanks much!

By the time I saw your reply, I had impatiently split my single DLE into 9 DLEs, 
the last of which excludes the 8 DLEs above it.

I did see same message again in the amdump summary : 
  taper: Will request retry of failed split part.
  taper: tape TEST-1 kb 9845830 fm 15 [OK]
  taper: tape TEST-2 kb 6661060 fm 7 [OK]

But now that I read your reply, I think it just doesn't mean a lot as long as I 
see OK below that particular message. Here is the full dump summary:
These dumps were to tapes TEST-1, TEST-2.
The next 6 tapes Amanda expects to use are: 6 new tapes.
The next 6 new tapes already labelled are: TEST-3, TEST-4, TEST-5, TEST-6, 
TEST-7, TEST-8


STATISTICS:
  Total   Full  Incr.
      
Estimate Time (hrs:min) 0:10
Run Time (hrs:min)  1:51
Dump Time (hrs:min) 1:38   1:38   0:00
Output Size (meg)16120.016120.00.0
Original Size (meg)  30661.630661.60.0
Avg Compressed Size (%) 52.6   52.6--
Filesystems Dumped 9  9  0
Avg Dump Rate (k/s)   2809.3 2809.3--

Tape Time (hrs:min) 0:17   0:17   0:00
Tape Size (meg)  16120.016120.00.0
Tape Used (%)  157.4  157.40.0
Filesystems Taped  9  9  0
Parts Taped   22 22  0
Avg Tp Write Rate (k/s)  16506.916506.9--

USAGE BY TAPE:
  Label   Time Size  %NbNc
  TEST-1 0:1010485219k  100.0 815
  TEST-2 0:07 6661060k   63.5 2 7

NOTES:
  planner: Adding new disk TEST.domain.com:/bk/location/./apple2007.
  planner: Adding new disk TEST.domain.com:/bk/location/./apple2008.
  planner: Adding new disk TEST.domain.com:/bk/location/./apple2009.
  planner: Adding new disk TEST.domain.com:/bk/location/./apple2010.
  planner: Adding new disk TEST.domain.com:/bk/location/./pear2008.
  planner: Adding new disk TEST.domain.com:/bk/location/./pear2009.
  planner: Adding new disk TEST.domain.com:/bk/location/./pear2010.
  planner: Adding new disk TEST.domain.com:/bk/location/./other.
  planner: Adding new disk TEST.domain.com:/bk/location/./_rest_.
  taper: Will request retry of failed split part.
  taper: tape TEST-1 kb 9845830 fm 15 [OK]
  taper: tape TEST-2 kb 6661060 fm 7 [OK]
  big estimate: TEST.domain.com /bk/location/./apple2007 0
  est: 244128kout 119306k
  big estimate: TEST.domain.com /bk/location/./apple2009 0
  est: 834272kout 521751k
  big estimate: TEST.domain.com /bk/location/./apple2010 0
  est: 3646880kout 2857940k


DUMP SUMMARY:
   DUMPER STATS   TAPER STATS
HOSTNAME DISKL ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s MMM:SSKB/s
-- - --
TEST.domain.co -u/./_rest_ 0 2981200 2155287   72.37:25 4838.8   2:05 
17242.3
TEST.domain.co -./apple2007 0  488140  119306   24.41:40 1188.0   0:06 
19884.3
TEST.domain.co -./apple2008 0 6484400 3803120   58.7   28:08 2253.5   4:23 
14460.5
TEST.domain.co -./apple2009 0 1668440  521751   31.33:17 2642.9   0:29 
17991.4
TEST.domain.co -./apple2010 0 7308970 2857940   39.1   22:30 2117.1   2:46 
17216.5
TEST.domain.co -tu/./other 0 5833440 3828839   65.6   18:01 3542.5   3:44 
17093.0
TEST.domain.co -pear2008 0   88010   46283   52.60:23 2040.3   0:02 23141.5
TEST.domain.co -pear2009 0 4127930 1932585   46.89:16 3475.2   1:53 17102.5
TEST.domain.co -pear2010 0 2416990 1241776   51.47:15 2853.0   1:12 17246.9

(brought to you by Amanda version 3.1.1)


Now I am thinking about removing the runtapes option completely and adding the 
splitdisk_buffer option to the holding disk. I am confused about one thing: why 
you mentioned that my splits look like they are about 0.5GB.  I have actually 
set tape_splitsize = 1Gb, although I do have fallback_splitsize = 512MB. Do you 
mean I should grow this value higher, or were you referring to the 
tape_splitsize value?

Other than this, your explanation was really informative and thanks again for 
that.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda.conf config help

2010-12-03 Thread Brian Cuttler

Despite the multiple DLEs, the wall clock time is not shorter
than the dump time. Check your use of inparallel and maxdumps and
you should be able to get some concurrency; not that running
into the work day is an issue with this config, but it's good
to practice and know about it for later.
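
For this config that would be something like (values only illustrative):

inparallel 4   # how many dumpers the server runs at once
maxdumps 4     # how many simultaneous dumps a single client may run

Spindle numbers in the disklist still serialize DLEs that share a spindle.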

On Fri, Dec 03, 2010 at 04:04:16PM -0500, upengan78 wrote:
 #Congratulations on a successful amdump! 
 
 Wow, tell you what I am so happy to see those words! Thanks much!
 
 By the time I saw your reply, I had impatiently split my single DLE into 9 
 DLEs which also includes the last one for excluding all 8 DLEs on the top.
 
 I did see same message again in the amdump summary : 
   taper: Will request retry of failed split part.
   taper: tape TEST-1 kb 9845830 fm 15 [OK]
   taper: tape TEST-2 kb 6661060 fm 7 [OK]
 
 But now that I read your reply, I think it just doesn't mean a lot as long as 
 I see OK below that particular message. Here is full DUMP summary :
 These dumps were to tapes TEST-1, TEST-2.
 The next 6 tapes Amanda expects to use are: 6 new tapes.
 The next 6 new tapes already labelled are: TEST-3, TEST-4, TEST-5, TEST-6, 
 TEST-7, TEST-8
 
 
 STATISTICS:
   Total   Full  Incr.
       
 Estimate Time (hrs:min) 0:10
 Run Time (hrs:min)  1:51
 Dump Time (hrs:min) 1:38   1:38   0:00
 Output Size (meg)16120.016120.00.0
 Original Size (meg)  30661.630661.60.0
 Avg Compressed Size (%) 52.6   52.6--
 Filesystems Dumped 9  9  0
 Avg Dump Rate (k/s)   2809.3 2809.3--
 
 Tape Time (hrs:min) 0:17   0:17   0:00
 Tape Size (meg)  16120.016120.00.0
 Tape Used (%)  157.4  157.40.0
 Filesystems Taped  9  9  0
 Parts Taped   22 22  0
 Avg Tp Write Rate (k/s)  16506.916506.9--
 
 USAGE BY TAPE:
   Label   Time Size  %NbNc
   TEST-1 0:1010485219k  100.0 815
   TEST-2 0:07 6661060k   63.5 2 7
 
 NOTES:
   planner: Adding new disk TEST.domain.com:/bk/location/./apple2007.
   planner: Adding new disk TEST.domain.com:/bk/location/./apple2008.
   planner: Adding new disk TEST.domain.com:/bk/location/./apple2009.
   planner: Adding new disk TEST.domain.com:/bk/location/./apple2010.
   planner: Adding new disk TEST.domain.com:/bk/location/./pear2008.
   planner: Adding new disk TEST.domain.com:/bk/location/./pear2009.
   planner: Adding new disk TEST.domain.com:/bk/location/./pear2010.
   planner: Adding new disk TEST.domain.com:/bk/location/./other.
   planner: Adding new disk TEST.domain.com:/bk/location/./_rest_.
   taper: Will request retry of failed split part.
   taper: tape TEST-1 kb 9845830 fm 15 [OK]
   taper: tape TEST-2 kb 6661060 fm 7 [OK]
   big estimate: TEST.domain.com /bk/location/./apple2007 0
   est: 244128kout 119306k
   big estimate: TEST.domain.com /bk/location/./apple2009 0
   est: 834272kout 521751k
   big estimate: TEST.domain.com /bk/location/./apple2010 0
   est: 3646880kout 2857940k
 
 
 DUMP SUMMARY:
DUMPER STATS   TAPER STATS
 HOSTNAME DISKL ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s MMM:SS
 KB/s
 -- - 
 --
 TEST.domain.co -u/./_rest_ 0 2981200 2155287   72.37:25 4838.8   2:05 
 17242.3
 TEST.domain.co -./apple2007 0  488140  119306   24.41:40 1188.0   0:06 
 19884.3
 TEST.domain.co -./apple2008 0 6484400 3803120   58.7   28:08 2253.5   4:23 
 14460.5
 TEST.domain.co -./apple2009 0 1668440  521751   31.33:17 2642.9   0:29 
 17991.4
 TEST.domain.co -./apple2010 0 7308970 2857940   39.1   22:30 2117.1   2:46 
 17216.5
 TEST.domain.co -tu/./other 0 5833440 3828839   65.6   18:01 3542.5   3:44 
 17093.0
 TEST.domain.co -pear2008 0   88010   46283   52.60:23 2040.3   0:02 
 23141.5
 TEST.domain.co -pear2009 0 4127930 1932585   46.89:16 3475.2   1:53 
 17102.5
 TEST.domain.co -pear2010 0 2416990 1241776   51.47:15 2853.0   1:12 
 17246.9
 
 (brought to you by Amanda version 3.1.1)
 
 
 Now I am thinking about, removing runtapes option completely, and add 
 splitdisk_buffer option to the holding disk, I am confused about one thing, 
 why you mentioned that looks like your splits are about 0.5GB.  I have 
 actually set tape_splitsize = 1Gb although I do have fallback_splitsize = 
 512MB . Do you mean I should grow this value to a higher value or you said 
 that about tape_splitsize value?
 
 Other than this, your explanation was really informative and thanks again for 
 that.
 
 +--
 |This was sent by upendra.gan...@gmail.com via 

[Amanda-users] amanda.conf config help

2010-12-03 Thread upengan78
Oh, it looks like when I add splitdisk_buffer, it shows me the below error on 
amcheck:

amcheck test
/opt/csw/etc/amanda/test/amanda.conf, line 43: dumptype parameter expected
/opt/csw/etc/amanda/test/amanda.conf, line 43: end of line is expected
amcheck: errors processing config file


Relevant portion from amanda.conf

holdingdisk hd1 {
directory /random/amandahold/test
}

define dumptype comp-tar {
program GNUTAR
compress fast
index yes
record yes  # Important! avoid interfering with production runs
   tape_splitsize 1 Gb
   #fallback_splitsize 512 MB
   fallback_splitsize 2048 MB
   splitdisk_buffer /random/amandahold/test
}

define tapetype DVD_SIZED_DISK {
filemark 1 KB
length 10240 MB
}

I should have mentioned that the Amanda server has 16GB of memory; do you still 
recommend splitdisk_buffer?

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amanda.conf config help

2010-12-03 Thread upengan78
Actually, the reason I decided to split the DLE into multiple DLEs is that I 
thought tar/gzipping 31GB of data in one go might cause lower I/O speed, and I 
also feared it would use more server resources. That said, I am actually 
looking to make the best use of the 16GB of memory on the amanda server and the 
8GB of memory on the amanda client.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda.conf config help

2010-12-03 Thread Thomas Marko
Hi!

 amcheck test
 /opt/csw/etc/amanda/test/amanda.conf, line 43: dumptype parameter expected
 /opt/csw/etc/amanda/test/amanda.conf, line 43: end of line is expected
 amcheck: errors processing config file

I think you mean the split_diskbuffer parameter:

See example amanda.conf:

# split_diskbuffer - (optional) When dumping a split dump  in  PORT-WRITE
# mode (usually meaning no holding disk), buffer the split
# chunks to a file in the directory specified by this
option.
# Default: [none]

Cheers,
Thomas
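
Applying that to the dumptype from the earlier message, the corrected block would 
read roughly (same values, only the parameter name fixed):

define dumptype comp-tar {
    program GNUTAR
    compress fast
    index yes
    record yes  # Important! avoid interfering with production runs
    tape_splitsize 1 Gb
    fallback_splitsize 2048 MB
    split_diskbuffer /random/amandahold/test   # split_diskbuffer, not splitdisk_buffer
}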


Re: [Amanda-users] amanda.conf config help

2010-12-03 Thread Jon LaBadie
On Fri, Dec 03, 2010 at 04:29:59PM -0500, upengan78 wrote:
 Oh, it looks like when I add splitdisk_buffer, it shows me below error on 
 amcheck,
 
 amcheck test
 /opt/csw/etc/amanda/test/amanda.conf, line 43: dumptype parameter expected
 /opt/csw/etc/amanda/test/amanda.conf, line 43: end of line is expected
 amcheck: errors processing config file

spelling error split_diskbuffer != splitdisk_buffer 

 
 Relevant portion from amanda.conf
 
 holdingdisk hd1 {
 directory /random/amandahold/test
 }
 
 define dumptype comp-tar {
 program GNUTAR
 compress fast
 index yes
 record yes  # Important! avoid interfering with production runs
tape_splitsize 1 Gb
#fallback_splitsize 512 MB
fallback_splitsize 2048 MB
splitdisk_buffer /random/amandahold/test
 }
 
 define tapetype DVD_SIZED_DISK {
 filemark 1 KB
 length 10240 MB
 }
 
 I should have mentioned, Amanda server has 16GB of memory, you still 
 recommend splitdisk_buffer?
 
 +--
 |This was sent by upendra.gan...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
 
 End of included message 

-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: [Amanda-users] amanda.conf config help

2010-12-03 Thread Jon LaBadie
On Fri, Dec 03, 2010 at 04:04:16PM -0500, upengan78 wrote:
 
 Now I am thinking about, removing runtapes option completely, and add 
 splitdisk_buffer option to the holding disk, I am confused about one thing, 
 why you mentioned that looks like your splits are about 0.5GB.  I have 
 actually set tape_splitsize = 1Gb although I do have fallback_splitsize = 
 512MB . Do you mean I should grow this value to a higher value or you said 
 that about tape_splitsize value?
 

At the time I wrote that I saw you had backed up and taped
16GB of data and they were taped in 33 parts.  16GB / 33
is about 0.5GB/part.

Your amdump run used the fallback_splitsize because you
had not specified a split_diskbuffer.  If one had been
specified, data would move from holding disk to the
split_diskbuffer creating parts of tape_splitsize.
If a part failed, such as at the end of a tape, then
the part would be retried on the next tape (if available).

Without a split_diskbuffer the parts have to be created
in memory.  In this situation the parts are sized to the
fallback_splitsize rather than tape_splitsize.  Typically
this is smaller than tape_splitsize to avoid using too much
memory.  As you had no split_diskbuffer, the fallback_splitsize
(0.5GB) was used as the part size.

-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] amanda.conf config help

2010-11-29 Thread upengan78
Sorry for being late to reply; I was away from the computer for the last 5 days.


Has this system successfully passed an amcheck configname?

Yes, It works fine always. No errors found.

Here is the o/p 

Holding disk /random//amanda/amandahold/test: 3881997 kB disk space available, 
using 3881997 kB
slot 1: read label `TEST-1', date `20101124'
cannot overwrite active tape TEST-1
slot 2: read label `TEST-2', date `X'
NOTE: skipping tape-writable test
Tape TEST-2 label ok
Server check took 0.815 seconds

Amanda Backup Client Hosts Check

Client check: 1 host checked in 0.536 seconds, 0 problems found

(brought to you by Amanda 2.5.2p1)


@ Jon,

It doesn't show me 3.1.1 version available in CSW which I am using here. See 
below,

/opt/csw/bin/pkgutil -c amanda
Checking catalog integrity with gpg.
gpg: Signature made Sun Oct 17 21:35:22 2010 CDT using DSA key ID A1999E90
gpg: Good signature from Blastwave Software (Blastwave.org Inc.) 
softw...@blastwave.org
package   installed catalog
CSWamanda 2.5.2p1,REV=2008.05.21SAME

Is openCSW for opensolaris or something? Of course I will google it now :)

Also, I am going to try using splitsize parameter and try again now. (5GB)

Thanks

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda.conf config help

2010-11-29 Thread Jon LaBadie
On Mon, Nov 29, 2010 at 10:08:04AM -0500, upengan78 wrote:
 
 @ Jon,
 
 It doesn't show me 3.1.1 version available in CSW which I am using here. See 
 below,
 
 /opt/csw/bin/pkgutil -c amanda
 Checking catalog integrity with gpg.
 gpg: Signature made Sun Oct 17 21:35:22 2010 CDT using DSA key ID A1999E90
 gpg: Good signature from Blastwave Software (Blastwave.org Inc.) 
 softw...@blastwave.org
 package   installed catalog
 CSWamanda 2.5.2p1,REV=2008.05.21SAME
 
 Is openCSW for opensolaris or something, ofcourse I will google it now :)

I don't know the details, but there was a disagreement among
the owner of the blastwave name and some (all?) of the developers
who maintained the packages.  This eventually caused a fork of
the catalog with the new one being called openCSW.

It seemed to me that openCSW is now more complete and better
maintained than blastwave.  I switched earlier this year.
I think packages from the two sites are supposed to be
?interchangeable? (i.e. you should be able to pick and
choose from either site).  I've not encountered any
problems caused by my switching sites, but if I were
switching a production system I'd make sure all pkgs
were from the same site.
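
If it helps, once pkgutil is pointed at an openCSW mirror (mirror setup not 
shown here), the usual sequence is roughly:

pkgutil -U            # refresh the catalog
pkgutil -a amanda     # show the version the catalog offers
pkgutil -i amanda     # install/upgrade to that version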

jl
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


[Amanda-users] amanda.conf config help

2010-11-29 Thread upengan78
Sorry for the double reply. I pressed the back button on Midori and the forum 
reposted/resent my earlier reply.

Thanks for that info, JL. I was completely unaware of OpenCSW. I will try to 
use OpenCSW and see if I can get Amanda to 3.1.1 and proceed from there, which 
will be the best thing to do. Anyhow, I don't think things have changed a lot 
between amanda versions as far as parameters are concerned, so this discussion 
is still helping me. 

I will try amanda 3.1.1 and let you know how that works. Thanks much!

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




[Amanda-users] amanda.conf config help

2010-11-24 Thread upengan78
I have made some changes to amanda.conf. First I changed the disk size to 
10GB, then the number of vtapes/slots to 29, and also added runtapes = 6.

My amanda.conf

dumpcycle 7
runspercycle 7
tapecycle 29
runtapes 6
dumpuser amanda
tpchanger chg-disk# a virtual tape changer
tapedev file:/random/amanda/vtapes/test/slots
changerfile /opt/csw/etc/amanda/test/changerfile
labelstr TEST-.*
#label_new_tapes TEST-%%
#autolabel TEST-%%
tapetype DVD_SIZED_DISK
logdir /opt/csw/etc/amanda/test
infofile /opt/csw/etc/amanda/test/curinfo
indexdir /opt/csw/etc/amanda/test/index
tapelist /opt/csw/etc/amanda/test/tapelist
#etimeout 600 # number of seconds per filesystem for estimates.
etimeout 3600 # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.
#dtimeout 1800# number of idle seconds before a dump is aborted.
dtimeout 3600# number of idle seconds before a dump is aborted.
ctimeout 30  # maximum number of seconds that amcheck waits
 # for each client host


holdingdisk hd1 {
directory /another drive/amanda/amandahold/test
}

define dumptype comp-tar {
program GNUTAR
compress fast
index yes
record yes  # Important! avoid interfering with production runs
}

define tapetype DVD_SIZED_DISK {
filemark 1 KB
length 10240 MB
}

I am still confused about spanning, because 
http://wiki.zmanda.com/index.php/How_To:Split_Dumps_Across_Tapes mentions that 
spanning is automatically enabled with vtapes. Does that apply to a specific 
version or to all versions, and is there any command to verify whether it is 
enabled or not? Copying the relevant portion from the wiki below.

Disk Backups (Vtapes)

For vtapes, spanning is automatically enabled, as the VFS device supports LEOM. 
You can add a part_size if you'd like to split dumps into smaller parts; 
otherwise, Amanda will just fill each vtape with a single part before moving on 
to the next vtape.
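
Presumably that would be something like the following in the tapetype (I'm not 
sure that option even exists in my 2.5.2p1, so this is only a guess):

define tapetype DVD_SIZED_DISK {
    filemark 1 KB
    length 10240 MB
    part_size 1 GB    # optional; without it each vtape holds a single part
}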

Thanks for continuing to help, guys. I appreciate it, and it is wonderful to see 
how much deep knowledge people have about backups, and how happily they share it. 
And I do understand there has to be offsite storage; I know pipes can burst, and 
it has happened before. These things I plan to consider once I am comfortable 
with the setup. :)

Can anyone advise on spanning? How do I do it with vtapes?

Thanks

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda.conf config help

2010-11-24 Thread Brian Cuttler

Nothing strikes me as wrong with your new config but...

I'm sorry, I can't actually comment on spanning, never
configured it on any of my amanda servers.

Just for the sake of argument I'm going to include an
extract of my disklist file.

Originally we backed up a 1Tbyte drive, this was impractical,
as much for wall clock time as for any other reason.

Because we are no longer backing up the 'partition' we can't
use dump, the DLE specifies tar. I broke the partition into
two main pieces; with planning and a little luck the directories
that begin with letters [A-Q] will contain about the same amount
of data as the directories beginning with R to Z.

Because _users_ have access on this file system to create new
directories at will, I also included a DLE that would pick up
any directories or files beginning with lower-case letters.
These I allow to back up all at once since there currently
aren't any, i.e., to avoid missing them by oversight.

trel   /Users   comp-user-tar
#trel   /treluser-tar
trel   /trelAQ /trel   {
comp-server-user-tar
include ./[A-Q]*
}

trel   /trelRZ /trel   {
comp-server-user-tar
include ./[R-Z]*
}

trel   /trelaz /trel   {
user-tar
include ./[a-z]*
}


On Wed, Nov 24, 2010 at 11:37:31AM -0500, upengan78 wrote:
 I have made some changed to amanda.conf. First I have changed disk size to 
 10GB, then number of Vtapes/slots = 29 and also added runtapes = 6
 
 My amanda.conf
 
 dumpcycle 7
 runspercycle 7
 tapecycle 29
 runtapes 6
 dumpuser amanda
 tpchanger chg-disk# a virtual tape changer
 tapedev file:/random/amanda/vtapes/test/slots
 changerfile /opt/csw/etc/amanda/test/changerfile
 labelstr TEST-.*
 #label_new_tapes TEST-%%
 #autolabel TEST-%%
 tapetype DVD_SIZED_DISK
 logdir /opt/csw/etc/amanda/test
 infofile /opt/csw/etc/amanda/test/curinfo
 indexdir /opt/csw/etc/amanda/test/index
 tapelist /opt/csw/etc/amanda/test/tapelist
 #etimeout 600 # number of seconds per filesystem for estimates.
 etimeout 3600 # number of seconds per filesystem for estimates.
 #etimeout -600   # total number of seconds for estimates.
 # a positive number will be multiplied by the number of filesystems on
 # each host; a negative number will be taken as an absolute total time-out.
 # The default is 5 minutes per filesystem.
 #dtimeout 1800# number of idle seconds before a dump is aborted.
 dtimeout 3600# number of idle seconds before a dump is aborted.
 ctimeout 30  # maximum number of seconds that amcheck waits
  # for each client host
 
 
 holdingdisk hd1 {
 directory /another drive/amanda/amandahold/test
 }
 
 define dumptype comp-tar {
 program GNUTAR
 compress fast
 index yes
 record yes  # Important! avoid interfering with production runs
 }
 
 define tapetype DVD_SIZED_DISK {
 filemark 1 KB
 length 10240 MB
 }
 
 I am still confused about spanning because 
 http://wiki.zmanda.com/index.php/How_To:Split_Dumps_Across_Tapes mentions 
 that spanning is automatically enabled in Vtapes. Now is that for specific 
 version that means or all versions and is there any command to verify if it 
 is enabled or not? Copying the relevent portion from wiki below.
 
 Disk Backups (Vtapes)
 
 For vtapes, spanning is automatically enabled, as the VFS device supports 
 LEOM. You can add a part_size if you'd like to split dumps into smaller 
 parts; otherwise, Amanda will just fill each vtape with a single part before 
 moving on to the next vtape.
 
 Thanks for continuing to help guys. Appreciate it and it is wonderful to see 
 how much deep knowledge people have about backups and sharing it happily. 
 And, I do understand there has to be offsite storage as I know pipes can 
 burst and it has happened before. These things I plan to consider once I am 
 confortable with setup. :)
 
 Can anyone advise on spanning ?  how to do it in Vtapes?
 
 Thanks
 
 +--
 |This was sent by upendra.gan...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
---
   Brian R Cuttler brian.cutt...@wadsworth.org
   Computer Systems Support(v) 518 486-1697
   Wadsworth Center(f) 518 473-6384
   NYS Department of HealthHelp Desk 518 473-0773







[Amanda-users] amanda.conf config help

2010-11-24 Thread upengan78
Hello Brian,

Thanks again. I just ran 'amdump test' and it looks like there is some issue 
that I am not able to understand. Here is the email from Amanda.


These dumps were to tape TEST-1.
The next 6 tapes Amanda expects to use are: 6 new tapes.
The next 6 new tapes already labelled are: TEST-2, TEST-3, TEST-4, TEST-5, 
TEST-6, TEST-7.

FAILURE AND STRANGE DUMP SUMMARY:
  TEST.domain.com  bk/location  lev 0  FAILED [dump larger than available tape 
space, 30927955 KB, incremental dump also larger than tape]
  planner: FATAL cannot fit anything on tape, bailing out


STATISTICS:
  Total   Full  Incr.
      
Estimate Time (hrs:min)0:00
Run Time (hrs:min) 1:28
Dump Time (hrs:min)0:00   0:00   0:00
Output Size (meg)   0.00.00.0
Original Size (meg) 0.00.00.0
Avg Compressed Size (%) -- -- -- 
Filesystems Dumped0  0  0
Avg Dump Rate (k/s) -- -- -- 

Tape Time (hrs:min)0:00   0:00   0:00
Tape Size (meg) 0.00.00.0
Tape Used (%)   0.00.00.0
Filesystems Taped 0  0  0

Chunks Taped  0  0  0
Avg Tp Write Rate (k/s) -- -- -- 

USAGE BY TAPE:
  Label Time  Size  %NbNc
  TEST-1   0:000k0.0 0 0


NOTES:
  planner: disk TEST.domain.com:bk/location, full dump (30927955KB) will be 
larger than available tape space, you could define a splitsize
  driver: WARNING: got empty schedule from planner
  taper: tape TEST-1 kb 0 fm 0 [OK]


DUMP SUMMARY:
   DUMPER STATS   TAPER STATS 
HOSTNAME DISKL ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
-- - -
TEST.domain.com bk/location 0 FAILED 



Here is the amdump.1

amdump: start at Wed Nov 24 11:46:40 CST 2010
amdump: datestamp 20101124
amdump: starttime 20101124114640
driver: pid 9565 executable /opt/csw/libexec/driver version 2.5.2p1
planner: pid 9564 executable /opt/csw/libexec/planner version 2.5.2p1
planner: build: VERSION=Amanda-2.5.2p1
planner:BUILT_DATE=Wed May 21 15:37:10 EDT 2008
planner:BUILT_MACH=SunOS ra 5.8 Generic_117350-53 sun4u sparc 
SUNW,Sun-Blade-1000
planner:CC=cc
planner:CONFIGURE_COMMAND='./configure' '--prefix=/opt/csw' 
'--infodir=/opt/csw/share/info' '--mandir=/opt/csw/share/man' 
'--with-dumperdir=/opt/csw/lib/amanda/dumper' '--with-index-server=localhost' 
'--with-gnutar=/opt/csw/bin/gtar' '--with-maxtapeblocksize=512' 
'--with-user=amanda' '--with-group=sys' '--with-amandahosts' 
'--disable-libtool' '--disable-shared' '--disable-static'
planner: paths: bindir=/opt/csw/bin sbindir=/opt/csw/sbin
planner:libexecdir=/opt/csw/libexec mandir=/opt/csw/share/man
planner:AMANDA_TMPDIR=/tmp/amanda AMANDA_DBGDIR=/tmp/amanda
planner:CONFIG_DIR=/opt/csw/etc/amanda DEV_PREFIX=/dev/dsk/
planner:RDEV_PREFIX=/dev/rdsk/ DUMP=/usr/sbin/ufsdump
planner:RESTORE=/usr/sbin/ufsrestore VDUMP=UNDEF VRESTORE=UNDEF
planner:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
planner:SAMBA_CLIENT=UNDEF GNUTAR=/opt/csw/bin/gtar
planner:COMPRESS_PATH=/opt/csw/bin/gzip
planner:UNCOMPRESS_PATH=/opt/csw/bin/gzip
planner:LPRCMD=/opt/csw/bin/lpr MAILER=/usr/bin/mailx
planner:listed_incr_dir=/opt/csw/var/amanda/gnutar-lists
planner: defs:  DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
planner:DEFAULT_TAPE_SERVER=localhost HAVE_MMAP NEED_STRSTR
planner:HAVE_SYSVSHM LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
planner:AMANDA_DEBUG_DAYS=4 BSD_SECURITY RSH_SECURITY USE_AMANDAHOSTS
planner:CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
planner:COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
planner:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
READING CONF FILES...
planner: timestamp 20101124
planner: time 0.033: startup took 0.033 secs

SENDING FLUSHES...
ENDFLUSH

SETTING UP FOR ESTIMATES...
planner: time 0.033: setting up estimates for TEST.domain.com:bk/location
driver: tape size 10485760
driver: adding holding disk 0 dir /amandaservername/amanda/amandahold/test size 
3881997 chunksize 1048576
TEST.domain.com:bk/location overdue 14931 days for level 0
setup_estimate: TEST.domain.com:bk/location: command 0, options: none
last_level 0 next_level0 -14931 level_days 0    getting estimates 0 (-2) 1 (-2) -1 (-2)
planner: time 0.037: setting up estimates took 0.004 secs

GETTING ESTIMATES...
reserving 3881997 out of 3881997 for degraded-mode dumps
driver: send-cmd time 0.102 to taper: START-TAPER 20101124
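
(A side note on the "overdue 14931 days" line above, since it looks alarming:
when curinfo has no record of a previous level 0 for a DLE, the planner counts
the overdue time from the Unix epoch, so roughly

    2010-11-24 - 14931 days  ~=  1970-01-01

i.e. it simply means this disk has never had a recorded full dump, which is
expected on a first run.)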

Re: [Amanda-users] amanda.conf config help

2010-11-24 Thread Brian Cuttler
On Wed, Nov 24, 2010 at 02:22:52PM -0500, upengan78 wrote:
 Hello Brian,
 
 Thanks again. I just ran an amdump test and it looks like there is some issue
 that I am not able to understand. Here is the email from the Amanda admin.
 
 
 These dumps were to tape TEST-1.
 The next 6 tapes Amanda expects to use are: 6 new tapes.
 The next 6 new tapes already labelled are: TEST-2, TEST-3, TEST-4, TEST-5, 
 TEST-6, TEST-7.
 
 FAILURE AND STRANGE DUMP SUMMARY:
   TEST.domain.com  bk/location  lev 0  FAILED [dump larger than available 
 tape space, 30927955 KB, incremental dump also larger than tape]
   planner: FATAL cannot fit anything on tape, bailing out

When you increased the size of your vtapes, did you also increase the length
in your tapetype definition?

amanda seems to think that vtape_length * 6 < estimated_dump_size
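
(To put numbers on that, using the figures from the report above and the
10240 MB length quoted later in the thread, so treat them as illustrative:

    level 0 estimate   30927955 KB  ~= 29.5 GB
    one vtape          10240 MB      = 10 GB
    runtapes 6         6 x 10 GB     = 60 GB total

The combined capacity is enough, but unless dump splitting is configured a
single dump still has to fit on one vtape, so a ~29.5 GB level 0 can never
land on a 10 GB vtape and the planner bails out. The fix is either a per-vtape
length larger than the compressed dump, or the splitsize route discussed
further down the thread.)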







[Amanda-users] amanda.conf config help

2010-11-24 Thread upengan78
Hi,

I just had to create those slot directories (29), and then I edited amanda.conf
and changed the length to "length 10240 MB" in the tapetype definition. After
that I ran amlabel to label the 29 slots, and then amcheck test, which did not
find any problems.
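
(For anyone repeating that step, a minimal sketch of it, assuming the chg-disk
slotN directory layout under the file: tapedev quoted later in the thread and
the TEST-n labels from the report; run it as the amanda user and adjust paths,
count and labels to your own setup:

    cd /random/amanda/vtapes/test/slots
    i=1
    while [ "$i" -le 29 ]; do
        mkdir -p slot$i                  # chg-disk expects slot1 ... slot29
        amlabel test TEST-$i slot $i     # label the vtape loaded in that slot
        i=`expr $i + 1`
    done

Plain Bourne shell, so it also runs on the Solaris hosts in this thread.)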

The only thing I had not done was restart the Amanda server processes. I wonder
if that plays any role here. I am trying amdump again and will keep you posted.

Good day.

+--
|This was sent by upendra.gan...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Amanda-users] amanda.conf config help

2010-11-24 Thread Jon LaBadie
On Wed, Nov 24, 2010 at 11:37:31AM -0500, upengan78 wrote:
 I have made some changes to amanda.conf. First I changed the disk size to
 10 GB, then the number of vtapes/slots to 29, and also added runtapes = 6.
 
 My amanda.conf
 
 dumpcycle 7
 runspercycle 7
 tapecycle 29
 runtapes 6
 dumpuser amanda
 tpchanger chg-disk   # a virtual tape changer
 tapedev file:/random/amanda/vtapes/test/slots
 changerfile /opt/csw/etc/amanda/test/changerfile
 labelstr TEST-.*
 #label_new_tapes TEST-%%
 #autolabel TEST-%%
 tapetype DVD_SIZED_DISK
 logdir /opt/csw/etc/amanda/test
 infofile /opt/csw/etc/amanda/test/curinfo
 indexdir /opt/csw/etc/amanda/test/index
 tapelist /opt/csw/etc/amanda/test/tapelist
 #etimeout 600 # number of seconds per filesystem for estimates.
 etimeout 3600 # number of seconds per filesystem for estimates.
 #etimeout -600   # total number of seconds for estimates.
 # a positive number will be multiplied by the number of filesystems on
 # each host; a negative number will be taken as an absolute total time-out.
 # The default is 5 minutes per filesystem.
 #dtimeout 1800# number of idle seconds before a dump is aborted.
 dtimeout 3600# number of idle seconds before a dump is aborted.
 ctimeout 30  # maximum number of seconds that amcheck waits
  # for each client host
 
 
 holdingdisk hd1 {
 directory /another drive/amanda/amandahold/test
 }
 
 define dumptype comp-tar {
 program GNUTAR
 compress fast
 index yes
 record yes  # Important! avoid interfering with production runs
 }
 
 define tapetype DVD_SIZED_DISK {
 filemark 1 KB
 length 10240 MB
 }
 
 I am still confused about spanning, because
 http://wiki.zmanda.com/index.php/How_To:Split_Dumps_Across_Tapes mentions that
 spanning is automatically enabled with vtapes. Does that apply only to a
 specific version or to all versions, and is there any command to verify
 whether it is enabled? Copying the relevant portion from the wiki below.
 
 Disk Backups (Vtapes)
 
 For vtapes, spanning is automatically enabled, as the VFS device supports 
 LEOM. You can add a part_size if you'd like to split dumps into smaller 
 parts; otherwise, Amanda will just fill each vtape with a single part before 
 moving on to the next vtape.
 
From the same document:

  In Amanda earlier than Amanda-3.2, this was done with some dumptype parameters.
  For these versions, see How To:Split Dumps Across Tapes (Amanda-3.1 and earlier).

You said you are using amanda version 2.5.x.  Clearly that is earlier than 3.2.
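
(For the 2.5.x route, that older wiki page drives splitting from the dumptype
rather than the tapetype. A minimal sketch, assuming the pre-3.2 tape_splitsize
parameter it describes; the part size is illustrative and the dumptype name is
made up here:

    define dumptype comp-tar-split {
        comp-tar                  # inherit program/compress/index from the
                                  # comp-tar dumptype quoted above
        tape_splitsize 1024 MB    # write the dump in ~1 GB parts so one DLE
                                  # can span several 10 GB vtapes
    }

Point the DLE at comp-tar-split in the disklist and the ~30 GB level 0 no
longer has to fit on a single vtape. On Amanda 3.2 and later this moves to the
tapetype/device side (part_size), which is why the newer wiki text quoted above
says spanning is automatic for vtapes.)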

-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: [Amanda-users] amanda.conf config help

2010-11-24 Thread Jon LaBadie
On Tue, Nov 23, 2010 at 10:27:15AM -0500, upengan78 wrote:
 Hi,
 
 Using Amanda 2.5.2p1,REV=2008.05.21 on Solaris 10, and the
 amanda client on a Solaris 8 box which I believe has the same
 version of Amanda.
 

OpenCSW shows amanda version 3.1.1.  Why are you using
such an old version?
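
(A quick way to confirm what is actually installed on each host; the package
names depend on the CSW build, so the grep is deliberately loose:

    pkginfo | grep -i amanda         # what Solaris thinks is installed
    amadmin test version | head -3   # what the Amanda tools themselves report
)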

Jon
-- 
Jon H. LaBadie  j...@jgcomp.com
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)

