RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall
Hi

unfortunately zfsdump, or zfs send as it is now, does not relate to
ufsdump in any way :-(


From man zfs 

zfs send [-i snapshot1] snapshot2
 Creates a stream representation of snapshot2, which is
 written to standard output. The output can be redirected
 to a file or to a different machine (for example, using
 ssh(1)). By default, a full stream is generated.

 -i snapshot1    Generate an incremental stream from
                 snapshot1 to snapshot2. The incremental
                 source snapshot1 can be specified as the
                 last component of the snapshot name (for
                 example, the part after the @), and it
                 will be assumed to be from the same file
                 system as snapshot2.

 The format of the stream is evolving. No backwards
 compatibility is guaranteed. You may not be able to receive
 your streams on future versions of ZFS.

I wrote a script to use this but I had a problem getting estimates for
the incremental snapshots.

I can not see how amrecover would be able to restore from the
snapshot, as it does not know the format used. In fact there is no way
that I know of to extract a file from the snapshot short of recovering
the whole snapshot. This is probably not too big an issue, as the
tape backup is only needed for disaster recovery and snapshots can be
used for file recovery.

amrestore could be used with zfs receive to recover the snapshot.
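
Something along these lines should work (an untested sketch; the tape
device, host and filesystem names are placeholders, and the target
filesystem must not already exist when receiving a full stream):

amrestore -p /dev/rmt/0n zfshost pool/home | zfs receive pool/home.restored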

One of the properties of zfs is that it encourages the use of a
filesystem for a logical set of files, e.g. a user home directory, a
software package etc.
This means that every time you create a new filesystem you need to
create a new DLE for amanda. In fact, creating the amanda DLE takes
longer than creating the zfs filesystem.

You can not just use tar to dump multiple zfs filesystems because
amanda tells tar not to cross filesystem boundaries.

You could probably write a wrapper around tar that removes the
--one-file-system option to get around this limitation.
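
A minimal sketch of such a wrapper (untested; the path to the real GNU
tar is an assumption, and amanda would need to be pointed at the
wrapper instead of gtar):

#!/bin/ksh
# Hypothetical wrapper: drop the --one-file-system flag amanda passes
# to GNU tar so that one DLE can span several zfs filesystems.
REALTAR=/usr/sfw/bin/gtar    # assumed location of the real GNU tar

# Rebuild "$@" without the unwanted flag, preserving argument order.
i=0; n=$#
while [ $i -lt $n ]; do
  a=$1; shift
  [ "$a" != "--one-file-system" ] && set -- "$@" "$a"
  i=$((i+1))
done

exec "$REALTAR" "$@"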

  

Anthony Worrall

 
 -Original Message-
 From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
 On Behalf Of Chris Hoogendyk
 Sent: 25 April 2008 13:39
 To: Nick Smith
 Cc: amanda-users@amanda.org
 Subject: Re: Amanda and ZFS
 
 
 
 Nick Smith wrote:
  Dear Amanda Administrators.
 
 What dump configuration would you suggest for backing up a ZFS pool
 of about 300GB? Within the pool there are several smaller
 'filesystems'.
 
 Would you:

 1.  Use a script to implement ZFS snapshots and send these to the
 server as the DLE?
 2.  Use tar to backup the filesystems? We do not make much use of
 ACLs, so tar's lack of ACL support shouldn't be an issue?
 3.  Something else?
 
 Question: If I use 2, can I still use 'amrecover'? AFAIK that would
 be the case if I went with 1??

 The host is a Sun Solaris 10 x86 box, if that is pertinent.
 
 
 I'm not on Solaris 10 yet, and haven't used ZFS, but . . .
 
 I understand that with ZFS you have zfsdump (just as with ufs I have
 ufsdump). So you could use zfsdump with snapshots. I'm guessing it
 wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that
 uses ufsdump with snapshots and is documented here:

 http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example
 
 If you have that pool logically broken up into a number of smaller
 pieces that can be snapshotted and dumped, it will make it smoother
 for Amanda's planner to distribute the load over the dump cycle.
 
 Shouldn't have any problems with amrecover.
 
 
 
 ---
 
 Chris Hoogendyk
 
 -
 O__   Systems Administrator
c/ /'_ --- Biology Department
   (*) \(*) -- 140 Morrill Science Center
 ~~ - University of Massachusetts, Amherst
 
 [EMAIL PROTECTED]
 
 ---


RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall

neat

 -Original Message-
 From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
 On Behalf Of Jon LaBadie
 Sent: 25 April 2008 16:00
 To: amanda-users@amanda.org
 Subject: Re: Amanda and ZFS
 
 On Fri, Apr 25, 2008 at 02:32:27PM +0100, Anthony   Worrall wrote:
  Hi
 
  unfortunately zfsdump, or zfs send as it is now, does not relate to
  ufsdump in any way :-(
 
 
   [ big snip ]
 
  One of the properties of zfs is that it encourages the use of a
  filesystem for a logical set of files, e.g. a user home directory,
  a software package etc.
  This means that every time you create a new filesystem you need to
  create a new DLE for amanda. In fact, creating the amanda DLE takes
  longer than creating the zfs filesystem.
 
  You can not just use tar to dump multiple zfs filesystems because
  amanda tells tar not to cross filesystem boundaries.
 
  You could probably write a wrapper around tar that removes the
  --one-file-system option to get around this limitation.
 
 Another way would be to use include directives.  For example, if the
 zfs pool was /pool and had file systems a, b, c, and d, you could
 set up multiple DLEs that were rooted at /pool (with different tag
 names): one with include directives of "./a ./c" and another with
 "./b ./d".  While traversing each of the included starting points
 (directories), tar would never cross a file system boundary.
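 
 An untested sketch of what those DLEs might look like in the disklist
 (the client name "zfshost" and the GNU-tar dumptype "user-tar" are
 assumptions):
 
 zfshost pool-ac /pool {
     user-tar
     include "./a"
     include append "./c"
 }
 zfshost pool-bd /pool {
     user-tar
     include "./b"
     include append "./d"
 }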
 
 --
 Jon H. LaBadie  [EMAIL PROTECTED]
  JG Computing
  12027 Creekbend Drive(703) 787-0884
  Reston, VA  20194(703) 787-0922 (fax)


SDLT-4 compared to LTO-3

2007-03-02 Thread Anthony Worrall
Hi

This is not strictly an amanda question but I thought I would see if any
one has any views on SDLT-4 compared to LTO-3.

We are currently looking at replacing our tape devices and are looking
at SDLT-4, which seems to be about the same price as LTO-3 but offers
twice the capacity. Has anyone got any experience of these drives? I
am told by our supplier that they are selling many more LTO-3 than
SDLT-4. Is it just that SDLT-4 is newer, or is there some other reason?

Cheers

Anthony Worrall




RE: SDLT-4 compared to LTO-3

2007-03-02 Thread Anthony Worrall
Hi 

Sorry, a bit of dyslexia: I meant DLT-S4, not SDLT-4.

Here is a reference
http://www.quantum.com/Products/TapeDrives/DLT/DLT-S4/Index.asp

Cheers

Anthony Worrall


 -Original Message-
 From: Joshua Baker-LePain [mailto:[EMAIL PROTECTED]
 Sent: 02 March 2007 12:28
 To: Anthony Worrall
 Cc: amanda-users@amanda.org
 Subject: Re: SDLT-4 compared to LTO-3
 
 On Fri, 2 Mar 2007 at 10:31am, Anthony   Worrall wrote
 
  This is not strictly an amanda question but I thought I would see if
  any one has any views on SDLT-4 compared to LTO-3.
 
  We are currently looking at replacing our tape devices and are
  looking at SDLT-4, which seems to be about the same price as LTO-3
  but offers twice the capacity. Has anyone got any experience of
  these drives? I am told by our supplier that they are selling many
  more LTO-3 than SDLT-4. Is it just that SDLT-4 is newer, or is there
  some other reason?
 
 Maybe I'm just not looking hard enough, but all I can find is SDLT-II,
 which is 300GB native -- got any links?  IIRC, LTO-4 has been
 announced, but I can't find any products yet.
 
 The reason I like LTO is that it's an open standard with a well
 defined roadmap and an emphasis on compatibility among generations.
 
 --
 Joshua Baker-LePain
 Department of Biomedical Engineering
 Duke University



retention policy for dumptypes

2006-05-24 Thread Anthony Worrall
Hi

Ian Turner's comments about the new device API and the possibility of a
retention (not to be confused with re-tension) policy in the dumptype
along with Alexander Jolk's recent comments got me thinking.

With a retention policy by dumptype you would want amanda not to
overwrite a tape if it contained any dumps that need to be retained.
However you might want to add an option to allow amanda to overwrite the
tape if there is enough holding disk space for it to first read all the
dumps that need to be retained off the tape. This obviously is slightly
risky so it would be nice to have Alex's utility that would read dumps
that need to be retained back to a holding disk and then allow the tape
to be overwritten only after these dumps have been put onto another
tape. (It would be nice if the utility could read from one tape and
write to another for those people who have two tape devices but no
holding disk). Amanda could then issue a warning about dumps that need
to be moved in this way in a similar fashion to the way it does now
about level 0 dumps that will be overwritten. 

The retention policy could also be more complicated than just "keep
for N days". You could require M level 0 dumps to be kept, or a
strategy that would effectively give daily snapshots for the last two
weeks, weekly snapshots for the last two months and monthly snapshots
for the last year. I am not sure how to frame this but I hope you get
the idea.

With the ability to move dumps between tapes I wonder if a changer
that consists of a number of vtapes and a physical drive might be
useful. Then recent dumps could be on the vtapes and any dumps you
want to keep for longer periods could be migrated to physical tapes.
(Can amanda cope with a changer with more than one tape drive, be it
physical or virtual?)

I have been playing with vtapes using RAIT, with the vtapes being NFS
mounted from an offsite (well, out of the building) server, so that
for the time we have the tapes in the building there is an out of the
building copy. It would be nice to be able to use a combination of
vtapes and physical tapes to extend the time we can keep level 0
backups.
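
The tapedev for such a mirrored set is just a RAIT stripe of a vtape
and the physical drive, roughly like this (a sketch only; the vtape
path matches my setup and the rait:{...} device syntax is as per the
RAIT docs):

tapedev "rait:{file:/var/amanda/vtapes/sir-1/vtape1,/dev/rmt/0n}"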

Discussion anyone?


Cheers

Anthony Worrall



RE: A question about RAIT with mixed vtapes, and physical tapes

2006-03-29 Thread Anthony Worrall
Hi

We have just done exactly this. I set up the rait as a chg-multi with
5 slots.
I then ran a little script to copy the tape label before running
amdump.
I also set up two other configs: one for just using the tape, which is
used by amcheck, and one which just uses the vtapes, so we can recover
files without having to mount the tapes.
I later increased the number of vtapes to 10 so we could run amflush
without overwriting the last night's tape, using a modified version of
the script below.

I hope this helps

Please feel free to contact me if you need more details
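
Roughly, the changer side of the config looked like this (a sketch;
the chg-multi option names follow its documentation and the paths are
assumptions):

# amanda.conf
tpchanger "chg-multi"
changerfile "/usr/local/etc/amanda/sir-1/chg-multi.conf"

# chg-multi.conf: each slot is one RAIT set of a vtape plus the drive
statefile /usr/local/etc/amanda/sir-1/changer-status
firstslot 1
lastslot 5
slot 1 rait:{file:/var/amanda/vtapes/sir-1/vtape1,/dev/rmt/0n}
slot 2 rait:{file:/var/amanda/vtapes/sir-1/vtape2,/dev/rmt/0n}
slot 3 rait:{file:/var/amanda/vtapes/sir-1/vtape3,/dev/rmt/0n}
slot 4 rait:{file:/var/amanda/vtapes/sir-1/vtape4,/dev/rmt/0n}
slot 5 rait:{file:/var/amanda/vtapes/sir-1/vtape5,/dev/rmt/0n}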

#!/bin/sh
# usage: <script> <slot-number>
# copies the label from the physical tape to the matching vtape,
# then runs amdump on that slot

VTAPE=file:/var/amanda/vtapes/sir-1/vtape$1
DEV=/dev/rmt/0

AMMT=/usr/local/amanda/sbin/ammt
AMDD=/usr/local/amanda/sbin/amdd
AMDUMP=/usr/local/amanda/sbin/amdump
AMTAPE=/usr/local/amanda/sbin/amtape

# copy the label block from the physical tape onto the vtape so both
# halves of the RAIT set carry the same label
$AMMT -t $VTAPE rewind
$AMMT -t $DEV rewind
$AMDD if=$DEV of=$VTAPE bs=32k count=1 > /dev/null 2>&1

$AMTAPE sir-1 slot $1
$AMDUMP sir-1

$AMMT -t $DEV offline

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of stan
Sent: 29 March 2006 13:58
To: amanda users list
Subject: A question about RAIT with mixed vtapes, and physical tapes

I'm setting up a new system using RAIT with one vtape, and one physical
tape in each RAIT set. I only have room on my disks for 5 vtapes, but I
have 25 tapes. I would like to just have the last 5 backup sets (one
week) on the vtapes at any one time, but rotate through all 25 physical
tapes.

Can I do this?

How can I label just the physical tapes without overwriting the labels
on the vtapes?


-- 
U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite
Vietcong Terror 
- New York Times 9/3/1967



amanda and zfs

2006-03-08 Thread Anthony Worrall
Hi

I have been playing about with Sun's new filesystem ZFS.

One of the things I wanted to do was make sure I could get amanda to
backup zfs.
The way the backup utility works with zfs is based on snapshots. You
can either do a full backup of a filesystem or the difference between
two snapshots.
The restore process is an all or nothing recovery of the snapshot.
There is no way to recover a single file from a backup or to index the
snapshot.

I have hacked a script to replace ufsdump, included below, which will
manage the snapshots and respond appropriately to amanda's sendsize
and sendbackup requests.
It requires that the amanda user has a role that can run the zfs
command. Also the zfs DLEs should have indexing turned off.
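
For example, a matching dumptype might look like this (a sketch; the
dumptype name is made up, and it assumes amanda's DUMP program has
been pointed at the wrapper below):

define dumptype zfs-dump {
    global
    program "DUMP"
    index no    # single files can not be extracted from a zfs stream
}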

Here are some observations about zfs and amanda. ZFS encourages the
use of more, smaller filesystems, at the home directory or even
software package level, as it is easy to grow and manage filesystems.
This may have an impact on amanda in a couple of ways.

1. Amanda seems to be more likely to do a full backup of the
   filesystem since it is small. However with many small filesystems
   the total amount of data backed up may end up being larger.
   Currently I have only a handful of filesystems with a few megabytes
   each and amanda has only requested full backups.

2. With many filesystems managing the DLEs becomes more complex. One
   idea may be to have the DLEs for a client stored on the client.
   Amanda would then need to get the DLEs during the estimate phase or
   when doing a check of the client. This would also be useful in the
   case where the management of the client is separated from the
   backup management.

I guess some of these issues may be addressed in 2.5.

Cheers

Anthony Worrall

#!/bin/pfksh
#need to use pfksh so that zfs can be run via the amanda user's role
#
#NB: the snapshot names below are a reconstruction (the list archive
#mangled them); the assumed convention is ${DEV}@N, where @N is the
#base snapshot for a level-N dump and @0 is the fresh, temporary
#snapshot taken for the current run.

ZFS=/usr/sbin/zfs

#Get the last argument (the filesystem amanda passes, ufsdump style)
for FS in $@;
do
 echo $FS > /dev/null
done

#Check if the filesystem is zfs or ufs
res=`$ZFS list | /usr/xpg4/bin/egrep ${FS}$`
if [ $? -eq 0 ]; then
  DEV=`echo $res | awk '{ print $1}'`
  ESTIMATE=0
  LEVEL=unknown
  UPDATE=0

  #parse the ufsdump-style option string in $1 (e.g. "0Ssf" or "1usf")
  case $1 in
    *0*) LEVEL=0;;
    *1*) LEVEL=1;;
    *2*) LEVEL=2;;
    *3*) LEVEL=3;;
    *4*) LEVEL=4;;
    *5*) LEVEL=5;;
    *6*) LEVEL=6;;
    *7*) LEVEL=7;;
    *8*) LEVEL=8;;
    *9*) LEVEL=9;;
    *) echo "level NOT specified"; exit 1;;
  esac

  case $1 in
    *S*) ESTIMATE=1;;
  esac

  case $1 in
    *u*) UPDATE=1;;
  esac

  #take a fresh snapshot of "now", replacing any leftover one
  $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q ${DEV}@0
  if [ $? -eq 0 ]; then
    $ZFS destroy ${DEV}@0
  fi
  $ZFS snapshot ${DEV}@0

  # make sure all the base snapshots up to $LEVEL exist; if one is
  # missing, drop down to the highest level we can actually dump
  n=1
  while [ $n -le $LEVEL ];
  do
    $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q ${DEV}@${n}
    if [ $? -ne 0 ]; then
      LEVEL=$(( $n - 1 ))
      break
    fi
    n=$(($n+1))
  done

  full=`$ZFS list -H -o refer ${DEV}@0`
#convert returned size into kilobytes (decimal sizes are not handled)
  case $full in
    *K) full=`echo $full | sed -e 's/K//'`;;
    *M) full=`echo $full | sed -e 's/M//'`; full=$(( $full * 1024 ));;
    *G) full=`echo $full | sed -e 's/G//'`; full=$(( $full * 1024 * 1024 ));;
  esac
  if [ $LEVEL -gt 0 ]; then
    incr=`$ZFS list -H -o refer ${DEV}@${LEVEL}`
    #convert the base snapshot's size into kilobytes as well
    case $incr in
      *K) incr=`echo $incr | sed -e 's/K//'`;;
      *M) incr=`echo $incr | sed -e 's/M//'`; incr=$(( $incr * 1024 ));;
      *G) incr=`echo $incr | sed -e 's/G//'`; incr=$(( $incr * 1024 * 1024 ));;
    esac
    size=$(( $full - $incr ))
  else
    size=$full
  fi
  size=$(( $size + 16 ))
  if [ $ESTIMATE -eq 1 ]; then
    #echo "doing Estimate $DEV at level $LEVEL" >&2
    size=$(( $size * 1024 ))
    echo $size
    $ZFS destroy ${DEV}@0
  else
    #echo "Dumping $DEV at level $LEVEL" >&2
    if [ $LEVEL -eq 0 ]; then
      $ZFS backup ${DEV}@0
    else
      $ZFS backup -i ${DEV}@${LEVEL} ${DEV}@0
    fi
    #fake the ufsdump summary line that amanda parses for the dump size
    block=$(( $size * 2 ))
    MB=$(( $size / 1024 ))
    echo "DUMP: $block blocks (${MB}MB)" >&2
    if [ $UPDATE -eq 1 ]; then
      #the fresh snapshot becomes the base for level $LEVEL + 1;
      #destroy any stale higher-level snapshots first
      for snap in `$ZFS list -H -t snapshot | awk '{print $1}' | egrep ${DEV}@[1-9]`; do
        n=`echo $snap | cut -f2 -d@`
        if [ $n -gt $LEVEL ]; then
          $ZFS destroy $snap
        fi
      done
      n=$(( $LEVEL + 1 ))
      $ZFS rename ${DEV}@0 ${DEV}@${n}
    else
      $ZFS destroy ${DEV}@0
    fi
  fi
else
  /usr/lib/fs/ufs/ufsdump $*
fi











tape usage

2006-02-06 Thread Anthony Worrall
Hi

Is there an easy way to see what the usage of the tapes is?

There seem to be ways of doing this for a host or DLE but not for a
tape.

Cheers

Anthony Worrall




Re: tape usage

2006-02-06 Thread Anthony Worrall
On Mon, 2006-02-06 at 12:19, Anthony Worrall wrote:
 Hi
 
 Is there an easy way to see what the usage of the tapes is?
 
 There seem to be ways of doing this for a host or DLE but not for a
 tape.
 
 Cheers
 
 Anthony Worrall
 
 
Here is a dirty way to do it

grep "writing end marker" amdump* | sed -e 's/^.*\[//' | \
  awk '{ sum += $4 } END { printf("%f GB\n", sum/(1024*1024)) }'
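
Or, per tape, something like this (untested; it assumes the classic
log.* format, with "START taper ... label LABEL" and "SUCCESS taper
... [sec S kb K ...]" lines, and that the log files are in the current
directory):

#!/bin/sh
# report how much was written to each tape, one log file per run
for f in log.*; do
  awk '/^START taper/   { for (i=1; i<=NF; i++) if ($i == "label") label=$(i+1) }
       /^SUCCESS taper/ { for (i=1; i<=NF; i++) if ($i == "kb") sum += $(i+1) }
       END { if (label != "") printf("%s: %.2f GB\n", label, sum/(1024*1024)) }' $f
done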



install man pages in 2.5.1p1 fails

2006-02-03 Thread Anthony Worrall
Hi

Just a small point, but when I installed 2.5.1p1 the install-data-hook
failed on amanda.conf.5 because it assumed that all the man pages were
in section 8.


original 

install-data-hook:
@list=$(man_MANS); \
for p in $$list; do \
pa=$(DESTDIR)$(mandir)/man8/`echo $$p|sed '$(transform)'`; \
echo chown $(BINARY_OWNER) $$pa; \
chown $(BINARY_OWNER) $$pa; \
echo chgrp $(SETUID_GROUP) $$pa; \
chgrp $(SETUID_GROUP) $$pa; \
done

fixed

install-data-hook:
@list=$(man_MANS); \
for p in $$list; do \
ext=`echo $$p | sed -e 's/^.*\\.//'`; \
pa=$(DESTDIR)$(mandir)/man$$ext/`echo $$p|sed '$(transform)'`; \
echo chown $(BINARY_OWNER) $$pa; \
chown $(BINARY_OWNER) $$pa; \
echo chgrp $(SETUID_GROUP) $$pa; \
chgrp $(SETUID_GROUP) $$pa; \
done

apologies if this has already been fixed

Cheers

Anthony Worrall



Re: Amanda server selection advice

2006-01-31 Thread Anthony Worrall
Hi

I would guess the Ultra 40, if you go for the dual core option; there
may not be much of a difference between the single core options
(2.8GHz vs 2.6GHz).
 
From the Sun web site:


Ultra 40 Processor
One or two AMD Opteron 940-pin, 200-series single-core CPUs that range
from 2.0 GHz to 2.8 GHz (models 246, 250, and 254) and dual-core CPUs
that range from 2.2 GHz to 2.4 GHz (models 275 and 280) with three
8-GBps HyperTransport interconnects per CPU

W2100z Processor 
Two AMD Opteron 200 Series CPUs that range from Model 244 (1.8 GHz) to
Model 252 (2.6 GHz)

Anthony Worrall



On Tue, 2006-01-31 at 17:28, stan wrote:
 On Tue, Jan 31, 2006 at 10:50:35AM -0600, Graeme Humphries wrote:
  stan wrote:
  
  It's one of those corporate political correctness things. Management
  recognizes the name, and if I suggest a non name-brand, I have to do
  a _lot_ more explaining.
   
  
  Ahh well, I figured it'd be something like that. In any case, we're 
  doing server side compression, and I can't stress enough that you'll 
  need tons of CPU horsepower on the backup box if you're backing up a 
  large number of systems. Usually, items from our disklist take about 1/4 
  of the time to blow out to tape that they take to actually dump to the 
  holding disk, and the bottleneck is totally the server side compression. 
  Luckily, fast processors are cheap these days. ;)
 
 
  Which sort of leads directly back to the original question. Which of
  the 2 boxes I mentioned originally would have the most CPU power?
  
  I'm fairly certain it's the Ultra 40, but I could be wrong.
 
 -- 
 U.S. Encouraged by Vietnam Vote - Officials Cite 83% Turnout Despite Vietcong 
 Terror 
 - New York Times 9/3/1967



Re: awstats and amanda

2005-06-21 Thread Anthony Worrall

I might be wrong but amplot seems to give information on a single run.

The idea of using awstats is to give historical information over
months. Also I am already using awstats for web and ftp servers, so it
would be easier to have one place to look at stats.
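
Something like this is what I have in mind (an untested sketch; the
field positions assume the usual "SUCCESS dumper host disk date level
[sec S kb K ...]" log lines, and the xferlog field mapping is made up):

#!/bin/sh
# turn amanda log SUCCESS lines into crude xferlog-style records
awk '/^SUCCESS dumper/ {
       host = $3; disk = $4; date = $5; level = $6; kb = 0
       for (i = 7; i <= NF; i++) if ($i == "kb") kb = $(i+1)
       # xferlog-ish: date duration host bytes "path" ...
       printf("%s 0 %s %d %s.%s b _ o a amanda ftp 0 * c\n",
              date, host, kb*1024, disk, level)
     }' log.20050620.0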


Gene Heskett wrote:


On Monday 20 June 2005 12:03, Anthony Worrall wrote:

 Hi

 Has anyone tried using something like awstats to display statistics
 about how amanda is doing? It would need a script to get the amdump
 file into a format that awstats could be told about, i.e. something
 like an xferlog.

 Before I start hacking I thought I would ask if anyone else has
 already done this.

 Cheers

 Anthony Worrall
Did you look at amplot?   It takes the dump log as direct input.

Unless you like to re-invent wheels. :)

 





awstats and amanda

2005-06-20 Thread Anthony Worrall

Hi

Has anyone tried using something like awstats to display statistics
about how amanda is doing? It would need a script to get the amdump
file into a format that awstats could be told about, i.e. something
like an xferlog.

Before I start hacking I thought I would ask if anyone else has
already done this.

Cheers


Anthony Worrall




Incrementals as big as full

2003-07-03 Thread Anthony Worrall
Hi

This is not really an amanda problem but I thought one of you may have
seen this problem before.

We have a filesystem which contains files that are basically static.
However when they are backed up the incrementals are almost as big as
the full backups.

server amadmin sir-1 info advisor1 /advisor/video

Current info for advisor1 /advisor/video:
  Stats: dump rates (kps), Full:  2270.0, 2231.0, 2274.0
Incremental:  2119.0, 2250.0, 2275.0
  compressed size, Full:  62.1%, 62.1%, 62.1%
Incremental:  66.6%, 66.6%, 66.6%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20030618  sir-1-20   54 10650330 6617775 2915
  1  20030703  sir-1-107  9873700 6578667 3104

This happens with both dump and gnutar backups. If I do a find on the
filesystem it claims that no files have been modified in the last 90
days.

advisor1:/tmp/amanda # find /advisor/video/* -type f -mtime -90 -ls

advisor1:/tmp/amanda # dump --version
dump: invalid option -- -
dump 0.4b26 (using libext2fs 1.26 of 3-Feb-2002)
usage:  dump [-0123456789acMnqSu] [-B records] [-b blocksize] [-d density]
 [-e inode#,inode#,...] [-E file] [-f file] [-h level]
 [-I nr errors] [-j zlevel] [-s feet] [-T date] [-z zlevel] 
filesystem
dump [-W | -w]

advisor1:/tmp/amanda # tar --version
tar (GNU tar) 1.13.18
Copyright 2000 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.

advisor1:/tmp/amanda # cat /etc/SuSE-release
SuSE Linux 8.0 (i386)
VERSION = 8.0

advisor1:/tmp/amanda # uname -a
Linux advisor1 2.4.20 #1 SMP Fri Jan 10 15:32:51 GMT 2003 i686 unknown

advisor1:/tmp/amanda # df -h /advisor/video/
FilesystemSize  Used Avail Use% Mounted on
/dev/sda2  17G   11G  6.0G  63% /advisor/video

advisor1:/tmp/amanda # mount -v | grep /advisor/video
/dev/sda2 on /advisor/video type ext3 (rw)


Anthony Worrall
School IT Networking Coordinator
School of Systems Engineering
The University of Reading,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)118 931 8610
Fax:   +44 (0)118 975 1822
Email: [EMAIL PROTECTED]



Re: Incrementals as big as full

2003-07-03 Thread Anthony Worrall
Hi

Paul hit the nail on the head.

It seems the administrator of that machine was running a find over the
directories which updated the ctime of all the files.

Thanks Paul.


Anthony

- Begin Forwarded Message -


Anthony Worrall wrote:
 We have a filesystem which contains files that are basically static.
 However when they are backed up the incrementals are almost as big as
 the full backups.
...
 This happens with both dump and gnutar backups. If I do a find on the
 filesystem it claims that no files have been modified in the last 90
 days.
 
 advisor1:/tmp/amanda # find /advisor/video/* -type f -mtime -90 -ls

Check also the ctime  (i-node change time):

find /advisor/video/* -type f -ctime -90


-- 
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***



- End Forwarded Message -


Anthony Worrall
School IT Networking Coordinator
School of Systems Engineering
The University of Reading,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)118 931 8610
Fax:   +44 (0)118 975 1822
Email: [EMAIL PROTECTED]



Backing up NFS mirrored filesystem

2002-03-15 Thread Anthony Worrall

Hi 

We have a filesystem that is mirrored by rsync across two systems to
save network traffic and increase speed when accessing the files. It
also allows one of the machines to be taken off site and still work.

At the moment I am backing up each filesystem separately. However
since the filesystem is about the same size as our backup tapes I
would like to be able to backup just one.


I could just use a redundant automount to access the filesystems on
the backup server, such as

mirror  -ro server1,server2:/mirror

and then use tar to backup the files. However running NFS on each
machine just for this seems a bit excessive, and it would be nice if I
could get Amanda to back up whichever of the servers responded.

Any ideas?




Anthony Worrall
School IT Networking Coordinator
The University of Reading,
School of Computer Science, Cybernetics and Electronic Engineering
Department of Computer Science,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)118 931 8610
Fax:   +44 (0)118 975 1822
Email: [EMAIL PROTECTED]




NDMP

2002-03-15 Thread Anthony Worrall

Hi

Is anyone working on allowing amanda to use NDMP?


Anthony Worrall
School IT Networking Coordinator
The University of Reading,
School of Computer Science, Cybernetics and Electronic Engineering
Department of Computer Science,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)118 931 8610
Fax:   +44 (0)118 975 1822
Email: [EMAIL PROTECTED]




Re: Read attribute disappears on Irix

2001-06-14 Thread Anthony Worrall

When it reboots IRIX will reset the permissions on the devices (it
rebuilds the entire /hw tree at boot time).

You can override the permissions by using the file /etc/ioperms.

So if the user running amanda is in the group sys you would have an
entry like the following:


/dev/rdsk/dks1d1s7  640 root  sys

Date: Thu, 14 Jun 2001 09:33:34 -0400
From: Jan Boshoff [EMAIL PROTECTED]
To: Amanda Users [EMAIL PROTECTED]
Subject: Read attribute disappears on Irix

Hi everyone

I've successfully implemented Amanda on our cluster of 5 Unix machines,
and I'd like to thank the developers of Amanda for such a great tool -
it's allowed me to focus on my research again rather than doing menial
tasks for sysadm!  Thank you for a great package.

I'm running Amanda 2.4.1.p1 on mostly Irix 6.5 boxes.  My question seems
to be related to Irix, I've not seen this problem on the one AIX machine
that we have.

Every once in a while, when running amcheck, it returns with an error
stating that a filesystem is not readable.  Maybe someone else has run
into this problem, it only seems to occur on the SGI Irix 6.5 boxes.
The server is an Irix 6.5 box, and it sporadically happens to both
server and clients.  The error message is:

ERROR: [hostname]: [can not access /dev/rdsk/dks1d1s7
(/dev/dsk/dks1d1s7): Permission denied]

I then have to do chmod g+r on /dev/rdsk/dks1d1s7 again to make the
filesystem readable.  I have not been able to correlate this with any
weird system behavior.  Does Irix automatically assign permissions to
these files?  Does anyone know what I can do to permanently change the
permission of the filesystem to be readable?

--

==
  Jan Boshoff
  PhD Student, Chemical Engineering
  Univ. of Delaware, DE USA

  Office: (302) 831-2345
==




Anthony Worrall
School IT Networking Coordinator
The University of Reading,
School of Computer Science, Cybernetics and Electronic Engineering
Department of Computer Science,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)1189 318610
Fax:   +44 (0)1189 751994
Email: [EMAIL PROTECTED]




Re: NFS mounted holding disk

2001-04-18 Thread Anthony Worrall


To: Anthony Worrall [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: NFS mounted holding disk 
Date: Tue, 17 Apr 2001 10:51:17 -0400
From: Paul Lussier [EMAIL PROTECTED]


In a message dated: Tue, 17 Apr 2001 10:42:13 BST
Anthony Worrall said:

One reason would be the NFS server has a 1Gb interface and the clients and
tape server have only 100Mb.

Okay, so now you're saturating the 100Mb interface on the tape server 
twice?  I still don't see the advantage in this.

That is why I would like the clients to dump directly to the holding disk 
without the data going via the tape server.

Even so, given the choice between dumping to an NFS mounted holding
disk and dumping directly to tape, the former seems to be faster.


Of course having a big holding disk on the tape server would be even better.


-- 

Seeya,
Paul

   It may look like I'm just sitting here doing nothing,
   but I'm really actively waiting for all my problems to go away.

If you're not having fun, you're not doing it right!



Anthony Worrall
The University of Reading,
Department of Computer Science,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)1189 318610
Fax:   +44 (0)1189 751994
Email: [EMAIL PROTECTED]




RE: NFS mounted holding disk

2001-04-18 Thread Anthony Worrall

Hi

Unfortunately the NFS server is a NAS device on which I can not run amanda :-(.

Now if amanda could talk NDMP I could use the tape device on the NAS
server, but I would still need another box to run amanda and the data
would be going in and out of this box. Still that would be better than 
in, out and back again :-).


I would think if I had the same config on the tape server as the holding-disk
server up to the tape device, then I could just run amflush on the tape server.

  
From: "Bort, Paul" [EMAIL PROTECTED]
To: "'Anthony Worrall'" [EMAIL PROTECTED], 
[EMAIL PROTECTED]
Subject: RE: NFS mounted holding disk 
Date: Wed, 18 Apr 2001 10:32:03 -0400

Here's a bad idea for you: Install Amanda server on both the NFS server and
the tape server; Run amdump on the NFS server using whatever configuration
you'd like, but don't let it dump to tape; use the tape server to back up
the NFS server at level 0 every time; after the tape server is done, tell
the NFS server to amflush to /dev/null. It's not pretty, and restores are
cumbersome, but it would accomplish the mission.

Of course, so would moving the tape drive(s) to the NFS server. Is that a
possibility? 


-Original Message-
From: Anthony Worrall [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 18, 2001 5:17 AM
To: [EMAIL PROTECTED]
Subject: Re: NFS mounted holding disk 



  [ big snip ]

Anthony Worrall
The University of Reading,
Department of Computer Science,
Whiteknights, PO Box 225
Reading,
Berkshire, UK
RG6 6AY
Tel:   +44 (0)1189 318610
Fax:   +44 (0)1189 751994
Email: [EMAIL PROTECTED]