Re: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread Joshua Baker-LePain

On Thu, 20 Jun 2002 at 12:42pm, Steve Follmer wrote

 disk-to-disk backup is an idea whose time has come. I am prepared to
 believe that Amanda is wonderful (the Samba backup was a snap!), but it
 seems a bit old fashioned compared to say, Second Copy 2000.

I have no idea what Second Copy is (aside: quick look, ah, ok), but the 
target audiences are completely different.  Amanda is aimed at networks 
of *nix machines.  The beauty of amanda is that it scales easily from a 
single machine to several hundred, based on backup hardware.  And it is 
anything but old fashioned.

 What happens is if I just have one file:/backup/data directory, it gets
 erased and a full backup takes place every night. This might be
 acceptable but I'm exposed for the time that the backup takes place. Why
 doesn't amanda simply add incrementals to the one existing tape
 (file)?

On tape, appending is Bad.

 I probably must do what an earlier Bickle post described (more detail in
 the egroup)... To rotate 'tapes', i recommend creating sub-directories
 and symbolically linking them to a 'data' subdirectory, which amlabel et
 al will look for. The directories are labelled like tapes. I name the
 subdirectories according to date, then i have a shell script which
 creates a new directory every night, makes the symlink, labels the tape
 then runs the backup.

Why not just use a changer?  That's the best way to do this.  I found 
this after a bit of looking:

http://www.mail-archive.com/amanda-users@amanda.org/msg07758.html

Define as many tapes as you want in your changer, and you'll get your 
incrementals.
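
For reference, the vtape-changer setup in that post boils down to something like the following untested sketch (slot count and paths are examples, and the chg-multi.conf keywords should be checked against your Amanda version's TAPE.CHANGER doc):

```
# amanda.conf
tpchanger "chg-multi"
changerfile "/usr/local/etc/amanda/chg-multi.conf"

# chg-multi.conf
multieject 0
gravity 0
needeject 0
statefile /usr/local/etc/amanda/changer-status
firstslot 1
lastslot 5
slot 1 file:/backup/slot1
slot 2 file:/backup/slot2
slot 3 file:/backup/slot3
slot 4 file:/backup/slot4
slot 5 file:/backup/slot5
```

Each slotN directory needs a data subdirectory and an amlabel run against it, just as with a single file: "tape"; with several slots defined, the planner can schedule incrementals between fulls.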

 file-driver pure-disk backups. I also humbly suggest that the typical
 amanda user and typical equipment cost has changed over the decades of
 amanda, and that future versions could emphasize pure disk backup more. 

And I would humbly suggest that you're probably wrong.  Yes, there are 
more home users backing up small networks now, and disk is cheap (which is 
why file: exists at all).  But I would bet that the majority of users (and 
certainly the majority of amanda-backed-up data) use tape.  I consider 
myself a pretty small install, and I back up nearly 600GB of data using 
amanda.  In corporate (and edu) environments, tapes still have lots of 
advantages over disks, not the least of which is off-site storage.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University






Re: Backing up PostgreSQL?

2002-06-21 Thread Kirk Strauser


At 2002-06-21T04:36:19Z, [EMAIL PROTECTED] (Greg A. Woods) writes:

 Wasn't this question originally about FreeBSD anyway?

It was, but this is pretty interesting too.  Feel free to continue.  :)
-- 
Kirk Strauser
The Strauser Group - http://www.strausergroup.com/



shmget error

2002-06-21 Thread J. Rashaad Jackson

I'm getting the following when trying to flush to tape:

taper: FATAL shmget: No space left on device

Any ideas?  How do I get around this?
-- 
I have spoken.

J. Rashaad JacksonUNIX Systems Administrator
3556 Samuel T. Dana Building(W) 734.615.1422
Ann Arbor, MI 48109 (M) 734.649.6641
http://www.umich.edu/~jrashaad  (F) 734.763.8965





GNU Tar for SunOS 4.1.3 Amanda client

2002-06-21 Thread Jonathan R. Johnson

Dear Amanda users,

We have a legacy Sun Sparc II server (SunOS 4.1.3) that was, until
recently, our only file server with a tape drive (an Exabyte 8200, 2.3
Gb capacity).  We're in the process of centralizing files onto a RedHat
7.2 server running Amanda 2.4.2p2 (from the RPM -- no problems) with a
Sony AIT-1 (SDX-300C) tape drive.

Unfortunately, getting files off the Sun is proving to be tricky, due
in large part to its outdated networking software.  I briefly tried to
use NFS to mount a server file system on the Sun and copy over the
files, but the NFS mount didn't work right...not my strong suit,
anyway.  I've also tried various combinations of tar and rsh, to no
avail so far.

For now, I had hoped to just configure Amanda on it, even though this
server will be taken off line as soon as its files have been securely
set up on the new file server.  My first hurdle, though, is that the
GNU tar 1.13.25 from ftp://alpha.gnu.org/gnu/tar is not compiling
properly (we compiled and installed gcc 2.3.3 a long time ago),
encountering parsing errors in intl/loadmsgcat.c.  Maybe this isn't the
simplest way to take our files off this server...

I'd love some suggestions here.  Ideally someone just knows where I can
get tar 1.13.19 or newer precompiled for SunOS 4.1.3.  Otherwise, I'd
probably have to recompile gcc with updated sources, etc.  In the short
term, I suppose I could try using the existing dump software.

We need to back this thing up before it dies!!  :-)

Regards,

  Jonathan

-- 
 /   Jonathan R. Johnson   | Every word of God is flawless. \
 |Minnetonka Software, Inc.| -- Proverbs 30:5 |
 \ [EMAIL PROTECTED] |  My own words only speak for me. /




RE: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread Steve Follmer


I for one am glad that amanda has grown beyond its unix/tape roots to
support samba/windows and file: disk backups. That need not be
sacrificed in adding even more robust support for file: and, say, a
SWAT-like GUI. This would make amanda useful to more people, if the
amanda community wants to go in that direction. There will be conflicts,
though: appending is bad on tape but good on disk, and explicitly
specifying incremental backups is bad for large networks, but I would
want it for my small network. That said, certainly there are windows and
mac clients in the large commercial/university installations that
characterize the bulk of amanda users.

I admit that the majority of existing amanda users use tape, but it's a
biased legacy sample. But let us not ignore the possible users
represented by the exponential growth in home networks, linux, and cheap
disks: a growing audience of users like myself, for whom disk backup is
better than none, who don't have the money to sink into a tape system,
and who don't see enough upside (what are the odds that 2 drives in
different machines will die on the same day?) in changing and labeling
tapes and shipping them off to the abandoned limestone mine.

I appreciate your advice about the autochanger and will explore that.

Cheers,

Steve

-Original Message-
From: Joshua Baker-LePain [mailto:[EMAIL PROTECTED]] 
Sent: Friday, June 21, 2002 6:05 AM
To: Steve Follmer
Cc: [EMAIL PROTECTED]
Subject: Re: 2.4.3 Tapeless file-driver pure-disk backups


On Thu, 20 Jun 2002 at 12:42pm, Steve Follmer wrote

 disk-to-disk backup is an idea whose time has come. I am prepared to 
 believe that Amanda is wonderful (the Samba backup was a snap!), but 
 it seems a bit old fashioned compared to say, Second Copy 2000.

I have no idea what Second Copy is (aside: quick look, ah, ok), but the 
target audiences are completely different.  Amanda is aimed at networks 
of *nix machines.  The beauty of amanda is that it scales easily from a 
single machine to several hundred, based on backup hardware.  And it is 
anything but old fashioned.

 What happens is if I just have one file:/backup/data directory, it 
 gets erased and a full backup takes place every night. This might be 
 acceptable but I'm exposed for the time that the backup takes place. 
 Why doesn't amanda simply add incrementals to the one existing tape 
 (file)?

On tape, appending is Bad.

 I probably must do what an earlier Bickle post described (more detail
 in the egroup)... To rotate 'tapes', i recommend creating
 sub-directories and symbolically linking them to a 'data'
 subdirectory, which amlabel et al will look for. The directories are
 labelled like tapes. I name the subdirectories according to date, then
 i have a shell script which creates a new directory every night, makes
 the symlink, labels the tape then runs the backup.

Why not just use a changer?  That's the best way to do this.  I found 
this after a bit of looking:

http://www.mail-archive.com/amanda-users@amanda.org/msg07758.html

Define as many tapes as you want in your changer, and you'll get your 
incrementals.

 file-driver pure-disk backups. I also humbly suggest that the typical
 amanda user and typical equipment cost has changed over the decades of
 amanda, and that future versions could emphasize pure disk backup
 more.

And I would humbly suggest that you're probably wrong.  Yes, there are
more home users backing up small networks now, and disk is cheap (which
is why file: exists at all).  But I would bet that the majority of users
(and certainly the majority of amanda-backed-up data) use tape.  I consider
myself a pretty small install, and I back up nearly 600GB of data using
amanda.  In corporate (and edu) environments, tapes still have lots of
advantages over disks, not the least of which is off-site storage.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University






Re: Backing up PostgreSQL?

2002-06-21 Thread Paul Jakma

On Fri, 14 Jun 2002, Greg A. Woods wrote:

 That's irrelevant from PostgreSQL's point of view.  There's no sure
 way to tell the postgresql process(es) to make the on-disk database
 image consistent before you create the snapshot.

if it's pg 7.1+ then it has write-ahead logging, and the on-disk state 
/is/ guaranteed to be recoverable to a consistent state.

in which case snapshot+backup of the snapshot is a valid way to back up 
pgsql.

also, i think pgsql pre-7.1 / without WAL by default does fsync()
after every transaction. so, while still racy, the likelihood is you will
get a decent backup of a not too heavily used db. (but no guarantees.)

--paulj




Re: use of changer

2002-06-21 Thread Nicholas Berry

Hi.

The sense data says a SCSI device/bus or power on reset occurred.

All is not well.  I assume you're using Linux, from the /dev/sg1 reference. Get the 
mtx package installed, and see what that makes of your changer.
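
A first sanity check with mtx might look like this (assuming Linux and that the changer really is at /dev/sg1; it obviously needs the hardware attached, so treat it as a sketch):

```
mtx -f /dev/sg1 inquiry   # report vendor/model of the changer
mtx -f /dev/sg1 status    # show the slot map (and barcodes, if readable)
mtx -f /dev/sg1 load 1    # move the cartridge in slot 1 into the drive
```

If status shows a sensible slot map but the amanda changer script still fails, the problem is in the changer config rather than the hardware.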

Nik



 Jason Brooks [EMAIL PROTECTED] 06/20/02 06:36PM 
Hello,

How do I get amanda to use the barcode reader on my brand-new
SCSI changer?  (it's an independent changer with its own scsi-generic
interface)

the changer appears to work otherwise, though I haven't actually begun
testing backups on it...

the ./docs/TAPE.CHANGER file contains the following:

To initialize the label mapping you can also do an amtape 
config show.  This will put the mapping for all tapes in the 
magazine into the mapping file.

This doesn't work for me.  Enclosed is my changer.conf file and the
relevant entries from my amanda.conf file.

I would like to have amanda cross-reference barcodes to tape labels,
perhaps with an inventory check to verify the barcode/amlabel mapping.  If
amanda needs a certain label, it tells the changer to load barcode #n.

//cut from amanda.conf
tpchanger chg-scsi
tapedev 0   # the abstract changer config in the changer file
changerfile /usr/local/etc/amanda/changer/changer.conf

//my changer.conf contents:
number_configs 1
eject 0 
sleep 140
cleanmax 20
changerdev /dev/sg1

config 0
drivenum0
dev /dev/nst1
startuse0
havebarcode 1
enduse  12
cleancart   13
statfile/var/amanda/DailySet2/tape0-slot
cleanfile   /var/amanda/DailySet2/tape0-clean
usagecount  /var/amanda/DailySet2/totaltime
tapestatus  /var/amanda/DailySet2/tapestatus
labelfile   /usr/local/etc/amanda/changer/labelfile

//sample contents of a chg-scsi*debug file
chg-scsi: debug 1 pid 1218 ruid 11600 euid 11600 start time Wed Jun 19 15:27:48 
2002
chg-scsi: $Id: chg-scsi.c,v 1.1 2002/06/18 18:25:41 jason0 Exp $
ARG [0] : /usr/local/libexec/chg-scsi
ARG [1] : -slot
ARG [2] : current
Number of configurations: 1
Tapes need eject: No
Inv. auto update: No
barcode reader  : Yes
Tapes need sleep: 140 seconds
Cleancycles : 20
Changerdevice   : /dev/sg1
Labelfile   : /usr/local/etc/amanda/changer/labelfile
Tapeconfig Nr: 0
  Drivenumber   : 0
  Startslot : 0
  Endslot   : 12
  Cleanslot : 13
  Devicename: /dev/nst1
  changerident  : none
  SCSITapedev   : none
  tapeident : none
  statfile  : /var/amanda/DailySet2/tapestatus
  Slotfile  : /var/amanda/DailySet2/tape0-slot
  Cleanfile : /var/amanda/DailySet2/tape0-clean
  Usagecount: /var/amanda/DailySet2/totaltime
 # START SenseHandler
Ident = [L500 632], function = [generic]
# START GenericSenseHandler
# START DecodeSense
GenericSenseHandler : Sense Keys
Extended Sense 
ASC   29
ASCQ  00
Sense key 06
Unit Attention
Sense2Action START : type(8), ignsense(0), sense(06), asc(29), ascq(00)
Sense2Action generic start :
Sense2Action generic END : no match for generic   return - -1/Default for SENSE
_UNIT_ATTENTION
# STOP GenericSenseHandler
 STOP SenseHandler

-- 

~~~
Jason Brooks ~ (503) 641-3440 x1861
  Direct ~ (503) 924-1861
System / Network Administrator 
Wind River Systems
8905 SW Nimbus ~ Suite 255  
Beaverton, Or 97008





Re: Backing up PostgreSQL?

2002-06-21 Thread Paul Jakma

On Sat, 15 Jun 2002, Greg A. Woods wrote:

 Unless you can do a global database lock at a point where there are no
 open transactions and no uncommitted data in-core, the only way to
 guarantee a consistent database on disk is to close all the database
 files.  Even then if you're very paranoid and worried about a hard crash
 at a critical point during the snapshot operation you'll want to first
 flush all OS buffers too, which means first unmounting and then
 remounting the filesystem(s) (and possibly even doing whatever is
 necessary to flush buffers in your hardware RAID system).
 
 It _really_ is best to just use pg_dump and back up the result.

it's probably best, yes.

but even here, pg_dump will not dump updates that happen after the 
dump starts.

read up on pgsql's versioning system - there are no guarantees that the
data it (eg pg_dump) returns is the latest available - only that it
is consistent.

 I agree, but we're not discussing what you and I do, but rather what
 random John Doe DBA does.  There's ample quantity of suggestion out
 in the world already that makes it possible he will turn off fsync
 for performance reasons

their tough luck really. :)

 Now you're getting a little out of hand.  A journaling filesystem is
 a piling of one set of warts ontop of another.  Now you've got a
 situation where even though the filesystem might be 100% consistent
 even after a catastrophic crash, the database won't be.  There's no
 need to use a journaling filesystem with PostgreSQL 

eh? there is great need - this is the only way to guarantee that when 
postgresql does operations (esp on its own application level logs) 
that the operation will either:

- be completely carried out

or

- not carried out at all

 either full mirroring or full level 5 protection).  Indeed there are
 potentially performance related reasons to avoid journaling
 filesystems!

if they're any good they should have better synchronous performance 
over normal unix fs's. (and synchronous perf. is what a db is 
interested in).

 This is true.  However before you give that fact 100% weight in your
 decision process you need to do a lot more risk assessment and
 disaster planning to understand whether or not the tradeoffs
 inherent will not take away from your real needs.
 
 I still believe it really Really REALY is best to just use pg_dump
 and back up the result.  

ok. whether it is best or not is a matter for debate and, perhaps, 
context. however, it is not on to state that use of snapshots is 
/never/ to be considered.

--paulj




RE: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread Steve Follmer


Here's the shell script I'm testing for the flip-flop disk backup. This
lets you take e.g. a 120GB disk and do full backups alternating between
two directories each night. No tapes. No holding directory. Pure disk.
Requires 2.4.3 and up. 

#!/bin/sh
# must have already created /backup/sfbk00 and /backup/sfbk01
# amanda knows only to back up to /backup/data
# point this to alternate directories on alternate days
# you always have yesterday's full backup, plus today a second
# full is generated
d=`date +%j`     # get julian day number this year
dm=$(($d % 2))   # mod 2 to alternate directories
rm /backup/data  # must remove, or else ln puts the link in the subdir
cd /backup
ln -s sfbk0$dm data
/usr/local/sbin/amcheck $1
/usr/local/sbin/amdump $1

amanda.conf has no holding disk defined, and uses these definitions:

tapedev "file:/backup"
tapetype DISKSAVE
labelstr "^sfbk[0-9][0-9]*$"
define tapetype DISKSAVE {
   comment "Fake tape description for save to disk"
   length 58 gbytes
   filemark 0 kbytes
   speed 2000 mbytes
}

There is some site-specific stuff in the above, which should be
customized, e.g. the path to the amanda programs, backups in /backup,
labelstr, etc. Be sure to create and label the sfbk00 and sfbk01
directories ahead of time; go into the directory and don't use full path
names when you label (unless you change your labelstr to allow that).

What this is all supposed to do is back up as much as 58GB, with full
backups every night (using crontab), flip-flopping between the two
directories on odd/even days. Seems to work so far. The parameter to the
script is the name of the backup configuration. The only glitch is that
you will lose a backup on New Year's Eve (except in leap years) when
Julian day 365 rolls over to day 1.
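
One untested way around that year-end glitch (not from the original script) is to take the parity from whole days since the Unix epoch instead of the day-of-year, so it never resets; this assumes a date that supports %s, as GNU date does:

```shell
# days since 1970-01-01; the parity flips every midnight with no year-end reset
days=$(( $(date +%s) / 86400 ))
dm=$(( days % 2 ))
echo "tonight's target: /backup/sfbk0$dm"
```

Substituting these two lines for the `date +%j` pair keeps the rest of the script unchanged.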

Comments invited. It should also be possible to use a similar script to
create a series of full backups. It should also be possible to use the
autochanger facility and get some incremental backups into the mix as a
recent post described. But this is pretty simple and seems like what I
need for my small home network.

-Steve

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Steve Follmer
Sent: Thursday, June 20, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: 2.4.3 Tapeless file-driver pure-disk backups



Firstly, I want to thank everyone who solved the problem Mohamed and I
were having. With 2.4.3 you can use tapedev file:/backup for
direct-to-disk backup (without resorting to older fake-out techniques of
loading up the holding disk). But remember that you must amlabel the
directory that /backup/data is or is linked to, as a nod to Amanda's
origins back in the days of tapes, punch cards, and paper tape.

I'm still a little confused though and feel amanda is still a bit mired
in a tape-based mindset. Like perhaps many amanda users, I'm new to
amanda, and simply have a small home LAN I want to back up. And with
Fry's selling 120GB drives for $109, and CD+-RW still confused,
disk-to-disk backup is an idea whose time has come. I am prepared to
believe that Amanda is wonderful (the Samba backup was a snap!), but it
seems a bit old fashioned compared to say, Second Copy 2000.

What happens is if I just have one file:/backup/data directory, it gets
erased and a full backup takes place every night. This might be
acceptable but I'm exposed for the time that the backup takes place. Why
doesn't amanda simply add incrementals to the one existing tape
(file)?

I probably must do what an earlier Bickle post described (more detail in
the egroup)... To rotate 'tapes', i recommend creating sub-directories
and symbolically linking them to a 'data' subdirectory, which amlabel et
al will look for. The directories are labelled like tapes. I name the
subdirectories according to date, then i have a shell script which
creates a new directory every night, makes the symlink, labels the tape
then runs the backup.
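
The rotation described above might be sketched like this (my own untested reconstruction, not Bickle's actual script; BACKUP_ROOT defaults to a scratch directory purely for illustration, and the config name "Daily" is made up):

```shell
#!/bin/sh
# Rotate file: "tapes" by date: make a dated directory, point the 'data'
# symlink at it, then label and dump.  On a real server BACKUP_ROOT would
# be /backup and the amlabel/amdump lines would be uncommented.
BACKUP_ROOT=${BACKUP_ROOT:-$(mktemp -d)}
today=$(date +%Y%m%d)
mkdir -p "$BACKUP_ROOT/$today"
rm -f "$BACKUP_ROOT/data"           # drop yesterday's symlink
ln -s "$today" "$BACKUP_ROOT/data"  # amanda writes through .../data
# /usr/local/sbin/amlabel Daily "sfbk$today"
# /usr/local/sbin/amdump Daily
```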

I will try this, but won't I then end up with a long series of full
backups? (In other words, I'm guessing that Bickle sets the tapecycle to
1, and Amanda doesn't know to do any incrementals, since the several
tapes are created behind amanda's back). I can work with this but it
wastes disk space. Maybe my shell script can just alternate between 2
full images. I guess that is the best compromise. I will set the
tapecycle to 1, and not even define a holding disk. Amanda will think it
just has 1 tape and always do full backups. The script will flip-flop
the full backup from one directory to another and back again. For my
small home LAN this is acceptable since the network is quiet all night.
(Though, still: are incrementals too much to ask?)

When I get this working, I'll try to get both of these new techniques
(Bickle series, Follmer flip-flop) submitted to the FAQ. I wonder if
there is a way to use the Hogge posting on 2.4.2 autochangers to create
a third technique with autochangers for use with 2.4.3 tapeless
file-driver pure-disk backups. I also humbly suggest that the typical
amanda user and typical equipment cost has changed over the decades of
amanda, and that future versions could emphasize pure disk backup more.

Re: Backing up PostgreSQL?

2002-06-21 Thread Paul Jakma

On Thu, 20 Jun 2002, Greg A. Woods wrote:


  Where does it say that close/open will flush metadata?
 
 That's how the unix filesystem works.  UTSL.

that's an implementation issue - you can't rely on it.

portable synchronous metadata is a murky issue on plain unix
filesystems. maybe the above works, maybe it just happens to work, maybe
you need to open the directory and fsync() it.

--paulj





Re: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread Jon LaBadie

On Fri, Jun 21, 2002 at 10:00:58AM -0700, Steve Follmer wrote:
 
 I for one am glad that amanda has grown beyond its unix/tape roots to
 support samba/windows and to support file: disk backups. This does not
 have to be sacrificed, in adding even more robust support for file: and
 also, some SWAT like GUI. And this would make amanda more useful for
 more people, if the amanda community wants to go in that direction.

 Though there will be conflicts; appending is bad on tape, good on disk.
 Explicitly specifying incremental backups is bad for large networks, but
 I would desire it for my small network. That said, certainly there are
 windows and mac clients in the large commercial/university installations
 that characterize the bulk of amanda users.
 
 I admit that the majority of existing amanda users use tape, but it's a
 biased legacy sample. But let us not ignore the possible users
 represented by the exponential growth in home networks, linux, and cheap
 disks; a growing audience of users like myself, for whom disk backup is
 better than none, who don't have the money to sink into a tape system,
 and who don't see enough upside (what are the odds that 2 drives in
 different machines will die on the same day) on changing and labeling
 tapes and shipping them off to the abandoned limestone mine. 

Amanda was originally written by, and is currently maintained by,
a surprisingly small total number of talented people.  Their objective,
as I understand it, is to provide quality backup services for their own
use, not for ours.  For the most part they do administer large sites
that use tape.  Widespread, increasing adoption of amanda on small
systems is not a requirement of that objective.

We small users are beneficiaries of their efforts.  Changes we see as highly
desirable, or immensely practical, in our environments may be of little
value in the environment of the amanda developers.  As they are not developing
a commercial product, increased sales is not an incentive.  In fact,
increased usage actually causes them to devote more of their freely given
time to supporting the new users.

This does not imply new features and niceties for small users do not make it
into amanda.  But the 'primary' use of amanda at large sites, as you indicate,
can not be sacrificed.  File: disk backups are one example, but be aware that
they are a pretty recent addition to amanda and are probably undergoing teething
pains.  (I have not used them, so I say that from ignorance.)

You indicate two desireable features, GUI and appending.  In general, GUI's
are wonderful things for end-users/desktop environments.  They hold fewer
benefits for large installations.  So any impetus and development effort
on a GUI is unlikely to come from the traditional amanda developers.  That
does not mean you, or someone else with a similar desire, can't contribute
their time freely and develop such an interface and contribute it to the
project.  Many would love to see that.

Appending is a topic that comes up often (weekly? daily? :).  As you seem to
accept appending is bad for tape, it might only be used for disk file-based
backups.  I'm not at all sure why you feel appending is good for disks.
Even if it is a good thing for file-based backups, you have to consider the
effort of revising not just the backup software, but also the recovery and
all the administrative software to handle this special case.  The alternative,
which has been adopted, is to make the disk look like a tape.  Thus no
appending to disk file backups either.

 I appreciate your advice about the autochanger and will explore that.

It will probably meet your needs, just in a different way than you think is right.

jl
-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Re: GNU Tar for SunOS 4.1.3 Amanda client

2002-06-21 Thread Frank Smith

NFS can work, but if you didn't already have exports and mounts
on the machines then not all of the needed daemons would have
been running (the rc scripts check for exports & mounts and only
start certain daemons if they exist).  You can start the daemons
manually (look for what the rc scripts skip in 'if' tests).

tar and rsh should work, but rsh may be disabled in inetd.conf,
or blocked by firewalls, tcpwrappers/ipchains/iptables, or not
allow access by root.  Try rsh-ing to the SunOS box from your
linux box; SunOS was a lot more permissive by default than
current OSes.

Do you have any free space on the SunOS box so that you could
tar directories and then FTP them over?  If you use tar's
compression option and can delete directories after you FTP
them, you should be able to start with smaller directories
and work your way up through the big ones.
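
The tar route amounts to streaming one tar into another; the demo below runs entirely locally so it can be tried anywhere (paths are made up). On the real machines the first half of the pipeline would run under rsh, e.g. `rsh oldsun 'tar cf - /export/home' | tar xf - -C /srv/migrated`, so nothing is staged on the full SunOS disk:

```shell
# local stand-in for the remote pipeline: tar on the sending side,
# tar extracting on the receiving side, connected by a pipe
mkdir -p /tmp/migrate-demo/src /tmp/migrate-demo/dst
echo "hello" > /tmp/migrate-demo/src/file.txt
( cd /tmp/migrate-demo/src && tar cf - . ) | ( cd /tmp/migrate-demo/dst && tar xf - )

# the tar+gzip-then-FTP variant, using the same demo tree
tar cf - -C /tmp/migrate-demo src | gzip > /tmp/migrate-demo/src.tar.gz
```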

Good luck,
Frank


--On Friday, June 21, 2002 11:39:06 -0500 Jonathan R. Johnson 
[EMAIL PROTECTED] wrote:

 Dear Amanda users,

 We have a legacy Sun Sparc II server (SunOS 4.1.3) that was, until
 recently, our only file server with a tape drive (an Exabyte 8200, 2.3
 Gb capacity).  We're in the process of centralizing files onto a RedHat
 7.2 server running Amanda 2.4.2p2 (from the RPM -- no problems) with a
 Sony AIT-1 (SDX-300C) tape drive.

 Unfortunately, getting files off the Sun is proving to be tricky, due
 in large part to its outdated networking software.  I briefly tried to
 use NFS to mount a server file system on the Sun and copy over the
 files, but the NFS mount didn't work right...not my strong suit,
 anyway.  I've also tried various combinations of tar and rsh, to no
 avail so far.

 For now, I had hoped to just configure Amanda on it, even though this
 server will be taken off line as soon as its files have been securely
 set up on the new file server.  My first hurdle, though, is that the
 GNU tar 1.13.25 from ftp://alpha.gnu.org/gnu/tar is not compiling
 properly (we compiled and installed gcc 2.3.3 a long time ago),
 encountering parsing errors in intl/loadmsgcat.c.  Maybe this isn't the
 simplest way to take our files off this server...

 I'd love some suggestions here.  Ideally someone just knows where I can
 get tar 1.13.19 or newer precompiled for SunOS 4.1.3.  Otherwise, I'd
 probably have to recompile gcc with updated sources, etc.  In the short
 term, I suppose I could try using the existing dump software.

 We need to back this thing up before it dies!!  :-)

 Regards,

   Jonathan

 --
  /   Jonathan R. Johnson   | Every word of God is flawless. \
  |Minnetonka Software, Inc.| -- Proverbs 30:5 |
  \ [EMAIL PROTECTED] |  My own words only speak for me. /



--
Frank Smith[EMAIL PROTECTED]
Systems Administrator Voice: 512-374-4673
Hoover's Online Fax: 512-374-4501



Re: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread José

Hello to all;

--- Jon LaBadie [EMAIL PROTECTED] wrote:
 can not be sacrificed.  File: disk backups are one
 example, but be aware that
 it is a pretty recent addition to amanda and is
 probably undergoing teething
 pains.  (I have not used it, so I say that from
 ignorance)
 

We were using the file: tape-to-disk feature until we
recently bought a tape unit, and I can say it works
like a charm; the restore and the backup worked
without a problem, and migrating from one
configuration to the other was pretty easy.

We were backing up our clearcase and filesystem stuff
that way for at least 5 months.

JV.

=
José Vicente Nuñez Zuleta ([EMAIL PROTECTED])
Newbreak System Administrator (http://www.newbreak.com)
Office Phone:   203-355-1511, 203-355-1510
Fax:203-355-1512
Cellular Phone: 203-570-7078




Re: Backing up PostgreSQL?

2002-06-21 Thread Greg A. Woods

[ On Friday, June 21, 2002 at 18:37:20 (+0100), Paul Jakma wrote: ]
 Subject: Re: Backing up PostgreSQL?

 On Thu, 20 Jun 2002, Greg A. Woods wrote:
 
   Where does it say that close/open will flush metadata?
  
  That's how the unix filesystem works.  UTSL.
 
 that's an implementation issue - you can't rely on it.

Apparently you can on anything but linux, and I'm rather stunned by the
situation with the latter.

-- 
Greg A. Woods

+1 416 218-0098;  [EMAIL PROTECTED];  [EMAIL PROTECTED];  [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]; VE3TCP; Secrets of the Weird [EMAIL PROTECTED]



ext2 and mail (Was: Backing up PostgreSQL?)

2002-06-21 Thread Martín Marqués

On Jue 20 Jun 2002 17:17, Ragnar Kjørstad wrote:


 This issue has been discussed in depth on the reiserfs and reiserfs-dev
 lists; I propse you subscribe or browse the archives for more
 information. In particular there is a thread about filesystem-features
 required for mailservers, and there is a post from Wietse where he
 writes:

 ext2fs isn't a great file system for mail. Queue files are short-lived,
 so mail delivery involves a lot of directory operations.  ext2fs
 has ASYNCHRONOUS DIRECTORY updates and can lose a file even when
 fsync() on the file succeeds.

Sorry, but I think Wietse was talking about ext3, not ext2.

ext3 has asynchronous updates, as it's a journaling FS. I'm not an expert on
filesystems, but I read Wietse's mail on the Postfix mailing list.

-- 
Why use just any relational database, when you can use PostgreSQL?
-
Martín Marqués  |  [EMAIL PROTECTED]
Programmer, Administrator, DBA  |  Centro de Telemática,
   Universidad Nacional del Litoral
-



Re: Backing up PostgreSQL?

2002-06-21 Thread Martín Marqués

On Vie 21 Jun 2002 17:33, Greg A. Woods wrote:
 [ On Friday, June 21, 2002 at 18:37:20 (+0100), Paul Jakma wrote: ]

  Subject: Re: Backing up PostgreSQL?
 
  On Thu, 20 Jun 2002, Greg A. Woods wrote:
Where does it say that close/open will flush metadata?
  
   That's how the unix filesystem works.  UTSL.
 
  that's an implementation issue - you can't rely on it.

 Apparently you can on anything but linux, and I'm rather stunned by the
 situation with the latter

Other UNIXes? If you could rely on it, why does Informix (and other database 
servers, but this is the one I used) still use its own FS to write the 
database files?

-- 
Why use just any relational database, when you can use PostgreSQL?
-
Martín Marqués  |  [EMAIL PROTECTED]
Programmer, Administrator, DBA  |  Centro de Telemática,
   Universidad Nacional del Litoral
-



Re: Backing up PostgreSQL?

2002-06-21 Thread Greg A. Woods

[ On Friday, June 21, 2002 at 18:15:28 (+0100), Paul Jakma wrote: ]
 Subject: Re: Backing up PostgreSQL?

 On Fri, 14 Jun 2002, Greg A. Woods wrote:
 
  That's irrelevant from PostgreSQL's point of view.  There's no sure
  way to tell the postgresql process(es) to make the on-disk database
  image consistent before you create the snapshot.
 
 if it's pg 7.1+ then it has write-ahead-logging, and the on-disk state 
 /is/ guaranteed to be recoverable to a consistent state.
 
 in which case snapshot+backup of snapshot is a valid way to backup 
 pgsql.

Yes, I agree with this, assuming you're talking about an increment to
the currently available release that puts you up ahead to some mythical
future vapour-ware.  The ability to reliably restore consistency in such
a backup copy of the files not only depends on write-ahead-logging
support, but also on properly functioning REDO _and_ UNDO functions.

Note that the UNDO operation is still unimplemented in PostgreSQL-7.2.1
-- thus my original concern, and, along with other issues, why I say
that pg_dump really is the best way to do a backup of a pgsql database.

I.e. PostgreSQL releases up to, and including, 7.2.1 do not provide a
way to guarantee a database is always recoverable to a consistent state.
Current releases do not have a built-in way to remove changes already
made to data pages by uncommitted transactions.  From what I've read and
heard from pgsql developers you shouldn't hold your breath for this
feature either -- it probably requires a re-design of the
W.A.L. internals to realise in any reasonable implementation.
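A pg_dump-based backup of the sort argued for here can be sketched in a few lines. The database name and output path below are hypothetical, and the function is only invoked where pg_dump actually exists:

```shell
#!/bin/sh
# Sketch: back up a PostgreSQL database with pg_dump, giving a
# self-consistent dump that can be re-loaded into other versions and
# platforms.  Database name and output path are hypothetical.

backup_pgsql() {
    db="$1"
    out="/var/backups/pgsql/${db}-$(date +%Y%m%d).dump"

    # pg_dump reads a consistent view of the database contents
    # while the server keeps running.
    pg_dump "$db" > "$out" || return 1

    # The dump restores with psql into an arbitrary server, e.g.:
    #   createdb newdb && psql newdb < "$out"
}

# Only invoke the function where pg_dump is actually installed.
if command -v pg_dump >/dev/null 2>&1; then
    : # backup_pgsql mydb    # enable on a real server
fi
```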

When you restore a database from backups you really do want to just
immediately start the database engine and know that the restored files
have full integrity.  You really don't want to have to rely on W.A.L.

When you restore a database from backups during a really bad disaster
recovery procedure you sometimes don't even want to have to depend on
the on-disk-file format being the same.  You want to be able to load the
data into an arbitrary version of the software on an arbitrary platform.

-- 
Greg A. Woods

+1 416 218-0098;  [EMAIL PROTECTED];  [EMAIL PROTECTED];  [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]; VE3TCP; Secrets of the Weird [EMAIL PROTECTED]



Re: ext2 and mail (Was: Backing up PostgreSQL?)

2002-06-21 Thread Brandon D. Valentine

On Fri, 21 Jun 2002, Martín Marqués wrote:

On Jue 20 Jun 2002 17:17, Ragnar Kjørstad wrote:

 In particular there is a thread about filesystem-features
 required for mailservers, and there is a post from Wietse where he
 writes:

 ext2fs isn't a great file system for mail. Queue files are short-lived,
 so mail delivery involves a lot of directory operations.  ext2fs
 has ASYNCHRONOUS DIRECTORY updates and can lose a file even when
 fsync() on the file succeeds.

Sorry, but I think Wietse was talking about ext3, not ext2.

ext3 has asynchronous updates, as it's a journaling FS. I'm not an expert on filesystems, but I
read Wietse's mail on the Postfix mailing list.

Negative.  ext2 by default on every linux distro I've ever encountered
is mounted async.  You /can/ mount ext2 filesystems synchronously but
performance falls through the floor, the earth's crust and well into the
mantle.
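For the record, synchronous operation is just a mount option. The device and mount point below are hypothetical, and since remounting needs root the sketch only defines the command rather than running it:

```shell
#!/bin/sh
# Sketch: remounting an ext2 filesystem with synchronous updates.
# Device and mount point are hypothetical; requires root.

remount_sync() {
    # The 'sync' option forces all writes -- data and directory
    # updates alike -- to go to disk synchronously, at a large
    # performance cost.
    mount -o remount,sync /dev/hda5 /var/spool/mail
}

# The equivalent /etc/fstab entry would carry 'sync' in the options
# field:
#   /dev/hda5  /var/spool/mail  ext2  sync  1 2

# remount_sync is defined but deliberately not called here; run it
# as root on a system where those paths exist.
```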

-- 
Brandon D. Valentine [EMAIL PROTECTED]
Computer Geek, Center for Structural Biology

This isn't rocket science -- but it _is_ computer science.
- Terry Lambert on [EMAIL PROTECTED]




RE: 2.4.3 Tapeless file-driver pure-disk backups

2002-06-21 Thread Steve Follmer


Helpful to know the overall world order context. I can live with it,
especially now that I know and understand it.

There was a recent posting from someone using a SAN for file-driver
pure-disk backup. Just as disks are cheaper for small users they are
also cheaper for large users. A SAN, combined with a WAN (perhaps less
used at night), can provide geographic diversity for the backups. But
I'll have to wait and see if this becomes popular at large sites.

FWIW I understand Solaris 9 put a lot of effort into its GUI, so that is a
counterexample suggesting that Sun feels that GUIs are useful to large
installations. Probably what is happening is, if you spend several years
writing and maintaining the source code for a program, you hardly need a
GUI.

All I was thinking with appending was that then you wouldn't even need
the flip-flop script I recently posted but could just have a bone simple
amanda.conf that pointed amanda at a single BFD (big disk) and let it
rip until the disk was full. I guess what I'd envision, if anyone knows
of it, is an open source Second Copy 2000 that runs on unix, has a GUI,
stores in file system format for browsing/restore, and uses smbclient
for windoze boxes. 

That said, I agree that amanda meets my needs. I like it. The support
for windows clients seems a little light (e.g. no exclude lists (since
smbclient requires gnutar)) and I would not be surprised if the core
developers work in a windows-free environment, but more power to them.

-Steve

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Jon LaBadie
Sent: Friday, June 21, 2002 11:18 AM
To: [EMAIL PROTECTED]
Subject: Re: 2.4.3 Tapeless file-driver pure-disk backups


On Fri, Jun 21, 2002 at 10:00:58AM -0700, Steve Follmer wrote:
 
 I for one am glad that amanda has grown beyond its unix/tape roots to 
 support samba/windows and to support file: disk backups. This does not
 have to be sacrificed, in adding even more robust support for file: 
 and also, some SWAT like GUI. And this would make amanda more useful 
 for more people, if the amanda community wants to go in that 
 direction.

 Though there will be conflicts; appending is bad on tape, good on 
 disk. Explicitly specifying incremental backups is bad for large 
 networks, but I would desire it for my small network. That said, 
 certainly there are windows and mac clients in the large 
 commercial/university installations that characterize the bulk of 
 amanda users.
 
 I admit that the majority of existing amanda users use tape, but it's a
 biased legacy sample. But let us not ignore the possible users 
 represented by the exponential growth in home networks, linux, and 
 cheap disks; a growing audience of users like myself, for whom disk 
 backup is better than none, who don't have the money to sink into a 
 tape system, and who don't see enough upside (what are the odds that 2
 drives in different machines will die on the same day?) in changing and
 labeling tapes and shipping them off to the abandoned limestone mine.

Amanda was originally written by, and is currently maintained by, a
surprisingly small number of talented people.  Their objective, as
I understand it, is to provide quality backup services for their own
use, not for ours.  For the most part they do administer large sites
that use tape.  Widespread, increasing adoption of amanda on small
systems is not a requirement of that objective.

We small users are beneficiaries of their efforts.  Changes we see as
highly desirable, or immensely practical in our environments, may be of
little value in the environment of the amanda developers.  As they are
not developing a commercial product, increased sales is not an
incentive.  In fact, increased usage actually causes them to devote more
of their freely given time to supporting the new users.

This does not imply new features and niceties for small users do not
make it into amanda.  But the 'primary' use of amanda at large sites, as
you indicate, can not be sacrificed.  File: disk backups are one
example, but be aware that it is a pretty recent addition to amanda and
is probably undergoing teething pains.  (I have not used it, so I say
that from ignorance)

You indicate two desirable features, GUI and appending.  In general,
GUI's are wonderful things for end-users/desktop environments.  They
hold fewer benefits for large installations.  So any impetus and
development effort on a GUI is unlikely to come from the traditional
amanda developers.  That does not mean you, or someone else with a
similar desire, can't contribute their time freely and develop such an
interface and contribute it to the project.  Many would love to see
that.

Appending is a topic that comes up often (weekly? daily? :).  As you
seem to accept appending is bad for tape, it might only be used for disk
file-based backups.  I'm not at all sure why you feel appending is good
for disks. Even if it is a good thing for file-based backups, you have

Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad

Remember I've only been talking about the backed up files being in a
self-consistent state and not requiring roll-back or roll-forward of
any transaction logs after restore.

If you don't want your database to do roll-back or roll-forward,
backup snapshots won't work for you. Neither will power failures. And in
theory a clean shutdown could also leave the database in a state where
it required roll-forward on restart, but my guess is that's not the
case.


 In order to have consistent (from a database and application
 perspective) files on a filesystem backup of the database files, you
 MUST -- ABSOLUTELY MUST -- find some mechanism to synchronise between
 the database and the filesystem for the duration of the time it takes to
 make the filesystem backup, whether that's the few seconds it takes to
 create a snapshot, or the many minutes it takes to make a verbatim
 backup using 'dump' or whatever.  If all contingencies are to be covered
 the only sure way to do this is to shut down the database engine and
 ensure it has closed all of its files.  There must be no writes to the
 filesystem by the database process(es) during the time the backup or
 snapshot is being made.  None.  Not to the database files, nor to the
 transaction log.  None whatsoever.

There are no writes to the filesystem at all while the snapshot is in
progress. The LVM-driver will block all data-access until it is
completed. If not, it wouldn't be a snapshot, would it?
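The snapshot procedure under discussion might look roughly like this with Linux LVM. Volume group, logical-volume names, and mount points are hypothetical placeholders, and the function is defined but deliberately not invoked:

```shell
#!/bin/sh
# Sketch: back up a running database via an LVM snapshot.  Volume
# group, logical volume names, and mount points are hypothetical.

snapshot_backup() {
    # 1. Create the snapshot.  LVM blocks writes for the instant this
    #    takes, so it is an atomic point-in-time image of the volume.
    lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata || return 1

    # 2. Mount it read-only and back it up at leisure; the database
    #    keeps running against the original volume.
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap || return 1
    tar cf /backup/pgdata-snapshot.tar -C /mnt/pgsnap .

    # 3. Clean up.
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap
}

# Restoring such an image depends on the database's own crash
# recovery (WAL replay), exactly as after a power failure at the
# snapshot instant.  The function is left uninvoked in this sketch.
```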

 With great care and a separate backup of the database transaction log
 taken after the filesystem backup it may be possible to re-build
 database consistency after a restore, but I wouldn't ever want to risk
 having to do that in a disaster recovery scenario.  I would either want
 a guaranteed self-consistent filesystem copy of the database files, or a
 properly co-ordinated pg_dump of the database contents (and preferably
 I want both, and both representing the exact same state, though here
 there's more leeway for using a transaction log to record db state
 changes between one form of backup and the other :-)

Huh? Why would you want a separate backup of the database transaction
log? The log is stored in a file together with the rest of the database,
and will be included in the snapshot and the backup.

If you didn't backup the transaction-log at the exact same time you
backed up the rest of the files it would not work - it would be
inconsistent and there could be data missing.

 In the end if your DBA and/or developers have not taken into account the
 need to shut down the database for at least a short time on a regular
 basis in order to obtain good backups then you may have more serious
 problems on your hands (and you should find a new and more experienced
 DBA too! ;-).  The literature is full of ways of making secure backups
 of transactions -- but such techniques need to be considered in the
 design of your systems.  For example it's not unheard of in financial
 applications to run the transaction log direct to a hardware-mirrored
 tape drive, pausing the database engine (but not stopping it) only long
 enough to change tapes, and immediately couriering one tape to a secure
 off-site location.  Full backups with the database shut down are then
 taken only on much longer intervals and disaster recovery involves
 replaying all the transactions since the last full backup.  There are
 also tricks you can play with RAID-1 which are much faster and
 potentially safer than OS-level filesystem snapshots (and of course such
 tricks don't require snapshot support, which is still relatively rare
 and unproven in many of the systems where it is available).  These
 tricks allow you to get away with shutting down the database only so
 long as it takes to swap the mirror disk from one set to another, at
 which point you can make a leisurely backup of the quiescent side of the
 mirror.  Then the RAID hardware can do the reconstruction of the mirror
 while the database remains live.

Splitting a RAID-1 has the exact same properties as a filesystem
snapshot (i.e. if the filesystem is online it must be done atomically,
or with access to the device blocked (which is just another way of
saying the database is paused)). The only differences are operational,
like:
* splitting the mirror is faster (say 1ms instead of 1s)
* a mirror requires 2x data-size, while snapshots will require less
  unless all the data is changed during backup
* a mirror will have to be resynchronized
* a split mirror has no write-penalty


  Just to make sure there is no (additional) confusion here; what I'm
  saying is:
  1. Meta-data must be updated properly. This is obvious and 
 shouldn't require further explanation...
  2. non-journaling filesystems (e.g. ext2 on linux) do update
 the inode-metadata on fsync(), but they do not update the
 directory. 
 
 The Unix Fast File System is not a log/journaling filesystem.  However
 it does not suffer the problems you're worried about.

That 

Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 05:17:35PM -0400, Greg A. Woods wrote:
 Yes, I agree with this, assuming you're talking about an increment to
 the currently available release that puts you up ahead to some mythical
 future vapour-ware.  The ability to reliably restore consistency in such
 a backup copy of the files not only depends on write-ahead-logging
 support, but also on properly functioning REDO _and_ UNDO functions.

It should only require REDO.
No changes are made to the actual database-files until the transaction
is committed, written to the WAL and fsynced. At this point there is no
longer a need for UNDO.

 I.e. PostgreSQL releases up to, and including, 7.2.1 do not provide a
 way to guarantee a database is always recoverable to a consistent state.
 Current releases do not have a built-in way to remove changes already
 made to data pages by uncomitted transactions.

But it doesn't make changes to the data-pages until the transaction is
committed.

If it had, your database would have been toast if you lost power (or at
least not correct).


 When you restore a database from backups you really do want to just
 immediately start the database engine and know that the restored files
 have full integrity.  You realy don't want to have to rely on W.A.L.

I really don't see why relying on WAL is any different from relying on
other postgresql features - and it is hard to run a postgresql database
without relying on postgresql.

 When you restore a database from backups during a really bad disaster
 recovery procedure you sometimes don't even want to have to depend on
 the on-disk-file format being the same.  You want to be able to load the
 data into an arbitrary version of the software on an arbitrary platform.

Yes; unfortunately this is not even possible with pg_dump. (it is better
than a filesystem backup in this regard, but there are still
version-incompatibilities that have to be fixed manually)



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 06:28:59PM +0100, Paul Jakma wrote:
  Now you're getting a little out of hand.  A journaling filesystem is
  a piling of one set of warts ontop of another.  Now you've got a
  situation where even though the filesystem might be 100% consistent
  even after a catastrophic crash, the database won't be.  There's no
  need to use a journaling filesystem with PostgreSQL 
 
 eh? there is great need - this is the only way to guarantee that when 
 postgresql does operations (esp on its own application level logs) 
 that the operation will either:
 
 - be completely carried out
 or
 - not carried out at all

Journaling filesystems don't provide this guarantee in general,
because the transactional interface is not exposed to userspace. The
only thing the filesystem guarantees is that filesystem operations are
carried out completely or not at all.

If a non-journaling filesystem crashes while rename() is in progress,
the file may be present in two directories or none (depending on
implementation). If you create a file, write to it and then crash, the
file may be gone from the directory. _These_ are the problems solved by
journaling filesystems.

Luckily postgresql implements its own system (WAL) to get the same
feature (atomic updates) at the database level.


[ There is actually some work underway to export a transactional
filesystem API to userspace. When this is completed, an application
could tell the filesystem which operations are part of a transaction,
and have atomic updates even across multiple files :-) ]

  either full mirroring or full level 5 protection).  Indeed there are
  potentially performance related reasons to avoid journaling
  filesystems!
 
 if they're any good they should have better synchronous performance 
 over normal unix fs's. (and synchronous perf. is what a db is 
 interested in).

Synchronous metadata updates (create/rename ++): yes - they should be
faster. But postgresql doesn't do many of those.

Synchronous data updates (write/append): no - because postgresql
already does the writes to a log, so there are no seeks involved in the
sync writes. (the writes to the actual files happen asynchronously).


So there is a theoretical improvement, but it's not likely to show up on
a typical SQL-benchmark...




-- 
Ragnar Kjørstad
Big Storage



Re: use of changer

2002-06-21 Thread Jason Brooks

Hello,

Here is the output from mtx and loaderinfo.  Note that though loaderinfo
doesn't show a barcode reader, mtx's output clearly shows one.  I can
confirm that those VolumeTags are accurate.


--mtx---
[streak : changer]$ mtx inquiry
Product Type: Medium Changer
Vendor ID: 'ATL '
Product ID: 'L500 632'
Revision: '0026'
Attached Changer: No
[streak : changer]$ mtx status 
  Storage Changer /dev/changer:1 Drives, 13 Slots ( 0 Import/Export )
Data Transfer Element 0:Full (Storage Element 1 Loaded)
  Storage Element 1:Empty
  Storage Element 2:Full :VolumeTag=RBN066  
  Storage Element 3:Full :VolumeTag=RBN067  
  Storage Element 4:Full :VolumeTag=RBN068  
  Storage Element 5:Full :VolumeTag=RBN069  
  Storage Element 6:Full :VolumeTag=RBN070  
  Storage Element 7:Full :VolumeTag=RBN071  
  Storage Element 8:Full :VolumeTag=RBN072  
  Storage Element 9:Full :VolumeTag=RBN073  
  Storage Element 10:Full :VolumeTag=RBN074  
  Storage Element 11:Full :VolumeTag=RBN075  
  Storage Element 12:Full :VolumeTag=RBN076  
  Storage Element 13:Full :VolumeTag=RBN077  


-loaderinfo
[streak : jason0]$ loaderinfo -f /dev/sg1
Product Type: Medium Changer
Vendor ID: 'ATL '
Product ID: 'L500 632'
Revision: '0026'
Attached Changer: No
Bar Code Reader: No
EAAP: Yes
Number of Medium Transport Elements: 1
Number of Storage Elements: 13
Number of Import/Export Element Elements: 0
Number of Data Transfer Elements: 1
Transport Geometry Descriptor Page: Yes
Invertable: No
Device Configuration Page: Yes
Can Transfer: No



On Fri, Jun 21, 2002 at 01:25:05PM -0400, Nicholas Berry wrote:
 Hi.
 
 The sense data says a SCSI device/bus or power on reset occurred.
 
 All is not well.  I assume you're using Linux, from the /dev/sg1 reference. Get the 
mtx package installed, and see what that makes of your changer.
 
 Nik
 
 
 
  Jason Brooks [EMAIL PROTECTED] 06/20/02 06:36PM 
 Hello,
 
 How do I get amanda to use the barcode reader on my brand-new
 scsi-changer?  (it's an independent changer with its own scsi-generic
 interface)
 
 the changer appears to work otherwise, though I haven't actually begun
 testing backups on it...
 
 the ./docs/TAPE.CHANGER file contains the following:
 
   To initialize the label mapping you can also do an amtape 
   config show.  This will put the mapping for all tapes in the 
   magazine into the mapping file.
 
 This doesn't work for me.  Enclosed is my changer.conf file and the
 relevant entries from my amanda.conf file.
 
 I would like to have amanda cross-reference barcodes to tape labels,
 perhaps with an inventory control to verify barcode/amlabel mapping.  If
 amanda needs a certain label, it tells the changer to load barcode #n.
 
 //cut from amanda.conf
 tpchanger chg-scsi
 tapedev 0   # the abstract changer config in the changer file
 changerfile /usr/local/etc/amanda/changer/changer.conf
 
 //my changer.conf contents:
 number_configs 1
 eject 0 
 sleep 140
 cleanmax 20
 changerdev /dev/sg1
 
 config 0
 drivenum0
 dev /dev/nst1
 startuse0
 havebarcode 1
 enduse  12
 cleancart   13
 statfile/var/amanda/DailySet2/tape0-slot
 cleanfile   /var/amanda/DailySet2/tape0-clean
 usagecount  /var/amanda/DailySet2/totaltime
 tapestatus  /var/amanda/DailySet2/tapestatus
 labelfile   /usr/local/etc/amanda/changer/labelfile
 
 //sample contents of a chg-scsi*debug file
 chg-scsi: debug 1 pid 1218 ruid 11600 euid 11600 start time Wed Jun 19 15:27:48 
 2002
 chg-scsi: $Id: chg-scsi.c,v 1.1 2002/06/18 18:25:41 jason0 Exp $
 ARG [0] : /usr/local/libexec/chg-scsi
 ARG [1] : -slot
 ARG [2] : current
 Number of configurations: 1
 Tapes need eject: No
 Inv. auto update: No
 barcode reader  : Yes
 Tapes need sleep: 140 seconds
 Cleancycles : 20
 Changerdevice   : /dev/sg1
 Labelfile   : /usr/local/etc/amanda/changer/labelfile
 Tapeconfig Nr: 0
   Drivenumber   : 0
   Startslot : 0
   Endslot   : 12
   Cleanslot : 13
   Devicename: /dev/nst1
   changerident  : none
   SCSITapedev   : none
   tapeident : none
   statfile  : /var/amanda/DailySet2/tapestatus
   Slotfile  : /var/amanda/DailySet2/tape0-slot
   Cleanfile : /var/amanda/DailySet2/tape0-clean
   Usagecount: /var/amanda/DailySet2/totaltime
  # START SenseHandler
 Ident = [L500 632], function = [generic]
 # START GenericSenseHandler
 # START DecodeSense
 GenericSenseHandler : Sense Keys
 Extended Sense 
 ASC   29
 ASCQ  00

attempt to use chg-zd-mtx...

2002-06-21 Thread Jason Brooks

Hello again,

I created another amanda configuration and set it up to use chg-zd-mtx
since mtx apparently showed accurate information.

Here is the result of just running chg-zd-mtx -info
-- none could not determine current slot

and here is the chg-zd-mtx*debug file

chg-zd-mtx: debug 1 pid 16486 ruid 11600 euid 11600 start time Fri Jun 21 15:42:
55 2002
15:42:55 Arg info:
 $# = 1
 $0 = /usr/local/libexec/chg-zd-mtx
 $1 = -info
15:42:55 Running: /usr/local/sbin/mtx status
15:42:55 Exit code: 0
 Stdout:
  Storage Changer /dev/sg1:1 Drives, 13 Slots ( 0 Import/Export )
Data Transfer Element 0:Full (Storage Element 1 Loaded)
  Storage Element 1:Empty
  Storage Element 2:Full :VolumeTag=RBN066  
  Storage Element 3:Full :VolumeTag=RBN067  
  Storage Element 4:Full :VolumeTag=RBN068  
  Storage Element 5:Full :VolumeTag=RBN069  
  Storage Element 6:Full :VolumeTag=RBN070  
  Storage Element 7:Full :VolumeTag=RBN071  
  Storage Element 8:Full :VolumeTag=RBN072  
  Storage Element 9:Full :VolumeTag=RBN073  
  Storage Element 10:Full :VolumeTag=RBN074  
  Storage Element 11:Full :VolumeTag=RBN075  
  Storage Element 12:Full :VolumeTag=RBN076  
  Storage Element 13:Full :VolumeTag=RBN077  
15:42:55 SLOTLIST - firstslot set to 2
15:42:55 SLOTLIST - lastslot set to 13
15:42:55 Config info:
 firstslot = 2
 lastslot = 13
 cleanslot = -1
 cleancycle = -1
 offline_before_unload = 0
 unloadpause = 0
 autoclean = 0
 autocleancount = 99
 havereader = 0
 driveslot = 1
 poll_drive_ready = 3
 max_drive_wait = 120
15:42:55 SETUP- current slot -1 less than 2 ... resetting to 2
15:42:55 Exit (2) - none could not determine current slot
chg-zd-mtx: pid 16615 finish time Fri Jun 21 15:42:56 2002
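(Aside: the "Config info" values above appear to come straight from the changerfile, which chg-zd-mtx reads as shell variable assignments. Assuming the variable names match what the script prints -- this is a hypothetical sketch, not a verified config -- a changerfile along these lines should enable the barcode reader and pin the slot range:)

```shell
# Hypothetical changerfile for chg-zd-mtx, read as shell variable
# assignments; names taken from the 'Config info' debug output above.
firstslot=2     # first usable storage slot
lastslot=13     # last usable storage slot
havereader=1    # the changer has a barcode reader (mtx shows VolumeTags)
driveslot=0     # data transfer element the tapes load into
```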


What do you think of this?
On Fri, Jun 21, 2002 at 01:25:05PM -0400, Nicholas Berry wrote:
 Hi.
 
 The sense data says a SCSI device/bus or power on reset occurred.
 
 All is not well.  I assume you're using Linux, from the /dev/sg1 reference. Get the 
mtx package installed, and see what that makes of your changer.
 
 Nik
 
 
 
  Jason Brooks [EMAIL PROTECTED] 06/20/02 06:36PM 
 Hello,
 
 How do I get amanda to use the barcode reader on my brand-new
 scsi-changer?  (it's an independent changer with its own scsi-generic
 interface)
 
 the changer appears to work otherwise, though I haven't actually begun
 testing backups on it...
 
 the ./docs/TAPE.CHANGER file contains the following:
 
   To initialize the label mapping you can also do an amtape 
   config show.  This will put the mapping for all tapes in the 
   magazine into the mapping file.
 
 This doesn't work for me.  Enclosed is my changer.conf file and the
 relevant entries from my amanda.conf file.
 
 I would like to have amanda cross-reference barcodes to tape labels,
 perhaps with an inventory control to verify barcode/amlabel mapping.  If
 amanda needs a certain label, it tells the changer to load barcode #n.
 
 //cut from amanda.conf
 tpchanger chg-scsi
 tapedev 0   # the abstract changer config in the changer file
 changerfile /usr/local/etc/amanda/changer/changer.conf
 
 //my changer.conf contents:
 number_configs 1
 eject 0 
 sleep 140
 cleanmax 20
 changerdev /dev/sg1
 
 config 0
 drivenum0
 dev /dev/nst1
 startuse0
 havebarcode 1
 enduse  12
 cleancart   13
 statfile/var/amanda/DailySet2/tape0-slot
 cleanfile   /var/amanda/DailySet2/tape0-clean
 usagecount  /var/amanda/DailySet2/totaltime
 tapestatus  /var/amanda/DailySet2/tapestatus
 labelfile   /usr/local/etc/amanda/changer/labelfile
 
 //sample contents of a chg-scsi*debug file
 chg-scsi: debug 1 pid 1218 ruid 11600 euid 11600 start time Wed Jun 19 15:27:48 
 2002
 chg-scsi: $Id: chg-scsi.c,v 1.1 2002/06/18 18:25:41 jason0 Exp $
 ARG [0] : /usr/local/libexec/chg-scsi
 ARG [1] : -slot
 ARG [2] : current
 Number of configurations: 1
 Tapes need eject: No
 Inv. auto update: No
 barcode reader  : Yes
 Tapes need sleep: 140 seconds
 Cleancycles : 20
 Changerdevice   : /dev/sg1
 Labelfile   : /usr/local/etc/amanda/changer/labelfile
 Tapeconfig Nr: 0
   Drivenumber   : 0
   Startslot : 0
   Endslot   : 12
   Cleanslot : 13
   Devicename: /dev/nst1
   changerident  : none
   SCSITapedev   : none
   tapeident : none
   statfile  : /var/amanda/DailySet2/tapestatus
   Slotfile  : /var/amanda/DailySet2/tape0-slot
   Cleanfile : /var/amanda/DailySet2/tape0-clean
   Usagecount: /var/amanda/DailySet2/totaltime
  # START SenseHandler
 

Amanda, clearcase and Veritas

2002-06-21 Thread Jason Brooks

Hello,

I am about to build a wrapper script as a stand-in for vxdump.  This
wrapper will:
a) lock all of the clearcase vobs
b) mount a snapshot volume
c) unlock the vobs
d) run vxdump on that snapshot volume
e) unmount the snapshot volume.
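The steps above might be sketched as follows. The vob tags, device paths, and mount points are all hypothetical placeholders, and the function is defined but not invoked:

```shell
#!/bin/sh
# Sketch of a vxdump stand-in that backs up ClearCase vobs from a
# snapshot volume.  Vob tags, device paths, and mount points are
# hypothetical placeholders.

vxdump_wrapper() {
    # a) lock all of the clearcase vobs
    cleartool lock vob:/vobs/src vob:/vobs/docs || return 1

    # b) mount a snapshot volume holding the vob storage
    if ! mount /dev/vx/dsk/dg0/vobsnap /mnt/vobsnap; then
        cleartool unlock vob:/vobs/src vob:/vobs/docs
        return 1
    fi

    # c) unlock the vobs so normal work resumes during the dump
    cleartool unlock vob:/vobs/src vob:/vobs/docs

    # d) run vxdump on the quiescent snapshot, passing through the
    #    arguments amanda handed to what it thinks is vxdump
    vxdump "$@" /mnt/vobsnap
    status=$?

    # e) unmount the snapshot volume
    umount /mnt/vobsnap
    return $status
}

# In real use the script's last line would be: vxdump_wrapper "$@"
# It is left uninvoked in this sketch.
```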

Has anyone already done this, and what sorts of problems have you had?

thanks...

--jason

-- 

~~~
Jason Brooks ~ (503) 641-3440 x1861
  Direct ~ (503) 924-1861
System / Network Administrator 
Wind River Systems
8905 SW Nimbus ~ Suite 255  
Beaverton, Or 97008



Re: Backing up PostgreSQL?

2002-06-21 Thread Greg A. Woods

[ On Friday, June 21, 2002 at 18:00:19 (-0300), Martín Marqués wrote: ]
 Subject: Re: Backing up PostgreSQL?

 On Vie 21 Jun 2002 17:33, Greg A. Woods wrote:
  [ On Friday, June 21, 2002 at 18:37:20 (+0100), Paul Jakma wrote: ]
 
   Subject: Re: Backing up PostgreSQL?
  
   On Thu, 20 Jun 2002, Greg A. Woods wrote:
 Where does it say that close/open will flush metadata?
   
That's how the unix filesystem works.  UTSL.
  
   that's an implementation issue - you cant rely on it.
 
  Apparently you can on anything but linux, and I'm rather stunned by the
  situation with the latter
 
  Other UNIXes? If you could rely on it, why does Informix (and other database 
  servers, but Informix is the one I used) still use its own FS to write the 
  database files?

Because there are tremendous performance advantages to using RAW I/O,
if you're prepared to go to the trouble of dealing with all the niggling
details.

(some of these advantages are supposedly less visible on modern hardware
and modern systems, but they sure as heck were plainly visible even a
few years ago.)

-- 
Greg A. Woods

+1 416 218-0098;  [EMAIL PROTECTED];  [EMAIL PROTECTED];  [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]; VE3TCP; Secrets of the Weird [EMAIL PROTECTED]



Re: Backing up PostgreSQL?

2002-06-21 Thread Greg A. Woods

[ On Saturday, June 22, 2002 at 00:07:05 (+0200), Ragnar Kjørstad wrote: ]
 Subject: Re: Backing up PostgreSQL?

 On Fri, Jun 21, 2002 at 05:17:35PM -0400, Greg A. Woods wrote:
  Yes, I agree with this, assuming you're talking about an increment to
  the currently available release that puts you up ahead to some mythical
  future vapour-ware.  The ability to reliably restore consistency in such
  a backup copy of the files not only depends on write-ahead-logging
  support, but also on properly functioning REDO _and_ UNDO functions.
 
 It should only require REDO.
 No changes are made to the actual database-files until the transaction
 is committed, written to the WAL and fsynced. At this point there is no
 longer a need for UNDO.

Hmmm possibly.  I'm not that intimately familiar with the current
WAL implementation, though what I have stated comes directly from the
7.2.1 manual.  If I'm wrong then so is the manual.  :-)


 But it doesn't make changes to the data-pages until the transaction is
 committed.
 
 If it had, your database would have been toast if you lost power (or at
 least not correct).

And so I assume it could be -- and so the manual claims it could be, too.

As I read it the manual claims the future implementation of UNDO will
have the future benefit of avoiding this danger.


  When you restore a database from backups you really do want to just
  immediately start the database engine and know that the restored files
  have full integrity.  You really don't want to have to rely on W.A.L.
 
 I really don't see why relying on WAL is any different from relying on
 other postgresql features - and it is hard to run a postgresql database
 without relying on postgresql.

Well, the WAL implementation is new, acknowledged to be incomplete, and
so far as I can tell requires additional work on startup.

Indeed re-loading a pg_dump will take lots more time, but that's why I
want my DB backups to be done both as a ready-to-roll set of file images
as well as a pg_dump  :-)

  When you restore a database from backups during a really bad disaster
  recovery procedure you sometimes don't even want to have to depend on
  the on-disk-file format being the same.  You want to be able to load the
  data into an arbitrary version of the software on an arbitrary platform.
 
 Yes; unfortunately this is not even possible with pg_dump. (it is better
 than a filesystem backup in this regard, but there are still
 version-incompatibilities that have to be fixed manually)

Indeed -- but so goes the entire world of computing!  ;-)

-- 
Greg A. Woods

+1 416 218-0098;  [EMAIL PROTECTED];  [EMAIL PROTECTED];  [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]; VE3TCP; Secrets of the Weird [EMAIL PROTECTED]



Re: Backing up PostgreSQL?

2002-06-21 Thread Greg A. Woods

[ On Friday, June 21, 2002 at 23:59:27 (+0200), Ragnar Kjørstad wrote: ]
 Subject: Re: Backing up PostgreSQL?

 Remember I've only been talking about the backed up files being in a
 self-consistent state and not requiring roll-back or roll-forward of
 any transaction logs after restore.
 
 If you don't want your database to do roll-back or roll-forward,
 backup snapshots won't work for you. Neither will power failures. And in
 theory a clean shutdown could also leave the database in a state where
 it required roll-forward on restart, but my guess is that's not the
 case.

Actually I'd assume the opposite -- that even a clean shutdown of the
engine could leave an uncommitted transaction in the database image and
thus might require a log UNDO (not a log REDO).

The difficulty of course being those pesky long-running transactions,
and the fact that you can't always shut down the clients first (and thus
hopefully trigger a proper undo of the open transactions before the
database engine itself is shut down).

 There are no writes to the filesystem at all while the snapshot is in
 progress. The LVM-driver will block all data-access until it is
 completed. If not, it wouldn't be a snapshot, would it?

I realize that -- but it doesn't mean the snapshot couldn't begin
between two inter-related writes to two separate files (or even two
separate but inter-related writes to two separate pages in the same
file).  If both writes are necessary to result in a consistent DB then
the snapshot will necessarily be inconsistent by definition.  If you
want the snapshot to be in a self-consistent state you have to somehow
pause the database at a point where there are no open transactions and
make sure the database engine has issued writes for every bit of data
for all committed transactions (and potentially even that it has closed
all its open files so that FS metadata is also updated).  The best way
to do this seems to be to shut down the DB engine, and it's certainly
the only application-independent procedure I can think of.
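That application-independent procedure -- quiesce the engine, snapshot,
restart, then back up from the snapshot at leisure -- can be sketched as
below.  The volume names, mount point, and data directory are my
assumptions; the script only prints the command sequence rather than
executing it:

```shell
#!/bin/sh
# Sketch of the "shut down, snapshot, restart" procedure: downtime lasts
# only from STOP to START, and the backup then reads the frozen snapshot.
# Volume and path names are hypothetical.
STOP="pg_ctl stop -D /var/lib/pgsql/data -m fast"
SNAP="lvcreate -L 1G -s -n pgsnap /dev/vg0/pgdata"
START="pg_ctl start -D /var/lib/pgsql/data"
MOUNT="mount -o ro /dev/vg0/pgsnap /mnt/pgsnap"
# Dry run: print the sequence in order instead of running it.
for cmd in "$STOP" "$SNAP" "$START" "$MOUNT"; do
    echo "$cmd"
done
```

Because no transactions are open between STOP and SNAP, the snapshot
needs neither UNDO nor REDO on restore -- which is exactly the
self-consistency being argued for above.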

 Huh? Why would you want a separate backup of the database transaction
 log? The log is stored in a file together with the rest of the database,
 and will be included in the snapshot and the backup.

Yeah, but I'm assuming that I'll have to manually UNDO any uncommitted
transactions, as per the 7.2.1 manual.

 If you didn't backup the transaction-log at the exact same time you
 backed up the rest of the files it would not work - it would be
 inconsistent and there could be data missing.

Indeed -- however I've been trying to think of a way to find open
transactions in an incomplete transaction log.  If an open transaction
was not opened in the current log then I've got to look for it to be
closed to know that it was incomplete (and thus needs to be undone) at
the time the FS snapshot was taken.  I was probably barking up the wrong
tree with that idea, though.

-- 
Greg A. Woods

+1 416 218-0098;  [EMAIL PROTECTED];  [EMAIL PROTECTED];  [EMAIL PROTECTED]
Planix, Inc. [EMAIL PROTECTED]; VE3TCP; Secrets of the Weird [EMAIL PROTECTED]



Re: Backing up PostgreSQL?

2002-06-21 Thread Michael H. Collins

I thought I just read that PostgreSQL on ext3 outran Oracle on raw
devices.

~Because there are tremendous performance advantages to using raw I/O if
~you're prepared to go to the trouble of dealing with all the niggling
~details.
~
~(some of these advantages are supposedly less visible on modern
~hardware and modern systems, but they sure as heck were plainly visible
~even a few years ago)
~
~-- 
-- 


 (o_
 //\ 
 V_/_ 
Michael H. Collins  Admiral, Penguinista Navy
http://www.mdrconsult.com   http://www.lrsehosting.com/
http://kpig.com http://beta.rawdeal.org