Re: Backing up PostgreSQL?

2002-06-13 Thread Ragnar Kjørstad

On Thu, Jun 13, 2002 at 02:13:39PM -0500, Kirk Strauser wrote:
> I just finished reading `Unix Backup and Recovery' and realized that there
> are quite a few holes in my disaster recovery plan.  In particular, I'm
> looking for a way to backup PostgreSQL without taking the database offline.
> I'm assuming that whatever method I use will involve writing a wrapper
> script to prepare the backup, launch Amanda, then clean up afterward, but
> that's about as much as I've guessed right now.

As an alternative to the pg_dump approach others have suggested, you can
snapshot your filesystem and then run a regular backup of the snapshot.

The easiest way to snapshot the filesystem is to use a logical volume
manager (LVM or EVMS on Linux) and then do the following (a rough sketch
of the commands follows the list):
1. take database offline
2. take snapshot
3. take database online
4. backup from snapshot
5. remove snapshot
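
A minimal sketch of those five steps, untested and with made-up
volume/mountpoint/tape names - adjust for your own setup (and run the
pg_ctl steps as the postgres user):

   pg_ctl -D /var/lib/pgsql/data stop             # 1. take database offline
   lvcreate -s -L 1G -n pgsnap /dev/vg0/pgdata    # 2. take snapshot
   pg_ctl -D /var/lib/pgsql/data start            # 3. take database online
   mkdir -p /mnt/pgsnap
   mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
   tar cf /dev/nst0 /mnt/pgsnap                   # 4. backup from snapshot
   umount /mnt/pgsnap
   lvremove -f /dev/vg0/pgsnap                    # 5. remove snapshot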


-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-14 Thread Ragnar Kjørstad

On Thu, Jun 13, 2002 at 04:57:16PM -0500, Kirk Strauser wrote:
> 
> At 2002-06-13T21:26:22Z, Ragnar Kjørstad <[EMAIL PROTECTED]> writes:
> 
> > The easiest way to snapshot the filesystem is to use a logical volume
> > manager (LVM or EVMS on linux) and then do:
> > 1. take database offline
> > 2. take snapshot
> > 3. take database online
> > 4. backup from snapshot
> > 5. remove snapshot
> 
> I'm on FreeBSD-STABLE right now, so that's unfortunately not an option at
> the moment.  I'm interested that you include steps #1 and #3, though.  On
> FreeBSD-CURRENT, the snapshot is atomic.  There wouldn't be a need to stop
> or restart any services.  Is it different on Linux?

The snapshot is atomic on Linux as well, but by shutting the database
down first you allow PostgreSQL to shut down cleanly.

If fsync is enabled and so on, you should still be able to start with a
consistent database, so I suppose stopping it is not strictly required.



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-14 Thread Ragnar Kjørstad

On Thu, Jun 13, 2002 at 05:33:31PM -0500, Brandon D. Valentine wrote:
> >The easiest way to snapshot the filesystem is to use a logical volume
> >manager (LVM or EVMS on linux) and then do:
> >1. take database offline
> >2. take snapshot
> >3. take database online
> >4. backup from snapshot
> >5. remove snapshot
> 
> I would like to comment that while this is a possible way to backup your
> database, it's not the way I would recommend going about it.  There are
> a couple of caveats:
> 
> 1) In order to take a filesystem snapshot you must have enough diskspace
> elsewhere to contain the filesystem snapshot.  If your database resides
> on a large filesystem with other data[0] then you're unlikely to want to
> deal with caching an entire snapshot until amanda backs it up.

You need extra space with both approaches (pg_dump and snapshot). Which
solution requires more space will depend on many factors, e.g. how
much you write to your database. If it's mostly read-only, the snapshot
will not require much space.

> 2) Backing up a database by grabbing the actual files off of the disk
> can introduce problems if trying to restore onto, for instance, an
> upgraded version of Postgres, which might have changed the ondisk
> representation slightly.  There are also problems if you migrate to a
> different architecture since things like byte ordering can change.  By
> generating a database dump with pg_dump you insure that you have a
> portable, plain text file full of valid query commands which can be read
> into any future version of Postgres and possibly even into other RDMBS
> products provided you choose a pg_dump format which is standards
> complaint enough.

Yes, this is a backup solution, not a migration solution.

pg_dump output can not always be imported into newer PostgreSQL versions
without modifications.

> 3) If your snapshot code is not atomic it means you must take your
> database server down everytime you make the snapshot, which on a large
> filesystem could be a non-trivial amount of time.  With pg_dump you're
> just dumping the tables via the standard Postgres interface so you've
> got no issues with doing it on a running database.

Hmm, is the pg_dump consistent?
In other words, is it done in a single transaction (even for multiple
databases)? If so, wouldn't a very long-running pg_dump cause problems
for the running server? I know PostgreSQL doesn't lock whole tables,
but if data changes while the dump runs, PostgreSQL needs to keep two
versions of the rows, which requires extra disk space and I suppose
also introduces overhead for other processes.

Of course, if the database is mostly read-only this is only a minor
problem, but that's true for snapshot backups as well.



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-14 Thread Ragnar Kjørstad

On Fri, Jun 14, 2002 at 04:32:20PM +0100, Niall O Broin wrote:
> > 1) Make a snapshot
> > 2) Use dump to back up that completely static filesystem image
> > 3) Remove the snapshot
> 
> This is NOT guaranteed to work - it may, if you're lucky. By doing this
> you're guaranteeing that the database files, no matter how active the
> database, are frozen in time via the snapshot. But the big issue that you're
> failing to address here is that any one point in time the database files are
> not internally consistent. 

They're always consistent. 

If they weren't, you would not be able to recover from a crash or
power failure.



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-15 Thread Ragnar Kjørstad

On Fri, Jun 14, 2002 at 09:15:15PM -0400, Greg A. Woods wrote:
> [ On Saturday, June 15, 2002 at 00:45:12 (+0200), Ragnar Kjørstad wrote: ]
> > Subject: Re: Backing up PostgreSQL?
> >
> > snapshots (LVM snapshots) are not "supposedly nearly instantaneous", but
> > "instantaneous". All write-access to the device is _locked_ while the
> > snapshot is in progress (the process takes a few milliseconds, maybe a
> > second on a big system), and there are _no_ races. That's the whole
> > point of the snapshot!
> 
> That's irrelevant from PostgreSQL's point of view.  There's no sure way
> to tell the postgresql process(es) to make the on-disk database image
> consistent before you create the snapshot.  The race condition is
> between the user-level process and the filesystem.  The only sure way to
> guarantee a self-consistent backup is to shut down the process so as to
> ensure all the files it had open are now closed.  PostgreSQL makes no
> claims that all data necessary to present a continually consistent view
> of the DB will be written in a single system call.  In fact this is
> impossible since the average database consistes of many files and you
> can only write to one file at a time through the UNIX system interface.

Yes it does, and no, it's not impossible.
See http://www.postgresql.org/idocs/index.php?wal.html

> Yes there are other ways to recover consistency using other protection
> mechanisms maintained by PostgreSQL, but you don't want to be relying on
> those when you're doing backups -- you barely want to rely on those when
> you're doing crash recovery!

There is certainly a tradeoff. It's always a good idea to check the
validity of one's backups, and this is even more important in cases like
this where the process is relatively complicated.

> If doing a snapshot really is that fast then there's almost no excuse
> _not_ to stop the DB -- just do it!  Your DB downtime will not be
> noticable.

Stopping the database means closing all the connections, and if you have
multiple applications doing long overlapping transactions that don't
recover well from shutting down the database, then you have a problem.

> > To postgresql (or any other application) the rollback of an snapshot
> > (or the backup of a snapshot) will be exactly like recovering from a
> > crash. Database-servers need to write the data to disk (and fsync)
> > before the transaction is completed. In practise they don't actually
> > write it to the database-files but to a log, but that doesn't change the
> > point. 
> > 
> > So, the only advantage of shutting down the database is that it doesn't
> > have to recover like from a crash.
> 
> Newer releases of PostgreSQL don't always use fsync().  I wouldn't trust
> recovery to be consistent without any loss implicitly.  

The newest release of PostgreSQL always uses fsync (on its log) unless
you specifically turn it off. You shouldn't do that if you care about
your data.


> Because
> PostgreSQL uses the filesystem (and not raw disk) the only way to be
> 100% certain that what's written to disk is a consistent view of the DB
> is to close all the open DB files.

The only requirement on the filesystem is that it is journaling as well,
so it is always kept in a consistent state, just like the PostgreSQL
database.
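
For example on Linux (filesystem type and device/mountpoint names are
just for illustration - any journaling filesystem should do):

   mount -t reiserfs /dev/vg0/pgdata /var/lib/pgsql
   # or ext3, which journals metadata (and optionally data as well):
   mount -t ext3 -o data=ordered /dev/vg0/pgdata /var/lib/pgsql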


> You don't want the state of your backups to appear as if the system had
> crashed -- you want them to be fully self-consistent. 

They _are_ fully self-consistent. I don't disagree that ideally you
would want a clean shutdown, but it's a tradeoff.

> At least that's
> what you want if you care about your data _and_ your application, and
> you care about getting a restored system back online ASAP.  

Restoring a snapshot is the fastest possible way of getting the system
back online, and even restoring a tape backup of the snapshot will be
faster than re-importing the database from a dump.



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-16 Thread Ragnar Kjørstad
ts of writes going on, of course)
* generality
  This approach works for _all_ applications, not just databases
* consistency between multiple databases
  (AFAIK pg_dumpall doesn't take an atomic dump of all databases, which
  matters if you have a frontend that uses multiple databases and
  expects them to be consistent with each other)
* No additional CPU load on the database server



The space problem with pg_dump is not a fundamental pg_dump problem, BTW.
If someone wrote an Amanda plugin to back up the database directly instead
of writing it to a file first, there would be no additional space
requirement. It should be possible to write a pg_dump plugin that works
just like tar and dump, and enables you to back up the database directly
from Amanda. (It would also eliminate the need for wrapper scripts or
cron jobs that run pg_dump independently of Amanda.)

Of course the "directory" argument doesn't make sense for pg_dump, but
one could use a syntax like "/database-name" or "/database-name/table"
to specify what to back up.
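
The core of what such a plugin would do is just a pipe, something like
this (untested; database name and tape device are made-up examples):

   pg_dump mydb | gzip | dd of=/dev/nst0 bs=32k

i.e. the dump goes straight to the backup medium and never has to be
staged on the holding disk.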



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-20 Thread Ragnar Kjørstad

[ This thread is becoming off-topic and I suspect highly irrelevant to
most users on this list. I propose you remove the CC or move it to a
relevant list like reiserfs or postgresql-admin if you wish to reply ]

On Thu, Jun 20, 2002 at 01:11:09PM -0400, Greg A. Woods wrote:
> [ On Wednesday, June 19, 2002 at 23:53:14 (+0200), Ragnar Kjørstad wrote: ]
> > Subject: Re: Backing up PostgreSQL?
> >
> > By this definition postgresql is consistant at all times
> 
> That's simply not possible to be true.  PostgreSQL uses multiple files
> in the filesystem namespace to contain its data -- sometimes even
> multiple files per table.  It is literally impossible, with the POSIX
> file I/O interfaces, to guarantee that concurrent writes to multiple
> files will all complete at the same time.  Remember I've only been
> talking about the backed up files being in a self-consistent state and
> not requiring roll-back or roll-forward of any transaction logs after
> restore.

Phew - we've been through this already! PostgreSQL doesn't need this
guarantee, because it writes to its log to avoid this very problem!

> > Not at all. There are multiple levels of consistancy and in order to be
> > safe from corruption you need to think of all of them. The WAL protects
> > you from database inconsistancies, the journaling filesystem from
> > filesystem inconsistancies and a if the RAID is doing write-back caching
> > it must have battery-backed cached.
> 
> Yeah, but you are still making invalid claims about what those different
> levels of consistency imply w.r.t. the consistency of backed up copies
> of the database files.

"You're wrong" is simply not a convincing line of argument.

> > Where does it say that close/open will flush metadata?
> 
> That's how the unix filesystem works.  UTSL.

Here is the close code on Linux:

asmlinkage long sys_close(unsigned int fd)
{
        struct file * filp;
        struct files_struct *files = current->files;

        write_lock(&files->file_lock);
        if (fd >= files->max_fds)
                goto out_unlock;
        filp = files->fd[fd];
        if (!filp)
                goto out_unlock;
        files->fd[fd] = NULL;
        FD_CLR(fd, files->close_on_exec);
        __put_unused_fd(files, fd);
        write_unlock(&files->file_lock);
        return filp_close(filp, files);

out_unlock:
        write_unlock(&files->file_lock);
        return -EBADF;
}

int filp_close(struct file *filp, fl_owner_t id)
{
        int retval;

        if (!file_count(filp)) {
                printk(KERN_ERR "VFS: Close: file count is 0\n");
                return 0;
        }
        retval = 0;
        if (filp->f_op && filp->f_op->flush) {
                lock_kernel();
                retval = filp->f_op->flush(filp);
                unlock_kernel();
        }
        fcntl_dirnotify(0, filp, 0);
        locks_remove_posix(filp, id);
        fput(filp);
        return retval;
}

As you can see, data is only flushed if filp->f_op->flush() is set, and
if you look in fs/ext2/file.c you will see that struct file_operations
ext2_file_operations doesn't define this operation.

I'd quote the open code as well, but it's much bigger - you'll find it
in the kernel source if you're really interested.


This issue has been discussed in depth on the reiserfs and reiserfs-dev
lists; I propose you subscribe or browse the archives for more
information. In particular there is a thread about filesystem features
required for mail servers, and there is a post from Wietse where he
writes:

"ext2fs isn't a great file system for mail. Queue files are short-lived,
so mail delivery involves are a lot of directory operations.  ext2fs
has ASYNCHRONOUS DIRECTORY updates and can lose a file even when
fsync() on the file succeeds."


Just to make sure there is no (additional) confusion here, what I'm
saying is:
1. Meta-data must be updated properly. This is obvious and
   shouldn't require further explanation...
2. Non-journaling filesystems (e.g. ext2 on Linux) do update
   the inode metadata on fsync(), but they do not update the
   directory.

As PostgreSQL and other databases do not create new files very often,
they will _mostly_ be able to recover from a crash on a non-journaling
filesystem - but there is no guarantee. It's an easy mistake to forget
that PostgreSQL actually creates files both when new tables/indexes are
created and when existing ones grow too big. (I've made that mistake
myself in the past - that doesn't make it any more right.)



> > If you read my posts carefully you will find that I've never claimed
> > that filesystem consistency equals database consistency.
> 
> You have.  You have confused the meanings and impli

Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad
planation...
> > 2. non-journaling filesystems (e.g. ext2 on linux) do update
> >the inode-metadata on fsync(), but they do not update the
> >directory. 
> 
> The Unix Fast File System is not a log/journaling filesystem.  However
> it does not suffer the problems you're worried about.

That depends on which UFS variant you're referring to, but it may very
well be true for some. (E.g. UFS in Solaris 8 does optional logging.)


> Wasn't this question originally about FreeBSD anyway?

I suppose it was.


-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 05:17:35PM -0400, Greg A. Woods wrote:
> Yes, I agree with this, assuming you're talking about an increment to
> the currently available release that puts you up ahead to some mythical
> future vapour-ware.  The ability to reliably restore consistency in such
> a backup copy of the files not only depends on write-ahead-logging
> support, but also on properly functioning REDO _and_ UNDO functions.

It should only require REDO.
No changes are made to the actual database files until the transaction
is committed, written to the WAL and fsynced. At that point there is no
longer a need for UNDO.

> I.e. PostgreSQL releases up to, and including, 7.2.1 do not provide a
> way to guarantee a database is always recoverable to a consistent state.
> Current releases do not have a built-in way to remove changes already
> made to data pages by uncomitted transactions.

But it doesn't make changes to the data pages until the transaction is
committed.

If it did, your database would have been toast if you lost power (or at
least not correct).


> When you restore a database from backups you really do want to just
> immediately start the database engine and know that the restored files
> have full integrity.  You realy don't want to have to rely on W.A.L.

I really don't see why relying on the WAL is any different from relying
on other PostgreSQL features - and it is hard to run a PostgreSQL
database without relying on PostgreSQL.

> When you restore a database from backups during a really bad disaster
> recovery procedure you sometimes don't even want to have to depend on
> the on-disk-file format being the same.  You want to be able to load the
> data into an arbitrary version of the software on an arbitrary platform.

Yes; unfortunately this is not even possible with pg_dump. (It is better
than a filesystem backup in this regard, but there are still version
incompatibilities that have to be fixed manually.)



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-21 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 06:28:59PM +0100, Paul Jakma wrote:
> > Now you're getting a little out of hand.  A journaling filesystem is
> > a piling of one set of warts ontop of another.  Now you've got a
> > situation where even though the filesystem might be 100% consistent
> > even after a catastrophic crash, the database won't be.  There's no
> > need to use a journaling filesystem with PostgreSQL 
> 
> eh? there is great need - this is the only way to guarantee that when 
> postgresql does operations (esp on its own application level logs) 
> that the operation will either:
> 
> - be completely carried out
> or
> - not carried out at all

Journaling filesystems don't provide this guarantee in general, because
the transactional interface is not exposed to userspace. The only thing
the filesystem guarantees is that filesystem operations are carried out
completely or not at all.

If a non-journaling filesystem crashes while rename() is in progress,
the file may end up present in two directories or in none (depending on
the implementation). If you create a file, write to it and then crash,
the file may be gone from the directory. _These_ are the problems solved
by journaling filesystems.

Luckily PostgreSQL implements its own system (the WAL) to get the same
feature ("atomic" updates) at the database level.


[ There is actually some work underway to export a transactional
filesystem API to userspace. When this is completed, an application
could tell the filesystem which operations are part of a transaction,
and have "atomic" updates even across multiple files :-) ]

> > either full mirroring or full level 5 protection).  Indeed there are
> > potentially performance related reasons to avoid journaling
> > filesystems!
> 
> if they're any good they should have better synchronous performance 
> over normal unix fs's. (and synchronous perf. is what a db is 
> interested in).

Synchronous metadata updates (create/rename etc.): yes - they should be
faster. But PostgreSQL doesn't do many of those.

Synchronous data updates (write/append): no - because PostgreSQL already
does its writes to a log, so there are no seeks involved in the sync
writes. (The writes to the actual files happen asynchronously.)


So there is a theoretical improvement, but it's not likely to show up on
a typical SQL benchmark...




-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-22 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 08:43:27PM -0400, Greg A. Woods wrote:
> > There are no writes to the filesystem at all while the snapshot is in
> > progress. The LVM-driver will block all data-access until it is
> > completed. If not, it wouldn't be a snapshot, would it?
> 
> I realize that -- but it doesn't mean the snapshot couldn't begin
> between two inter-related writes to two separate files (or even two
> separate but inter-related writes to two separate pages in the same
> file).  

In which case PostgreSQL must rely on its log at startup to finish the
set of write operations.

> If both writes are necessary to result in a consistent DB then
> the snapshot will necessarily be inconsistent by definition. 

Well yes, if you define a "consistent" DB to mean one where you don't
have to use the WAL. I find this definition a little strange, as what I
care about is whether my data is still there (and still correct) when I
restore the database.

Anyway - we've covered this so let's not go there again.


> > Huh? Why would you want a seperate backup of the database transaction
> > log? The log is stored in a file together with the rest of the database,
> > and will be included in the snapshot and the backup.
> 
> Yeah, but I'm assuming that I'll have to manually UNDO any uncommitted
> transactions, as per the 7.2.1 manual.

It says the aborted transactions will occupy space - it doesn't say that
you need to manually UNDO them. It doesn't say specifically that you
_don't_, but that's the only interpretation that makes sense, primarily
because a manual UNDO is impossible. (What if a transaction updated an
index but didn't get around to updating the table, or the other way
around - how would you fix that manually?)



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-22 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 08:30:44PM -0400, Greg A. Woods wrote:
> > It should only require REDO.
> > No changes are made to the actual database-files until the transaction
> > is commited, written to the WAL and fsynced. At this point there is no
> > longer a need for UNDO.
> 
> Hmmm possibly.  I'm not that intimately familiar with the current
> WAL implementation, though what I have stated comes directly from the
> 7.2.1 manual.  If I'm wrong then so is the manual.  :-)

I suppose I ought to make it clear that I'm not a PostgreSQL developer
(I haven't even read the code) and my statement is purely based on the
manual and general database knowledge.

The manual _does_ state that the main benefit of the WAL is consistency,
and it wouldn't be consistent if it made changes to the real database
files before the transaction committed.

I totally agree the manual could be clearer on this point.

> > I really don't see why relying on WAL is any different from relying on
> > other postgresql features - and it is hard to run a postgresql database
> > without relying on postgresql
> 
> well, the WAL implementation is new, acknowledged to be incomplete, and
> so far as I can tell requires additional work on startup
> 
> Indeed re-loading a pg_dump will take lots more time, but that's why I
> want my DB backups to be done both as a ready-to-roll set of file images
> as well as a pg_dump  :-)

Yes, the more redundant backup solutions the better :)



-- 
Ragnar Kjørstad
Big Storage



Re: Backing up PostgreSQL?

2002-06-22 Thread Ragnar Kjørstad

On Fri, Jun 21, 2002 at 09:01:17PM -0400, Michael H.Collins wrote:
> I thought i just read that postgresql on ext3 outran oracle on raw
> devices.

Yes, it's funny how all database engines are faster than all other
database engines, isn't it?

> ~Because there are tremendous performance advantages to using RAW I/O if
> ~you're prepared to go to the trouble of dealing with all the niggling
> ~details
> ~
> ~(some of these advantages are supposedly less visible on modern
> ~hardware and modern systems, but they sure as heck were plainly visible
> ~even a few years ago)

There are (as far as I can think of) two aspects of filesystem vs. raw
device access for database performance:
1. There is some overhead in doing the operations through the
   filesystem. I think this is negligible with an extent-based
   filesystem; for inode-based filesystems it may be measurable, but
   I doubt it's an important factor.

2. Caching.
   Usually a read operation on a file will cause the file to be cached
   in RAM, and if the database engine caches the same data you're
   wasting RAM and slowing down the system. Modern operating systems
   change this in two ways:
   1. The caching is better, so it's possible for a db engine to
      rely on the OS cache instead of implementing its own.
   2. It's possible to turn off the OS cache with raw I/O.
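
For what it's worth, on Linux 2.4 the "turn off the OS cache" part is
done by binding the block device to a raw device node (device names
here are just examples):

   raw /dev/raw/raw1 /dev/sdb1   # I/O through /dev/raw/raw1 bypasses the page cache
   raw -qa                       # show current bindings
   # (note: raw devices require sector-aligned, sector-sized I/O)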

There are possibly some performance advantages to going through the
filesystem as well. For instance some filesystems use knowledge about
the physical hardware (such as the number of spindles and the chunk size
of the RAID) to optimize I/O. Unless the db engine duplicates these
features, the OS may be able to do the I/O more effectively.

The main reason for switching to using a filesystem is operational,
though. I once heard that one of the major db companies (Oracle?) was
going to start doing this in the default configuration because it would
eliminate all the problems with clueless users who "found" a free
partition and started using it for something else :)



-- 
Ragnar Kjørstad
Big Storage



Re: Sony LIB-81 with AIT2 (SDX500) config ?

2002-08-30 Thread Ragnar Kjørstad

On Sat, Aug 31, 2002 at 01:09:49AM +0200, Martin Schmidt wrote:
> Hi,
> 
> as anyone configured amanda/Linux with a Sony Library LIB-81 and a AIT2-Drive 
> (SDX-500) already ?
> 
> I have got it running using mtx and tar, but amanda does not really like it, 
> the chg-zd-mtx reports errors, which cause I suspect in my config's:
> ---
> backup:/etc/amanda/hne # /usr/lib/amanda/chg-zd-mtx -info
> /usr/lib/amanda/chg-zd-mtx: [: -eq: unary operator expected

Did you try:
  bash -x /usr/lib/amanda/chg-zd-mtx -info

It should output the commands being executed, so it should be possible
to tell what's wrong.
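
That "[: -eq: unary operator expected" usually just means an unquoted
variable expanded to nothing, e.g. (a generic sketch, not the actual
chg-zd-mtx code):

   slot=""
   if [ $slot -eq 0 ]; then          # becomes "[ -eq 0 ]" -> unary operator expected
       echo "first slot"
   fi
   if [ "${slot:-0}" -eq 0 ]; then   # quoting / a default value avoids the error
       echo "first slot"
   fi

The -x trace should show which variable ends up empty in your setup.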



-- 
Ragnar Kjørstad



Re: "data write: File too large" ???

2001-08-14 Thread Ragnar Kjørstad

On Tue, Aug 14, 2001 at 09:05:25AM -0500, Katrinka Dall wrote:
>  FAILED AND STRANGE DUMP DETAILS:
>  
> /-- xx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]

Does it fail after backing up 2 gigabytes?

It sounds like you don't have Large File Support (LFS).

> sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
> sendbackup: info BACKUP=/bin/tar
> sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
> sendbackup: info COMPRESS_SUFFIX=.gz
> sendbackup: info end
> \ 
> 
>   Now, I know that this isn't an issue of not having enough space on the
> tape or holding disk, both are in excess of 35G.  Some of the things I
> have tried are, upgrading tar on the server that is failing, upgrading
> the backup server from RedHat 6.2 to 7.1, and using every available
> version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
> that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
> machine.

Red Hat 7.1 should include a kernel, libraries and utilities with LFS.
Did you install some of the utilities (e.g. Amanda) manually, or were
they all from Red Hat 7.1?

If you built some of them yourself, you need to recompile them on your
RH 7.1 system to make them support > 2 GB files.
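
If you do rebuild something by hand, the usual way to get LFS on Linux
is to compile with the large-file macros (a generic sketch - the exact
configure options depend on the package):

   CPPFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure
   make && make install
   # rough check: an LFS-enabled binary normally references the 64-bit calls
   nm -D /bin/tar | grep open64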


-- 
Ragnar Kjorstad



Re: help :: DEC 5.1 + amanda 2.4.2p2

2001-08-14 Thread Ragnar Kjørstad

On Tue, Aug 14, 2001 at 10:34:58AM -0400, Rivera, Edwin wrote:
> i've been trying to get amanda to backup one of my DEC boxes for the past
> couple of days, but i can't seem to get it going on the 5.1 flavor.  works
> fine on my 4.0f and 4.0d boxes.  am i doing something weird?.. i've tried
> all different combos for disklist entries, but i can't seem to get it going:

Did you apply the advfs patch?
You can find it on the "Amanda patches page".


-- 
Ragnar Kjorstad
Big Storage