/var/mail directory permissions

2001-04-28 Thread Drew Raines


I've searched the archives for something related to this problem, but
haven't been successful.  I've also tried upgrading Amanda to a
recent version, but the configs don't seem to be compatible going from
2.4.1p1 to 2.4.2p2 (a question in and of itself).

Here's the situation: Solaris 7, Amanda 2.4.1p1.  I have 21 assorted
partitions being backed up on two hosts.  I have a scheme going with a
6-disk DDS changer where I do a level 0 on weekends and incrementals
during the week.  Everything runs fine, no error messages, and
everything gets written to tape.  For the past couple of months,
though, a directory underneath my mail partition has not been written
to tape, aside from its name.  I've checked the logs every time, and
Amanda thinks life is hunky-dory.

I've determined the problem to be somewhat related to permissions.
The /export/data/mail directory (linked to /var/mail) looks like this:

  drwxrwsrwt   /export/data/mail

The directory "folders" (IMAP folders) underneath that *was*

  drwxr-s---   /export/data/mail/folders

and wasn't getting written to tape.  I changed it to

  drwxrwsrwt

and only the data one level underneath was written, meaning some other
permission problem is occurring (I guess).  I don't see why the sticky
bit or being setgid would affect it.
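For what it's worth, the effect of those mode bits on a non-root backup run can be sketched directly (a toy illustration; the helper name and the uid/gid values are hypothetical, not anything Amanda actually calls):

```python
import stat

def can_descend(mode, dir_uid, dir_gid, uid, gids):
    """Return True if a process with the given uid/gids can enter
    (execute) and list (read) a directory with this mode and ownership.
    Root (uid 0) is assumed to bypass permission checks entirely."""
    if uid == 0:
        return True
    if uid == dir_uid:
        need = stat.S_IRUSR | stat.S_IXUSR
    elif dir_gid in gids:
        need = stat.S_IRGRP | stat.S_IXGRP
    else:
        need = stat.S_IROTH | stat.S_IXOTH
    return (mode & need) == need

# drwxr-s--- : group members can read/enter, "other" cannot --
# a dump user outside the owning group is locked out entirely.
locked = 0o2750   # drwxr-s---
opened = 0o3777   # drwxrwsrwt

print(can_descend(locked, 0, 6, 1001, {100}))  # False: not owner, not in group
print(can_descend(opened, 0, 6, 1001, {100}))  # True
```

The sticky and setgid bits never enter the check; only the read/execute bits for the matching ownership class do, which is consistent with the directory becoming readable once it was opened up to drwxrwsrwt.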

I write all that to ask: Is it indeed a permission problem?  Can I
not tell Amanda to just write every friggin' thing on a disk whether it
likes it or not?  It used to write everything just fine with no
worries... what could I have changed to cause this problem?

Forgive my newbieness... I'd be grateful for some answers.



-- 
Andrew A. Raines  | [EMAIL PROTECTED] | +1 615 343 5853 
program in human genetics | vanderbilt university medical center



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Jamie Bowden

On Sat, 28 Apr 2001, Gerhard den Hollander wrote:

:* Daniel David Benson <[EMAIL PROTECTED]> (Fri, Apr 27, 2001 at 06:27:11PM -0700)
:
:> Why not?  Have you seen/had problems with recent versions of gnu tar, or
:> Sun's bundled tar?  I have seen problems with versions of tars 
:
:Unless gnutar is the latest (1.13.19, I think),
:chances are it will screw you over backward (that is, SEGV) when you really
:need it to read stuff back from tape.

Any version of tar works above the filesystem level, which is slow and
problematic in its own ways, as many others here have already brought up.

There's also another issue I'll bring up in response to your next comment.

:Sun tar has other problems (try using suntar on the latest qt snapshot).

I thankfully don't deal with Solaris anymore.  Irix's xfsdump is fast, if
a little odd for those not used to it.  As for what the esteemed Master
Torvalds said, I haven't read it, but if it's on the level of all the
latest shit he's been spewing, I don't have any interest in doing so
either.  Those who feel the need to send me hate mail because I refuse to
worship Linus may feel free to FOAD at their earliest convenience.

Jamie Bowden

-- 
"It was half way to Rivendell when the drugs began to take hold"
Hunter S Tolkien "Fear and Loathing in Barad Dur"
Iain Bowen <[EMAIL PROTECTED]>



/*
 * Amanda, The Advanced Maryland Automatic Network Disk Archiver
 * Copyright (c) 1991-1998 University of Maryland at College Park
 * All Rights Reserved.
 *
 * Permission to use, copy, modify, distribute, and sell this software and its
 * documentation for any purpose is hereby granted without fee, provided that
 * the above copyright notice appear in all copies and that both that
 * copyright notice and this permission notice appear in supporting
 * documentation, and that the name of U.M. not be used in advertising or
 * publicity pertaining to distribution of the software without specific,
 * written prior permission.  U.M. makes no representations about the
 * suitability of this software for any purpose.  It is provided "as is"
 * without express or implied warranty.
 *
 * U.M. DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL U.M.
 * BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 *
 * Authors: the Amanda Development Team.  Its members are listed in a
 * file named AUTHORS, in the root directory of this distribution.
 */
/* 
 * $Id: sendsize.c,v 1.97.2.13 2000/10/11 02:08:26 martinea Exp $
 *
 * send estimated backup sizes using dump
 */

#include "amanda.h"
#include "pipespawn.h"
#include "amandates.h"
#include "getfsent.h"
#include "version.h"

#ifdef SAMBA_CLIENT
#include "findpass.h"
#endif

#ifdef HAVE_SETPGID
#  define SETPGRP   setpgid(getpid(), getpid())
#  define SETPGRP_FAILED() do { \
dbprintf(("setpgid(%ld,%ld) failed: %s\n",  \
  (long)getpid(), (long)getpid(), strerror(errno)));\
} while(0)

#else
#if defined(SETPGRP_VOID)
#  define SETPGRP   setpgrp()
#  define SETPGRP_FAILED() do { \
dbprintf(("setpgrp() failed: %s\n", strerror(errno)));  \
} while(0)

#else
#  define SETPGRP   setpgrp(0, getpid())
#  define SETPGRP_FAILED() do { \
dbprintf(("setpgrp(0,%ld) failed: %s\n",\
  (long)getpid(), strerror(errno)));\
} while(0)

#endif
#endif

typedef struct level_estimates_s {
    time_t dumpsince;
    int estsize;
    int needestimate;
} level_estimate_t;

typedef struct disk_estimates_s {
    struct disk_estimates_s *next;
    char *amname;
    char *dirname;
    char *exclude;
    char *program;
    int spindle;
    level_estimate_t est[DUMP_LEVELS];
} disk_estimates_t;

disk_estimates_t *est_list;

#define MAXMAXDUMPS 16

int maxdumps = 1, dumpsrunning = 0;
char *host; /* my hostname from the server */

/* local functions */
int main P((int argc, char **argv));
void add_diskest P((char *disk, int level, char *exclude, int spindle, char *prog));
void calc_estimates P((disk_estimates_t *est));
void free_estimates P((disk_estimates_t *est));
void dump_calc_estimates P((disk_estimates_t *));
void smbtar_calc_estimates P((disk_estimates_t *));
void gnutar_calc_estimates P((disk_estimates_t *));
void generic_calc_estimates P((disk_estimates_t *));



int main(argc, argv)
int argc;
char **argv;
{
    int level, new_maxdumps, spindle;
    char *prog, *disk, *dumpdate, *exclude = NULL;
    disk_estimates_t *est;
    disk_estimates_t *est_prev;
    char *li

Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Daniel David Benson



> Unless gnutar is the latest (1.13.19, I think),
> chances are it will screw you over backward (that is, SEGV) when you really
> need it to read stuff back from tape.
> 
> Sun tar has other problems (try using suntar on the latest qt snapshot).

Yeah, definitely.  I suspect, though, that the qt snapshot is tarred
with gtar and not Sun's.  This is where I have seen problems as well.

> since I use reiserfs, I have to use tar, and have never had problems with
> it.

Is anyone working on a dump equivalent for reiserfs?

> It cuts 4-5 hours off my dump times.

Tar definitely sucks in this respect.  At my last job I had to use
tar for some SAMBA backups, and it slowed things down a lot.

-Dan





Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Gerhard den Hollander

* Daniel David Benson <[EMAIL PROTECTED]> (Fri, Apr 27, 2001 at 06:27:11PM -0700)

> Why not?  Have you seen/had problems with recent versions of gnu tar, or
> Sun's bundled tar?  I have seen problems with versions of tars 

Unless gnutar is the latest (1.13.19, I think),
chances are it will screw you over backward (that is, SEGV) when you really
need it to read stuff back from tape.

Sun tar has other problems (try using suntar on the latest qt snapshot).

Having said that,
since I use reiserfs, I have to use tar, and have never had problems with
it.


Configuring Amanda to use tar instead of dump is a matter of changing
the key in your dumplist.

As for tar being twice as slow as dump,
just use the modified sendsize.c, which uses calcsize instead of tar to
estimate dump sizes.

It cuts 4-5 hours off my dump times.

Kind regards,
 --
Gerhard den Hollander   Phone +31-10.280.1515
Technical Support Jason Geosystems BV   Fax   +31-10.280.1511
   (When calling please note: we are in GMT+1)
[EMAIL PROTECTED]  POBox 1573
visit us at http://www.jasongeo.com 3000 BN Rotterdam  
JASON...#1 in Reservoir CharacterizationThe Netherlands




Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread John R. Jackson

[ This is way too long.  Sorry.  --JJ ]

"Carey Jung" <[EMAIL PROTECTED]> writes:

>From the gtar man page:
>
>OTHER OPTIONS
>   --atime-preserve
>  don't change access times on dumped files

Which, in turn, causes the ctime to be changed, which makes all the
files look like they need to be dumped again next time.
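The mechanism is easy to demonstrate: in its classic form --atime-preserve just calls utime() to put the access time back after reading, and POSIX specifies that an explicit utime() bumps the inode change time.  A minimal sketch (Python used purely for illustration; the scratch file is mine):

```python
import os
import tempfile
import time

# Scratch file standing in for a file being backed up.
fd, path = tempfile.mkstemp()
os.write(fd, b"payload")
os.close(fd)

st_before = os.stat(path)

time.sleep(1.1)  # let the clock move so a ctime change is visible

# Read the file, as a tar-style backup would...
with open(path, "rb") as f:
    f.read()

# ...then put the atime back, which is what --atime-preserve does.
# utime() sets both atime and mtime, so restore both original values.
os.utime(path, ns=(st_before.st_atime_ns, st_before.st_mtime_ns))

st_after = os.stat(path)

# The atime looks untouched, but the ctime was bumped by the utime()
# call itself -- so the file looks changed to ctime-based incrementals.
print(st_after.st_atime_ns == st_before.st_atime_ns)
print(st_after.st_ctime > st_before.st_ctime)
os.unlink(path)
```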

>Carey

Jens Bech Madsen <[EMAIL PROTECTED]> writes:

>Well, that can be avoided too. I remount my data partitions prior to
>backup to noatime (this also speeds up the estimating process a bit).

Before this goes too far (I should have clarified my first comment),
I actually more or less agree with Linus and the other posts that said
the same thing here.  Using dump on a mounted file system is problematic
because of the way it works.  But just saying "use tar" has its own
issues.  If you can live with them, fine.  If you can work around some
of them, fine.  But to just say "dump bad, tar good" is not right.

In your specific example, you've still traded off having valid access
time information for using tar.  If tar cannot update the access time,
neither can anything else, so while you're doing the backup, any access
time updates are lost.

Even if some particular OS provided an option to reset the access
time without resetting anything else, you still have a race condition.

If you want access time to be accurate, tar (or any other program that
uses the standard system I/O interface) is going to be a problem, one
way or another.

One possible solution would be an "I'm a backup program" flag for the
process that tells the file system layers to not update metadata (such
as access time) when this process is doing I/O.  But now you're talking
about very specific OS support.
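As it happens, Linux later grew something close to that flag: O_NOATIME (kernel 2.6.8, well after this thread), which suppresses atime updates for reads through that descriptor, so no restore step and no ctime bump are needed.  A hedged sketch; the fallback logic and helper name are mine:

```python
import os
import tempfile

def open_noatime(path):
    """Open path read-only without updating its atime, when possible.
    O_NOATIME is Linux-specific and only honored when the caller owns
    the file or has CAP_FOWNER; fall back to a plain open otherwise so
    the sketch stays portable."""
    flag = getattr(os, "O_NOATIME", 0)
    try:
        return os.open(path, os.O_RDONLY | flag)
    except PermissionError:
        return os.open(path, os.O_RDONLY)

fd, path = tempfile.mkstemp()
os.write(fd, b"backup me")
os.close(fd)

rfd = open_noatime(path)
data = os.read(rfd, 64)
os.close(rfd)
os.unlink(path)
print(data)  # b'backup me'
```

Note this only solves the metadata side; the data-consistency races discussed below are untouched by it.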

>Jens Bech Madsen

Christoph Scheeder <[EMAIL PROTECTED]> writes:

I agree with most of what you said, except:

>if you need to backup an active filesystem use a program like tar 
>which is designed to do that.

Using tar does not guarantee files are consistent on the resulting image.
Consider a file being rewritten at the time tar is running.  Some of the
data is still in the application, some is "on disk" where tar can get it.
And the break falls at an OS (e.g. stdio buffer) boundary, which probably
has nothing to do with the file being internally consistent.

A specific example: You're editing a fairly large source file and write
out the result.  The stdio library (or whatever) writes one data block
to the existing file, leaving the remainder with the previous data, then
is put to sleep due to kernel scheduling beyond its control.  Tar kicks
in and gets all the blocks.  You do the restore later and end up with
a mixed result.

I don't know that "the editor" works this way (it may truncate the file at
the beginning, for instance).  But the basic idea/problem is always there.

Turned around (tar starts first and then pauses), tar probably isn't
too happy with the file being truncated while it was being read, either.
You again end up with something other than a useful file.  Yes, the next
backup run should pick it up, but the expectation is that a restore of a
given set of images gives you back a good system.
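GNU tar's own defense against this race is simply to stat the file before and after reading it and warn ("file changed as we read it") if size or mtime moved.  A rough sketch of that check (the function name and return shape are mine, not tar's internals):

```python
import os
import tempfile

def read_with_change_check(path, chunk=65536):
    """Read a whole file and report whether it changed underneath us,
    in the spirit of GNU tar's "file changed as we read it" warning:
    compare size and mtime captured before the read with the values
    after it.  This narrows the race window but cannot eliminate it."""
    fd = os.open(path, os.O_RDONLY)
    try:
        before = os.fstat(fd)
        data = bytearray()
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            data += buf
        after = os.fstat(fd)
    finally:
        os.close(fd)
    changed = (before.st_size, before.st_mtime_ns) != \
              (after.st_size, after.st_mtime_ns)
    return bytes(data), changed

# Quiet file: no writer is racing us, so no change is detected.
fd, path = tempfile.mkstemp()
os.write(fd, b"quiet file")
os.close(fd)
data, changed = read_with_change_check(path)
os.unlink(path)
print(data, changed)  # b'quiet file' False
```

With a concurrent writer the check flags the file, but as the paragraph above notes, detection is the best a filesystem-level reader can do.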

>Christoph

"Anthony A. D. Talltree" <[EMAIL PROTECTED]> writes:

>One way is to mirror the volume, then break off a mirror and back up
>that.  Another is to use a volume snapshot, which on an active
>filesystem may take rather a long time.

Same problem as above.  The file system (mirror or snapshot) can be
inconsistent in an individual file at the block level.  Mirrors are
worse because they don't necessarily even have the data that was in the
system buffers.  You're back to the dump problem.

My point through all this is that backups involve tradeoffs, regardless
of what program is used.  You need to know what those issues are, what
you can tolerate, and how to get a backup program to live within those
boundaries.

If you can take your system down and boot from CD so all the disks are
unmounted, dump will do a perfectly good job.  If you can't do this
(and very few people can any more), then you have to know what you're
giving up and how to deal with the consequences.

As others have said, it's all a matter of knowing what the programs do,
what the limits are and how to use them properly.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Patrick Michael Kane

* Jesper Holm Olsen ([EMAIL PROTECTED]) [010428 08:47]:
> 
> At 08:50 28-04-01 -0500, you wrote:
> 
> >I've seen ads for the commercial and pricey backup packages from
> >Syncsoft, Veritas and so on which claim no problems with live backups
> >on *nix or NT. I suppose they have some way of write-locking files,
> >copy to memory, then releasing the lock, but how could these utils
> >work at the block rather than file level?
> 
> Veritas' file system (VxFS) can make what they call a 'snapshot' of a
> file system.  The idea is to take a "snapshot" of a filesystem and mount
> it readonly on another device.  Whenever a block on the original
> filesystem is altered, the old one is copied to the snapshot first, thus
> keeping the snapshot in the state it was in when mounted.  This is, for
> example, used to back up with vxdump.  Unfortunately this is not yet
> available on Linux as far as I know - only Solaris and HPUX :(
> 
> A quick search on Google reveals that someone is working on this feature
> for Linux as well: http://lwn.net/2001/0308/a/snapfs.php3

Linux LVM supports snapshots.  It's in 2.4 kernels, although it needs some
patching to make it usable.

Best,
-- 
Patrick Michael Kane
<[EMAIL PROTECTED]>



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Mitch Collinsworth


> Veritas' file system (VxFS) can make what they call a 'snapshot' of a
> file system.  The idea is to take a "snapshot" of a filesystem and mount
> it readonly on another device.  Whenever a block on the original
> filesystem is altered, the old one is copied to the snapshot first, thus
> keeping the snapshot in the state it was in when mounted.  This is, for
> example, used to back up with vxdump.  Unfortunately this is not yet
> available on Linux as far as I know - only Solaris and HPUX :(
> 
> A quick search on Google reveals that someone is working on this feature
> for Linux as well: http://lwn.net/2001/0308/a/snapfs.php3

This is a reasonably accurate description of how the backup system for
the AFS filesystem works.  And AFS is available for Linux, both as a
commercial product from IBM/Transarc, and as an open source project
called OpenAFS.

-Mitch




Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Anthony A. D. Talltree

>A quick search on Google reveals that someone is working on this feature
>for Linux as well: http://lwn.net/2001/0308/a/snapfs.php3

It'll probably work about as well as anything else in Linux land.



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Jesper Holm Olsen


At 08:50 28-04-01 -0500, you wrote:

>I've seen ads for the commercial and pricey backup packages from
>Syncsoft, Veritas and so on which claim no problems with live backups
>on *nix or NT. I suppose they have some way of write-locking files,
>copy to memory, then releasing the lock, but how could these utils
>work at the block rather than file level?

Veritas' file system (VxFS) can make what they call a 'snapshot' of a
file system.  The idea is to take a "snapshot" of a filesystem and mount
it readonly on another device.  Whenever a block on the original
filesystem is altered, the old one is copied to the snapshot first, thus
keeping the snapshot in the state it was in when mounted.  This is, for
example, used to back up with vxdump.  Unfortunately this is not yet
available on Linux as far as I know - only Solaris and HPUX :(

A quick search on Google reveals that someone is working on this feature
for Linux as well: http://lwn.net/2001/0308/a/snapfs.php3
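The copy-on-write scheme described above can be sketched in a few lines (a toy model; the class, block granularity, and single-snapshot limit are my simplifications, not VxFS's actual design):

```python
class Volume:
    """Toy block device with one copy-on-write snapshot, mimicking the
    VxFS scheme: the snapshot stores only blocks whose live contents
    have changed since the snapshot was taken."""

    def __init__(self, nblocks):
        self.blocks = [b"\0"] * nblocks   # the live volume
        self.snap = None                  # block number -> frozen contents

    def take_snapshot(self):
        self.snap = {}                    # starts empty: nothing diverged yet

    def write(self, n, data):
        # Copy-on-write: save the old block into the snapshot first,
        # but only the first time a given block is overwritten.
        if self.snap is not None and n not in self.snap:
            self.snap[n] = self.blocks[n]
        self.blocks[n] = data

    def snap_read(self, n):
        # The snapshot view prefers frozen copies and falls through to
        # the live volume for blocks that never changed.
        if self.snap is not None and n in self.snap:
            return self.snap[n]
        return self.blocks[n]

vol = Volume(4)
vol.write(0, b"inode table v1")
vol.take_snapshot()
vol.write(0, b"inode table v2")   # the live filesystem moves on

print(vol.snap_read(0))  # b'inode table v1' -- snapshot stays frozen
print(vol.blocks[0])     # b'inode table v2' -- live volume keeps changing
```

A dump run against the snapshot view thus sees a filesystem frozen at mount time, which is exactly why vxdump can work on an otherwise active volume.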



-- 
Jesper Holm Olsen,   Graduate student & UNIX System Administrator at the
 Dept. of Computer Science, University of Copenhagen
Guitarist in the c64-band PRESS PLAY ON TAPE: www.hybrisnemesis.com/ppot
Email: [EMAIL PROTECTED]  Homepage: www.diku.dk/students/dunkel



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Anthony A. D. Talltree

>I've seen ads for the commercial and pricey backup packages from
>Syncsoft, Veritas and so on which claim no problems with live backups
>on *nix or NT. I suppose they have some way of write-locking files,
>copy to memory, then releasing the lock, but how could these utils
>work at the block rather than file level?

One way is to mirror the volume, then break off a mirror and back up
that.  Another is to use a volume snapshot, which on an active
filesystem may take rather a long time.



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread C. Chan

Also Sprach Christoph Scheeder:

> Hi,
> If I read this statement of Linus correctly, the only thing he is saying
> is:
> 
> don't run dump on a filesystem which could be active.
> 
> That isn't really new.
> 
> It is known very well that dump has problems with active filesystems,
> as it bypasses the normal way data gets read and written to disk.
> 
> If you need a dump image to be nearly 100% reliable, you have to mount
> the filesystem readonly or unmount it completely before doing the
> dump.
> Only then can you be sure all caches have been flushed to disk, and
> nobody will overwrite data while it gets backed up.
> 
> If someone goes the so-called normal way and runs dump on a read/write
> mounted filesystem, he is indeed playing a dangerous game if he trusts
> his backup without verifying that all needed data is in the dump.
> 
> So Linus' opinion is completely correct.
> 
> Using a program for a purpose it is not designed for is dangerous and
> might fail.
> 
> If you need to back up an active filesystem, use a program like tar,
> which is designed to do that.
> 
> Christoph
> 

I use both vendor dump and GNU tar, the former for platform-specific
partitions like /usr and /opt, and the latter for data.  I don't think
either tar or dump is meant to deal with active filesystems, especially
ones which are highly active.

Isn't the issue here that if you run dump on an active filesystem,
it will back up an inconsistent image, possibly without notification,
whereas tar will usually give up on an active file and emit an error
message?

Since I use dump to back up system partitions, I don't think an
inconsistent image is a problem with my use of dump, because the changes
are occurring in /var/spool or /var/tmp. And I want tar to fail
rather than create an inconsistent file because of the nature of the data.

I've seen ads for the commercial and pricey backup packages from
Syncsoft, Veritas and so on which claim no problems with live backups
on *nix or NT. I suppose they have some way of write-locking files,
copy to memory, then releasing the lock, but how could these utils
work at the block rather than file level?

--
C. Chan < [EMAIL PROTECTED] > 
Finger [EMAIL PROTECTED] for PGP public key.




Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Christoph Scheeder

Hi,
If I read this statement of Linus correctly, the only thing he is saying
is:

don't run dump on a filesystem which could be active.

That isn't really new.

It is known very well that dump has problems with active filesystems,
as it bypasses the normal way data gets read and written to disk.

If you need a dump image to be nearly 100% reliable, you have to mount
the filesystem readonly or unmount it completely before doing the
dump.
Only then can you be sure all caches have been flushed to disk, and
nobody will overwrite data while it gets backed up.

If someone goes the so-called normal way and runs dump on a read/write
mounted filesystem, he is indeed playing a dangerous game if he trusts
his backup without verifying that all needed data is in the dump.

So Linus' opinion is completely correct.

Using a program for a purpose it is not designed for is dangerous and
might fail.

If you need to back up an active filesystem, use a program like tar,
which is designed to do that.

Christoph

Tanniel Simonian wrote:
> 
> Has anyone read the message from Linus Torvalds about dump on the
> kernel.org newsgroup?
> 
> here is his message:
> 
> "Note that dump simply won't work reliably at all even in 2.4.x: the
> buffer cache and the page cache (where all the actual data is) are not
> coherent. This is only going to get even worse in 2.5.x, when the
> directories are moved into the page cache as well."
> 
> "So anybody who depends on "dump" getting backups right is already playing
> Russian roulette with their backups.  It's not at all guaranteed to get the
> right results - you may end up having stale data in the buffer cache that
> ends up being "backed up"."
> 
> "Right now, the cpio/tar/xxx solutions are definitely the best ones, and
> will work on multiple filesystems (another limitation of "dump"). Whatever
> problems they have, they are still better than the _guaranteed_(*)  data
> corruptions of "dump"."
> 
> "Dump was a stupid program in the first place. Leave it Behind"
> 
> He then finally notes:
> 
> (*) Dump may work fine for you a thousand times. But it _will_ fail under
> the right circumstances. And there is nothing you can do about it.
> 
> 
> 
> Okay, I've run dump THOUSANDS + 1 times, but have yet to have a problem
> with Amanda, and especially restores.  It has saved my ass repeatedly,
> especially when a hard disk has failed and everything has been running
> from memory for days.
> 
> So please, I need opinions.  I've already rigged Amanda to work with my
> Exabyte EZ17 drive.  I don't think I can rig it any more to work with tar.
> 
> latz,
> 
> Tanniel Simonian
> Programmer Analyst III
> University of California Riverside, Libraries.



Re: Linus Torvald's opinion on Dump.

2001-04-28 Thread Jens Bech Madsen

"John R. Jackson" <[EMAIL PROTECTED]> writes:

> >Or you could just use gtar
> 
> As long as you don't mind altering the last access time of every file
> that is backed up.
Well, that can be avoided too.  I remount my data partitions noatime
prior to backup (this also speeds up the estimating process a bit).

> 
> >-Dan
> 
> John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

Jens Bech Madsen
-- 
Jens Bech Madsen
The Stibo Group, Denmark



RE: Linus Torvald's opinion on Dump.

2001-04-28 Thread Carey Jung

> 
> >Or you could just use gtar
> 
> As long as you don't mind altering the last access time of every file
> that is backed up.
> 

From the gtar man page:

OTHER OPTIONS
   --atime-preserve
  don't change access times on dumped files

Carey