Re: Backup systems

2023-09-05 Thread David Christensen

On 9/5/23 07:34, Michael Kjörling wrote:

On 4 Sep 2023 13:57 -0700, from dpchr...@holgerdanske.com (David Christensen):

* I am using zfs-auto-snapshot(8) for snapshots.  Are you using rsnapshot(1)
for snapshots?


No. I'm using ZFS snapshots on the source, but not for backup
purposes. (I have contemplated doing that, but it would increase
complexity a fair bit.) The backup target is not snapshotted at the block
storage or file system level; however, rsync --link-dest uses
hardlinks to deduplicate whole files.



+1 for complexity of ZFS backups via snapshots and replication.


My question was incongruous, as "snapshot" has different meanings for 
ZFS and rsnapshot(1):


*   https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html

snapshot

A read-only copy of a file system or volume at a given point in
time.

*   https://rsnapshot.org/rsnapshot/docs/docbook/rest.html

Using rsnapshot, it is possible to take snapshots of your
filesystems at different points in time.


As I understand your network topology and backup strategy, it appears 
that you are using rsnapshot(1) for snapshots (in the rsnapshot(1) sense 
of the term).




* du(1) of the backup file system matches ZFS properties 'referenced' and
'usedbydataset'.


This would be expected, depending on exact specifics (what data du
traverses over and what your ZFS dataset layout is). To more closely
match the _apparent_ size of the files, you'd look at e.g.
logicalreferenced or logicalused.


* I am unable to correlate du(1) of the snapshots to any ZFS properties --
du(1) reports much more storage than ZFS 'usedbysnapshots', even when scaled
by 'compressratio'.


This would also be expected, as ZFS snapshots are copy-on-write and
thus in effect only bookkeep a delta, whereas du counts the apparent
size of all files accessible under a path and ZFS snapshots allow
access to all files within the file system as they appeared at the
moment the snapshot was created. There are nuances and caveats
involved but, as a first approximation, immediately after taking a ZFS
snapshot the size of the snapshot is zero (plus a small amount of
metadata overhead for the snapshot itself) regardless of the size of
the underlying dataset, and the apparent size of the snapshot grows as
changes are made to the underlying dataset which cause some data to be
referenced only by the snapshot.

In general, ZFS disk space usage accounting for snapshots is really
rather non-intuitive, but it does make more sense when you consider
that ZFS is a copy-on-write file system and that snapshots largely
boil down to an atomic point-in-time marker for dataset state.



Okay.  My server contains one backup ZFS file system for each host on my 
network.  So, the 'logicalreferenced', 'logicalused', and 
'usedbysnapshots' properties I posted for one host's backup file system 
are affected by the ZFS pool's aggregate COW, compression, and/or 
deduplication features.




(In ZFS, a dataset can be either a file system optionally exposed at a
directory mountpoint or a volume exposed as a block device.)



I try to use ZFS vocabulary per the current Oracle WWW documentation 
(but have found discrepancies).  I wonder if ZFS-on-Linux and/or OpenZFS 
have diverged (e.g. 'man zfs' on Debian, etc.):


https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html

"A generic name for the following ZFS components: clones, file
systems, snapshots, and volumes."


David



Re: Backup systems

2023-09-05 Thread Michael Kjörling
On 4 Sep 2023 13:57 -0700, from dpchr...@holgerdanske.com (David Christensen):
> * I am using zfs-auto-snapshot(8) for snapshots.  Are you using rsnapshot(1)
> for snapshots?

No. I'm using ZFS snapshots on the source, but not for backup
purposes. (I have contemplated doing that, but it would increase
complexity a fair bit.) The backup target is not snapshotted at the
block storage or file system level; however, rsync --link-dest uses
hardlinks to deduplicate whole files.
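
(For illustration, the general shape of such an rsync invocation --
the paths here are placeholders, not my actual layout:

rsync -aHAX --delete \
    --link-dest=/backup/host/daily.1 \
    root@host:/home/ /backup/host/daily.0/

Files unchanged since daily.1 are stored as extra hardlinks rather
than new copies, so each backup generation costs only the changed
files plus directory metadata.)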


> * du(1) of the backup file system matches ZFS properties 'referenced' and
> 'usedbydataset'.

This would be expected, depending on exact specifics (what data du
traverses over and what your ZFS dataset layout is). To more closely
match the _apparent_ size of the files, you'd look at e.g.
logicalreferenced or logicalused.
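
(For example, with a made-up dataset name:

# zfs get used,logicalused,referenced,logicalreferenced tank/data

The logical* values report space accounting before compression, which
is closer to what userland tools consider the sizes of the files.)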


> * I am unable to correlate du(1) of the snapshots to any ZFS properties --
> du(1) reports much more storage than ZFS 'usedbysnapshots', even when scaled
> by 'compressratio'.

This would also be expected, as ZFS snapshots are copy-on-write and
thus in effect only bookkeep a delta, whereas du counts the apparent
size of all files accessible under a path and ZFS snapshots allow
access to all files within the file system as they appeared at the
moment the snapshot was created. There are nuances and caveats
involved but, as a first approximation, immediately after taking a ZFS
snapshot the size of the snapshot is zero (plus a small amount of
metadata overhead for the snapshot itself) regardless of the size of
the underlying dataset, and the apparent size of the snapshot grows as
changes are made to the underlying dataset which cause some data to be
referenced only by the snapshot.
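
(To see how that accrues per snapshot, something along these lines --
dataset name again hypothetical -- should do:

# zfs list -r -t snapshot -o name,used,referenced -s creation tank/data

Here 'used' is the space referenced only by that snapshot, freed if it
is destroyed, while 'referenced' is the full apparent footprint of the
dataset as of the moment that snapshot was taken.)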

In general, ZFS disk space usage accounting for snapshots is really
rather non-intuitive, but it does make more sense when you consider
that ZFS is a copy-on-write file system and that snapshots largely
boil down to an atomic point-in-time marker for dataset state.

(In ZFS, a dataset can be either a file system optionally exposed at a
directory mountpoint or a volume exposed as a block device.)

-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: Backup systems

2023-09-04 Thread David Christensen

On 9/4/23 00:53, Michael Kjörling wrote:

On 3 Sep 2023 14:20 -0700, from dpchr...@holgerdanske.com (David Christensen):

Without seeing a console session, I am unsure what you mean by "physically
stored", "total logical (excluding effects of compression) data", and "hot
current logical data ... (excluding things like ZFS snapshots and
compression)".


"Physically stored" is how much data, after compression and including
file system metadata, is actually written to disk and necessary for
all data to be accessible; it's the relevant metric for whether I need
to add disk space.

"Logical" is the sum of all apparent file sizes as visible to userland
utilities e.g. through stat(2).

Something like `dd if=/dev/zero of=$(mktemp) bs=1M count=1M` would
result in a large logical size but, because of compression, a very
small amount of physically stored data.

"Hot" is perhaps better referred to as the "current" data set; since
snapshots (and earlier backups) can include data which has since been
deleted, and is thus no longer current but still exists on disk.



What partitioning scheme, volume manager, file system, compression, etc., do
you use on your backup server?


ZFS within LUKS containers. If I recall correctly, the backup pool is
set to use zstd compression.



I had thought you were using rsnapshot / rsync --link-dest, but you also 
mention ZFS snapshots.  Please clarify.


Mostly ZFS with a rotating snapshot schedule on the source (the root
file system is ext4); copied using rsync --link-dest (through
rsnapshot) to a ZFS file system which doesn't use snapshots on the
backup target. Most of the ZFS file systems are set up to use
compression; there are a few where I know _a priori_ that the data is
in effect completely incompressible so there's no point in using CPU
to even try to compress that data, so those have compression turned
off.

(In ZFS, creating a file system is barely any more involved than
creating a directory, and all file systems come out of the same "pool"
which is a collection of >=1 storage devices set up with some
particular method of redundancy, possibly none. In more traditional
*nix parlance, a *nix file system is conceptually closer to a ZFS
pool.)

Hopefully this is more clear.



So for backup storage:

* We are both using ZFS with default compression.

* You are using 'rsync --link-dest' (via rsnapshot(1)) for deduplication 
and I am using ZFS for deduplication.


Related:

* I am using zfs-auto-snapshot(8) for snapshots.  Are you using 
rsnapshot(1) for snapshots?



Here are the current backups for my current daily driver:

2023-09-04 13:26:15 toor@f3 ~
# zfs get -o property,value \
compression,compressratio,dedup,logicalreferenced,logicalused,refcompressratio,referenced,used,usedbydataset,usedbysnapshots \
p3/backup/taz.tracy.holgerdanske.com

PROPERTY           VALUE
compression        lz4
compressratio      2.14x
dedup              verify
logicalreferenced  6.59G
logicalused        48.7G
refcompressratio   1.83x
referenced         3.89G
used               23.4G
usedbydataset      3.89G
usedbysnapshots    19.5G

2023-09-04 13:26:36 toor@f3 ~
# ls -1 /var/local/backup/taz.tracy.holgerdanske.com/.zfs/snapshot | wc -l
 186

2023-09-04 13:27:15 toor@f3 ~
# du -hs /var/local/backup/taz.tracy.holgerdanske.com/ \
/var/local/backup/taz.tracy.holgerdanske.com/.zfs

3.9G    /var/local/backup/taz.tracy.holgerdanske.com/
722G    /var/local/backup/taz.tracy.holgerdanske.com/.zfs

2023-09-04 13:28:02 toor@f3 ~
# crontab -l
 9 3 * * * /usr/local/sbin/zfs-auto-snapshot -k d 40
21 3 1 * * /usr/local/sbin/zfs-auto-snapshot -k m 99
27 3 1 1 * /usr/local/sbin/zfs-auto-snapshot -k y 99


Observations:

* du(1) of the backup file system matches ZFS properties 'referenced' 
and 'usedbydataset'.


* I am unable to correlate du(1) of the snapshots to any ZFS properties 
-- du(1) reports much more storage than ZFS 'usedbysnapshots', even when 
scaled by 'compressratio'.



David



Re: Backup systems

2023-09-04 Thread Michael Kjörling
On 3 Sep 2023 14:20 -0700, from dpchr...@holgerdanske.com (David Christensen):
>> 8.07 TiB physically stored on one backup drive holding 174 backups;
>> 11.4 TiB total logical (excluding effects of compression) data on the
>> source; 7.83 TiB hot current logical data on the source (excluding
>> things like ZFS snapshots and compression).
> 
> Without seeing a console session, I am unsure what you mean by "physically
> stored", "total logical (excluding effects of compression) data", and "hot
> current logical data ... (excluding things like ZFS snapshots and
> compression)".

"Physically stored" is how much data, after compression and including
file system metadata, is actually written to disk and necessary for
all data to be accessible; it's the relevant metric for whether I need
to add disk space.

"Logical" is the sum of all apparent file sizes as visible to userland
utilities e.g. through stat(2).

Something like `dd if=/dev/zero of=$(mktemp) bs=1M count=1M` would
result in a large logical size but, because of compression, a very
small amount of physically stored data.

"Hot" is perhaps better referred to as the "current" data set; since
snapshots (and earlier backups) can include data which has since been
deleted, and is thus no longer current but still exists on disk.


> What partitioning scheme, volume manager, file system, compression, etc., do
> you use on your backup server?

ZFS within LUKS containers. If I recall correctly, the backup pool is
set to use zstd compression.


> I had thought you were using rsnapshot / rsync --link-dest, but you also
> mention ZFS snapshots.  Please clarify.

Mostly ZFS with a rotating snapshot schedule on the source (the root
file system is ext4); copied using rsync --link-dest (through
rsnapshot) to a ZFS file system which doesn't use snapshots on the
backup target. Most of the ZFS file systems are set up to use
compression; there are a few where I know _a priori_ that the data is
in effect completely incompressible so there's no point in using CPU
to even try to compress that data, so those have compression turned
off.
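
(The rsnapshot side of this is roughly the following -- the values are
placeholders rather than my actual configuration, and note that
rsnapshot.conf requires tabs, not spaces, between fields:

snapshot_root   /backup/snapshots/
retain          daily   7
retain          weekly  4
backup          root@source:/home/      source/

Each run rotates the daily.N directories and invokes rsync with
--link-dest pointing at the previous generation.)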

(In ZFS, creating a file system is barely any more involved than
creating a directory, and all file systems come out of the same "pool"
which is a collection of >=1 storage devices set up with some
particular method of redundancy, possibly none. In more traditional
*nix parlance, a *nix file system is conceptually closer to a ZFS
pool.)
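
(E.g., hypothetically:

# zfs create tank/projects
# zfs create -o compression=off tank/images

Both appear immediately under the pool's mountpoint hierarchy and draw
from the same pool of free space.)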

Hopefully this is more clear.

-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: Backup systems

2023-09-03 Thread David Christensen

On 9/3/23 03:02, Michael Kjörling wrote:

8.07 TiB physically stored on one backup drive holding 174 backups;
11.4 TiB total logical (excluding effects of compression) data on the
source; 7.83 TiB hot current logical data on the source (excluding
things like ZFS snapshots and compression).

Which by your way of calculating seems to work out to about a 246:1
savings compared to simply keeping every single copy in full and
uncompressed.



Without seeing a console session, I am unsure what you mean by 
"physically stored", "total logical (excluding effects of compression) 
data", and "hot current logical data ... (excluding things like ZFS 
snapshots and compression)".



What partitioning scheme, volume manager, file system, compression, 
etc., do you use on your backup server?



I had thought you were using rsnapshot / rsync --link-dest, but you also 
mention ZFS snapshots.  Please clarify.



David



Re: Backup systems

2023-09-03 Thread Michael Kjörling
On 2 Sep 2023 14:49 -0700, from dpchr...@holgerdanske.com (David Christensen):
> So, 693 GB backup size, 98 backups, 67 TB apparent total backup storage, and
> 777 GB actual total backup storage.  So, a savings of about 88:1.
> 
> What statistics are other readers seeing for similar use-cases and their
> backup solutions?

8.07 TiB physically stored on one backup drive holding 174 backups;
11.4 TiB total logical (excluding effects of compression) data on the
source; 7.83 TiB hot current logical data on the source (excluding
things like ZFS snapshots and compression).

Which by your way of calculating seems to work out to about a 246:1
savings compared to simply keeping every single copy in full and
uncompressed. (Which would require almost 2 PB of storage.) But this
figure is a bit exaggerated since there are parts of the backups that
I prune after a while, while keeping the rest of that backup, so let's
be very generous and call it maybe a 100:1 savings in practice.

Which is still pretty good for something that only does raw copying
with whole-file deduplication.

I have a wide mix of file sizes and content types; everything from
tiny Maildir message files through photos in the tens of megabytes
range to VM disk image files in the tens of gigabytes range, ranging
from highly compressible to essentially incompressible, and ranging
from files that practically never change after I initially store them
to ones that change all the time.

-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: Backup systems

2023-09-02 Thread David Christensen

On 9/2/23 15:26, Michel Verdier wrote:

On 2023-09-02, David Christensen wrote:


What statistics are other readers seeing for similar use-cases and
their backup solutions?


I have 83 backups occupying about 130% of the source data size. So a ratio of 63:1.



Nice.



But because of performance limitations I don't use compression on the
backup server.



What partitioning scheme, volume manager, file system, etc., do you use 
on your backup server?



What is the performance limitation?


If you wanted compression on the backup server, how would you implement it?



And also rsync deduplication is less resource-consuming than ZFS.



Please define your system and metrics.


David



Re: Backup systems

2023-09-02 Thread David Christensen

On 9/2/23 12:15, Michel Verdier wrote:

On 2023-09-02, Stefan Monnier wrote:


I switched to Bup a few years ago and saw a significant reduction in the
size of my backups that is partly due to the deduplication *between*
machines (I backup several Debian machines to the same backup
repository) as well as because the deduplication occurs even when I move
files around (most obvious when I move directories filled with large
files like videos or music).


I setup deduplication between hosts with rsnapshot as you do. But it was
a small gain in my case as the larger part was user data, logs and the
like. So always different between hosts. I gain only on system
files. Mainly /etc as I don't backup binaries and libs.
I almost never move large directories. But if needed it's easy to move it
also in rsnapshot directories.



I have a SOHO LAN:

* My primary workstation is Debian Xfce on a 60GB  2.5" SATA SSD with 1G 
boot, 1G swap, and 12G root partitions.  It has one user (myself) with 
minimal home data (e-mail and CVS working directories).  I backup boot 
and root.


* I keep the vast majority of my data on a FreeBSD server with Samba and 
the CVS repository (via SSH) on a ZFS stripe of two mirrors containing 
two 3TB 3.5" SATA HDDs each (i.e., 6TB RAID10).  I backup the Samba data.


* I run rsync(1) and homebrew shell/Perl scripts on the server to 
back up the various LAN sources to a backup destination file system tree on 
the server.  I have enabled ZFS compression on the pool and enabled 
deduplication on the backup tree.
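
In sketch form, the properties involved are something like the
following ('p3' being the pool and 'p3/backup' the backup tree):

# zfs set compression=lz4 p3
# zfs set dedup=verify p3/backup

with the per-host backup file systems inheriting both settings.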



I ran some statistics for the daily driver backups in March.  The 
results were 4.9 GB backup size, 258 backups, 1.2 TB apparent total 
backup storage, and 29.0 GB actual total backup storage.  So, a savings 
of about 42:1:


https://www.mail-archive.com/debian-user@lists.debian.org/msg789807.html


Today, I collected some statistics for the backups of my data on the 
file server:


2023-09-02 14:10:30 toor@f3 ~
# du -hsx /jail/samba/var/local/samba/dpchrist
693G    /jail/samba/var/local/samba/dpchrist

2023-09-02 14:11:09 toor@f3 ~
# ls /jail/samba/var/local/samba/dpchrist/.zfs/snapshot | wc -l
  98

2023-09-02 14:13:50 toor@f3 ~
# du -hs /jail/samba/var/local/samba/dpchrist/.zfs/snapshot
 67T    /jail/samba/var/local/samba/dpchrist/.zfs/snapshot

2023-09-02 14:19:24 toor@f3 ~
# zfs get \
compression,compressratio,dedup,used,usedbydataset,usedbysnapshots \
p3/ds2/samba/dpchrist | sort

NAME                   PROPERTY         VALUE  SOURCE
p3/ds2/samba/dpchrist  compression      lz4    inherited from p3
p3/ds2/samba/dpchrist  compressratio    1.02x  -
p3/ds2/samba/dpchrist  dedup            off    default
p3/ds2/samba/dpchrist  used             777G   -
p3/ds2/samba/dpchrist  usedbydataset    693G   -
p3/ds2/samba/dpchrist  usedbysnapshots  84.2G  -


So, 693 GB backup size, 98 backups, 67 TB apparent total backup storage, 
and 777 GB actual total backup storage.  So, a savings of about 88:1.



What statistics are other readers seeing for similar use-cases and their 
backup solutions?



David



Re: Backup systems

2001-11-16 Thread Karsten M. Self
on Thu, Nov 15, 2001 at 12:28:17PM +, Ross Burton ([EMAIL PROTECTED]) wrote:
> Hi,
> 
> I am looking for a cheap backup system for my machine.  A recent scare
> regarding my hard drive ("is it a 75GXP?") forced me to think about
> backup policy.
> 
> I'd love to own a Jaz drive but at the moment I can't afford one. 

You could afford it less if you bought it.  Jaz is a piece of shit.
Expensive shit.  Iomega sucks.

> However, I do have a CD-RW in my machine.

Helpful, but for larger filesystems, you're going to do a lot of disk
swapping.

I recommend tape.  Highly.

> This is what I want to do in an ideal world:
> 
> I have a script I run every month.  It will examine every _user_ file
> (not system) and see what has changed since the last backup.  These
> files will be written to an ISO image which I can burn onto a CD, and
> the index of files=>locations updated.  Every few months I'll do a
> completely new set of CDs and throw away the old ones.  Basically, I
> want an incremental backup procedure which generates ISO images and will
> generate an index for me.  If I want to retrieve a single file it can
> tell me what CD it's on.  If I want to do an entire restore I can just
> give it every CD and it will extract the lot.
> 
> Anyone seen anything like this?  For the moment I'll make do with tarring
> up ~/ and putting that on CDs.
> 
> Which brings me to my next question.  I'm not up on CD filesystems.  Is
> there a filesystem for CDs which supports all of the unix features, i.e.
> long file names, permissions, owner/group etc.  Can I burn a ext2 image
> onto a CD if I will only access it in Linux?

My suggestions:

http://kmself.home.netcom.com/Linux/FAQs/backups.html

Peace.

-- 
Karsten M. Self          http://kmself.home.netcom.com/
 What part of "Gestalt" don't you understand? Home of the brave
  http://gestalt-system.sourceforge.net/   Land of the free
   Free Dmitry! Boycott Adobe! Repeal the DMCA! http://www.freesklyarov.org
Geek for Hire http://kmself.home.netcom.com/resume.html




Re: Backup systems

2001-11-15 Thread Daniel Farnsworth Teichert
This may not be exactly what you want, but then again it may.
I've been looking at a program called Mondo which does most
of what you're talking about (and can span CD's, if one isn't
big enough, &c.). The homepage is

http://www.microwerks.net/~hugo

...or something like that. I've had limited success getting
it to work on Debian, so help would be appreciated (it is,
however, being actively maintained and developed, so that
helps; and I haven't played with it much recently).

A Debian package of this program would, IMHO, be a wonderful
thing.

Anyway, HTH.

  --Daniel

On Thu, Nov 15, 2001 at 12:28:17PM +, Ross Burton wrote:
> Hi,
> 
> I am looking for a cheap backup system for my machine.  A recent scare
> regarding my hard drive ("is it a 75GXP?") forced me to think about
> backup policy.
> 
> I'd love to own a Jaz drive but at the moment I can't afford one. 
> However, I do have a CD-RW in my machine.
> 
> This is what I want to do in an ideal world:
> 
> I have a script I run every month.  It will examine every _user_ file
> (not system) and see what has changed since the last backup.  These
> files will be written to an ISO image which I can burn onto a CD, and
> the index of files=>locations updated.  Every few months I'll do a
> completely new set of CDs and throw away the old ones.  Basically, I
> want an incremental backup procedure which generates ISO images and will
> generate an index for me.  If I want to retrieve a single file it can
> tell me what CD it's on.  If I want to do an entire restore I can just
> give it every CD and it will extract the lot.
> 
> Anyone seen anything like this?  For the moment I'll make do with tarring
> up ~/ and putting that on CDs.
> 
> Which brings me to my next question.  I'm not up on CD filesystems.  Is
> there a filesystem for CDs which supports all of the unix features, i.e.
> long file names, permissions, owner/group etc.  Can I burn a ext2 image
> onto a CD if I will only access it in Linux?
> 
> Thanks for any help,
> Ross



Re: Backup systems

2001-11-15 Thread Alvin Oga

hi ya ross

a cdrw can hold about 600MB ... what about a 20GB disk of regular files?
if your 20GB hard disk is all mpeg files... it probably won't fit onto cdrw
- a bigger hard disk of compressed data files won't fit onto cdrw either
- a writable dvd gets you up to 2 or 4GB of "backup" space...

- you should always perform "incremental backups" since the last full
  backup... else if you lose a day or two of incremental backups...
  you will NOT be able to restore the system anymore - you'd be missing
  a file or two or more... hopefully non-critical files..
- i span weekly 30-day incremental backups across 4 weekly full
backups to minimize problems with "oops" in the network and backup
media and forgetful admins

- for several example backup scripts to cdrw...

http://www.Linux-Backup.net/app.gwif.html
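
- rough sketch of the "incremental since last full" part
  ( paths and dates are made up... adapt to taste )

    # stamp file gets touched only when a new FULL backup is cut
    find /home -newer /var/backups/last-full.stamp -type f -print \
        | tar -czf /tmp/incr-`date +%Y%m%d`.tar.gz -T -
    mkisofs -R -o /tmp/incr.iso /tmp/incr-*.tar.gz
    cdrecord dev=0,0,0 /tmp/incr.iso

  ( tar -T - reads the file list from stdin...
    mkisofs -R keeps long names and permissions via rock ridge )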

c ya
alvin
http://www.Linux-1U.net ... 500Gb 1U Raid5 ...


On 15 Nov 2001, Ross Burton wrote:

> Hi,
> 
> I am looking for a cheap backup system for my machine.  A recent scare
> regarding my hard drive ("is it a 75GXP?") forced me to think about
> backup policy.
> 
> I'd love to own a Jaz drive but at the moment I can't afford one. 
> However, I do have a CD-RW in my machine.
> 
> This is what I want to do in an ideal world:
> 
> I have a script I run every month.  It will examine every _user_ file
> (not system) and see what has changed since the last backup.  These
> files will be written to an ISO image which I can burn onto a CD, and
> the index of files=>locations updated.  Every few months I'll do a
> completely new set of CDs and throw away the old ones.  Basically, I
> want an incremental backup procedure which generates ISO images and will
> generate an index for me.  If I want to retrieve a single file it can
> tell me what CD it's on.  If I want to do an entire restore I can just
> give it every CD and it will extract the lot.
> 
> Anyone seen anything like this?  For the moment I'll make do with tarring
> up ~/ and putting that on CDs.
> 
> Which brings me to my next question.  I'm not up on CD filesystems.  Is
> there a filesystem for CDs which supports all of the unix features, i.e.
> long file names, permissions, owner/group etc.  Can I burn a ext2 image
> onto a CD if I will only access it in Linux?
> 
> Thanks for any help,
> Ross
> --
> Ross Burton   mail: [EMAIL PROTECTED]
>jabber: [EMAIL PROTECTED]
>  PGP Fingerprint: 1A21 F5B0 D8D0 CFE3 81D4 E25A 2D09 E447 D0B4 33DF
> 



Re: Backup systems: opinions wanted

1997-12-03 Thread Gary L. Hennigan
Udjat the BitMeister <[EMAIL PROTECTED]> wrote:
> I use bru 2000 and I am _very_ happy with it.
> I also use cpio to copy my whole filesystem (/) to a different partition
> (/snapshot) to have an online read-only backup of files.
> You could get by with cpio but I like the tape verify and other features
> of bru 2000. Take a look at www.estinc.com

Well, the price is certainly right!

I can offer up my own opinions:

dump - I really like the interface when you do a restore. Being able
to navigate the tape in a directory structure and picking and choosing
the files/directories you want to restore. Unfortunately, I also run
Win95 on my PC and enjoy being able to back up all my partitions,
including FAT, under Linux. In fact, I don't even make my SCSI devices
visible to Win95. dump, at least on my machine, won't back up FAT
partitions.

cpio and variants - Never used 'em, at least not for backups.

GNU tar - It works great and you can take the tapes to just about any
other machine with the same type of drive and yank whatever you want
off. I don't like the fact that it insists on backing up empty
directories and directories that contain files that haven't changed
since the last backup. I haven't delved much into the incremental/full
capabilities that are built into tar, I generally use a script that
records the date of a particular backup and then use the "--newer"
argument to do incrementals based on the date my script records.
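
The essence of that script is no more than something like this (paths
and devices are illustrative only):

#!/bin/sh
# record the new date first, so files changed while the backup runs
# are picked up again by the next run
STAMP=/var/backups/last-backup-date
date '+%Y-%m-%d %H:%M' > $STAMP.new
tar -cf /dev/st0 --newer "`cat $STAMP`" /home /etc
mv $STAMP.new $STAMP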

My main gripe with all of the backup utilities I've used is you have
to manually come up with some method to get decent redundancy. For
example, I use two sets of tapes, each set has a single full backup
tape (or tapes if a full backup required more than a single tape) and
a couple of incremental tapes associated with that full backup
tape. This is pretty much standard procedure for doing backups on a
LAN, yet every place I worked as a sysadmin had their own "script" to
keep track of what set of tapes was next and what was on previous sets 
of tapes, etc. Does anyone know of a better solution? My script works
fine, but it's a royal pain to have to keep track of what tapes have
what, and what incremental tape I need to use next. Of course I could
make my script more elaborate but my main purpose in life isn't to
write a perfect backup script, especially if I could find an existing
piece of software that already does this!

Any suggestions?
Gary Hennigan




Re: Backup systems: opinions wanted

1997-12-03 Thread tibor simko
> "manoj" == Manoj Srivastava <[EMAIL PROTECTED]> writes:

manoj> I guess I would like to hear about dump vs afio.

i am using afio to back up the important parts of the system, with the
aid of a small script similar to those at /usr/doc/afio/examples, and
am quite happy with it.

my backup medium is fast (iomega jaz drive), so performance issues
aren't a concern.  as far as reliability is concerned, i use
only "afio -r" right after backups.  i've had no problems up to now...
-- 
[EMAIL PROTECTED]







Re: Backup systems: opinions wanted

1997-12-03 Thread Udjat the BitMeister...

I use bru 2000 and I am _very_ happy with it.
I also use cpio to copy my whole filesystem (/) to a different partition
(/snapshot) to have an online read-only backup of files.
You could get by with cpio but I like the tape verify and other features
of bru 2000. Take a look at www.estinc.com
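
The cpio part boils down to roughly this (mount points aside):

find / -xdev -print | cpio -pdm /snapshot

-p is cpio's pass-through mode; -d creates directories as needed, -m
preserves modification times, and find's -xdev keeps it from
descending into /snapshot itself.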


On 3 Dec 1997, Manoj Srivastava wrote:

> Hi,
> 
>   How do people backup their machines? What packages do you use?
>  How do the backends (dd, dump/restore, tar, afio/cpio) compare wrt
>  reliability/ease of use? [dd is just for completeness.] I would like
>  a full backup, so I guess tar is out as a backend (can't handle
>  special files). I guess I would like to hear about dump vs afio. 
> 
> ____________________________________________________________________
>  Tar                                 | cpio / afio
> ____________________________________________________________________
>  can't handle special files.         | may get confused with multiple
>                                      | hard links.
>                                      |
>  One copy of hard-linked files, but  | Many copies of hardlinked files,
>  can retrieve file using that one    | but can be restored using any
>  name only.                          | of the names
>                                      |
>  Uses checksums.                     | No checksums
>                                      |
>  stops at first sign of corruption   | Skips over corrupted area
>                                      |
>  Blocked to start on a record        |
>  boundary                            |
>                                      |
>  headers always 512 bytes            | Efficient use of space for headers
> ____________________________________________________________________
>  
> 
>   The last time I dealt with backups, I was backing up 30
>  machines remotely to a tape drive like the monster ones in all the
>  70's movies, using a mess of home grown scripts and dump/restore. 
> 
>   I'd rather not have to re-write the scripts (haven't things
>  gotten easier in the last decade?), so I'm now looking for backup
>  solutions where I don't have to write the scripts. I have come up
>  with the following (based entirely on the descriptions)
> __
>  Amanda: Powerful. Reassuringly, it seems to use dump/restore, which I
>  understand. Knows which tape and where on the tape to look
>  for to restore a file (I like that). Cons: Overkill for a
>  single machine.
> 
>  afbackup: Again, client server, which I don't need; says it should be
>easy to use on just one machine. goes to end of tape
>automatically. Hmm. tape marks written (I assume that's
>what the description is trying to say). No idea what the
>backend is -- afio?
> 
>  dump:An old friend. I used to do tower of hanoi backups -- has
>   dump levels, is integrated in (even fstab format caters to
>   dump/restore). Requires book keeping. Reliability of Linux
>   dump? 
> 
>  tob: tar/afio. full/differential/incremental backups, determines
>size beforehand
> 
>  floppybackup: Well, I have a tape.
> 
>  taper:   selection using mouseless commander? recursively selected
>   dirs are supported? This does not sound like what I need to
>   backup several *partitions*.
> __
> 
>
>   manoj
> 
> -- 
>  "When the going gets weird, the weird turn pro..." Dr. Hunter
>  S. Thompson
> Manoj Srivastava  <[EMAIL PROTECTED]> 
> Key C7261095 fingerprint = CB D9 F4 12 68 07 E4 05  CC 2D 27 12 1D F5 E8 6E
> 
> 

--
  Enter any 11-digit prime number to continue...
,, /
   ( ">  ___
  _(-})  B I T B U R N   A C C E S S    System Administrator
.'  ^^   http://www.bitburn.org/   mailto: [EMAIL PROTECTED]
`->




Re: Backup systems: opinions wanted

1997-12-03 Thread Eloy A. Paris
Manoj Srivastava <[EMAIL PROTECTED]> wrote:

:   How do people backup their machines? What packages do you use?
:  How do the backends (dd, dump/restore, tar, afio/cpio) compare wrt
:  reliability/ease of use? [dd is just for completeness.] I would like
:  a full backup, so I guess tar is out as a backend (can't handle
:  special files). I guess I would like to hear about dump vs afio. 

We use BRU here. It's a commercial product but we like it and it's
quite good. Being commercial is the only drawback.

There is another tool called KBackup. I think it uses tar or cpio as
the backend. I haven't taken a look at KBackup lately but I am
planning to do so.

Regards,

E.-

-- 

Eloy A. Paris
Information Technology Department
Rockwell Automation de Venezuela
Telephone: +58-2-9432311 Fax: +58-2-9431645



