Hi,

David Christensen wrote:
> I have a SOHO file server with ~1 TB of data.  I would like to archive the data
> by burning it to a series of optical discs organized by time (e.g. mtime).
> I expect to periodically burn additional discs in the future, each covering
> a span of time from the previous last disc to the then-current time.

I use my own software for making incremental multi-volume backups, based
on file timestamps (m and c), inode numbers, and content checksums.
  http://scdbackup.webframe.org/main_eng.html
  http://scdbackup.webframe.org/examples.html#incremental

The software and its texts are quite old. The proposed backup scheme
is not in use here any more.
Instead I have four independent backup families, each comprising
levels 0 to N with no repetitions below the current level N.
Further, I keep backups of the configuration and the memorized file
lists on 4 CDs.

Level 0 fills dozens of BD-RE discs. The other levels fill at most one
BD-RE. Level N of each family exists in three copies, which grow larger
with each backup run of that level. Whenever this level's BD threatens to
overflow, I archive the latest BD of that level and start level N+1.
That step is a bit bumpy, because I have to restore the file lists of
level N from CD after a backup has been planned but not performed.
When overflow is foreseeable, I make a copy of the file lists on disk
before I start the planning run, or I simply start level N+1 without
waiting for the overflow.

I use scdbackup for the slowly growing bulk of my file collection.
The agile parts of my hard disk amount to only about 5 GB and get covered
by incremental multi-session backups with xorriso (which learned a lot
about incrementality from scdbackup). With zisofs compression I can put
about 30 incremental backups on one DVD+RW, or 250 backups on one BD-RE.
Day by day.


> The term "archive management system" comes to mind.

I would not attribute this title to scdbackup. It was created to scratch
my itch when hard disks grew much faster in capacity than the backup
media did. (It was also my motivation to start programming ISO 9660
producers and burn programs.)

So it might be that you are better off with a more professional backup
system. :))

(Else we will probably have to read together
  http://scdbackup.webframe.org/cd_backup_planer_help
and my backup configurations to compose configurations for you.)

----------------------------------------------------------------------
About timestamps and incremental backup:

If you only go by mtime, then you miss changes of file attributes,
which are indicated by ctime.
Even more important, timestamps alone are not a reliable way to determine
which files are new at their current location in the directory tree.
If you move a file from one directory to another, the content timestamp
(mtime) of the file does _not_ get updated. Only the two involved
directories get new timestamps.
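This is easy to check directly. Here is a minimal Python demonstration
(assuming a Linux filesystem; the file names are made up for the example):

```python
import os
import tempfile
import time

# Demonstration: moving a file leaves its mtime untouched,
# while the directory it lands in gets a new timestamp.
base = tempfile.mkdtemp()
src_dir = os.path.join(base, "a")
dst_dir = os.path.join(base, "b")
os.mkdir(src_dir)
os.mkdir(dst_dir)

path = os.path.join(src_dir, "file.txt")
with open(path, "w") as f:
    f.write("data")
file_mtime = os.stat(path).st_mtime_ns
dir_mtime = os.stat(dst_dir).st_mtime_ns

time.sleep(1.1)  # make the timestamp difference visible even on coarse filesystems
new_path = os.path.join(dst_dir, "file.txt")
os.rename(path, new_path)  # the equivalent of "mv a/file.txt b/file.txt"

assert os.stat(new_path).st_mtime_ns == file_mtime  # file mtime unchanged
assert os.stat(dst_dir).st_mtime_ns > dir_mtime     # directory got a new timestamp
```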
So when the backup tool encounters directories with young timestamps,
it has to use other means to determine whether their data files were
moved. scdbackup uses recorded device and inode numbers, and as a last
resort recorded MD5 sums, for that purpose.

(Of course, content MD5 comparison is slow and causes high disk load,
compared to simple directory tree traversal with timestamps and inode
numbers. So scdbackup tries to avoid this when possible and allowed
by the -changetest_options in the backup configuration file.)
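The combined test can be sketched like this. This is not scdbackup's
actual code, just a hypothetical Python illustration of the idea; the
function names and the record layout are my own invention, and the MD5
checksum is only consulted when the cheap (dev, inode, mtime) tests are
inconclusive:

```python
import hashlib
import os

def md5_of(path):
    # Content checksum: the slow last resort.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Record (device, inode) -> (mtime, md5) for every data file under
    root, as a previous backup run would have memorized it."""
    rec = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            rec[(st.st_dev, st.st_ino)] = (st.st_mtime_ns, md5_of(p))
    return rec

def classify(path, recorded):
    """Classify a file found below a directory with a young timestamp."""
    st = os.stat(path)
    key = (st.st_dev, st.st_ino)
    if key in recorded:
        mtime, _digest = recorded[key]
        if st.st_mtime_ns == mtime:
            return "moved"      # same inode, same mtime: already in the backup
        return "changed"        # same inode, new mtime: back it up again
    # Last resort: a copied file has a new inode but identical content.
    digests = {d for _m, d in recorded.values()}
    if md5_of(path) in digests:
        return "copied"         # content is already in the backup
    return "new"
```

A moved file keeps its inode and mtime and is reported as "moved", so
its content does not have to be stored again.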


Have a nice day :)

Thomas
