On 3/29/2011 12:44, Chris Lee wrote:
On 29/03/11 16:29, Jon LaBadie wrote:
On Tue, Mar 29, 2011 at 10:26:29AM -0400, Chris Hoogendyk wrote:
On 3/29/11 10:00 AM, Charles Curley wrote:
On Tue, 29 Mar 2011 12:01:49 +0100
Chris Lee<cslee-l...@cybericom.co.uk> wrote:
I was just thinking about my virtual tapes and the chances of a
failed sector or two going unnoticed until I needed to restore my
data.
Modern hard drives handle bad sectors for you transparently. They swap
in a spare sector, without notifying you. The only way you will see a
report is if the hard drive runs out of spare sectors. If you see a
bad
sector report, you have worse problems than a bad backup. Go buy a
replacement drive immediately.
If you are concerned about the reliability of your hard drives, look
into smartmontools. It uses the drive's firmware to test and collect
data. Unfortunately sometimes the reports can be rather cryptic to the
non-hard-drive-literate.
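For example, something like this will run a full-surface self-test and
then show the results (assuming a drive at /dev/sda; adjust the device
name for your system):

  # quick overall health verdict from the drive's firmware
  smartctl -H /dev/sda
  # start a long (full-surface) self-test; it runs inside the drive,
  # in the background
  smartctl -t long /dev/sda
  # once it finishes, review the self-test log and attribute table
  smartctl -a /dev/sda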
Or go with ZFS on Solaris or FreeBSD or ... see
http://en.wikipedia.org/wiki/ZFS#Platforms.
See http://www.zdnet.com/blog/storage/zfs-data-integrity-tested/811
I think BtrFS also checksums data and metadata.
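If you go that route, a Btrfs scrub will re-read everything and verify
those checksums on demand, something like (assuming the filesystem is
mounted at /mnt/backup):

  # re-read all data and metadata and verify checksums
  btrfs scrub start /mnt/backup
  # check progress and any errors found
  btrfs scrub status /mnt/backup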
jon
All the file systems and hard drives only "fix" a problem if they
happen to read the affected data and notice it.
What happens when those backups just sit there for months with no one
reading them, and then we finally read one and find there is not
enough left to repair anything?
It only takes a stray cosmic particle to take out enough data to make
a good backup not good enough, and I would rather know about that
before I need it.
Maybe just reading all the bits every day is enough to get the hard
drive to swap out bad blocks, but is the data all still there? Was the
damage enough to fool the error correction code?
Now that I can read my backups for very little cost, I am happy to
waste some processor cycles hashing them to be sure they are all still
happy.
Another use I have found for this hashing is keeping an eye on all my
old photos, letting me know if any of them have gone bad so I can
restore them from a recent backup.
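A minimal sketch with standard coreutils (the /backups path and the
manifest filename are just placeholders):

  # build a manifest of SHA-256 hashes, one line per file
  find /backups -type f -exec sha256sum {} + > backup-hashes.sha256
  # later, from cron: re-read every file and report any mismatches
  sha256sum --quiet -c backup-hashes.sha256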
Chris.
ZFS stores the integrity verification and repair data on disk, has
options to keep complete duplicates or just parity data, checksums
data as it is written, verifies it on every read and on demand (in the
form of a "scrub"), and works automatically without appreciable user
intervention. It supports mirroring across physical disks, as well as
RAID-5-style raidz, which avoids the write hole. If you REALLY want
checksums verified directly after a backup, you can write a script
along these lines:
run-backup.sh:

  #!/bin/bash
  # usage: run-backup.sh <backup set> <zpool the vtapes are stored on>
  # run the Amanda dump, then scrub the pool the virtual tapes live on
  amdump "$1"
  zpool scrub "$2"

and call "run-backup.sh DailySet1 backup-pool" from cron.