You have to make the best of what you have to work with, needless to say. Are
you making arguments in favor of long-term optical storage? It doesn't seem so,
but if you were, you lost me.

On Tuesday, January 17, 2023, 05:32:33 AM EST, Peter Corlett via cctalk
<cctalk@classiccmp.org> wrote:

On Tue, Jan 17, 2023 at 05:42:55AM +0000, Chris via cctalk wrote:
[...]
> The only answer that anyone can provide is redundancy. Keep 2 or 3 copies
> of everything on separate external drives. Every 3 to 5 years buy new
> drives and transfer the data to them. Or just run checkdisk twice a year
> and wait for 1 drive to start popping errors. Replace it. Wait for the
> other to fail. Then replace it.

If you mean CHKDSK.EXE, it's broadly equivalent to Unix fsck plus a surface
scan, and all fsck does is check and repair filesystem _metadata_. If the
metadata is corrupt then that's a good sign that the data itself is also
toast, but a successful verification of the metadata does not tell you
anything useful about the data itself.
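
To actually verify the data you need checksums of the contents themselves,
taken while the files are known to be good and re-checked later. As a rough
sketch of the idea (nothing fsck or CHKDSK will do for you, and the paths and
filenames below are only examples), something like this runs anywhere Python
does:

    # verify_tree.py -- record SHA-256 hashes of every file under a
    # directory, or re-check them against a previously saved manifest.
    import hashlib, json, os, sys

    def hash_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def walk(root):
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                yield os.path.relpath(p, root), hash_file(p)

    if sys.argv[1] == "record":   # verify_tree.py record /mnt/archive manifest.json
        with open(sys.argv[3], "w") as out:
            json.dump(dict(walk(sys.argv[2])), out, indent=1)
    else:                         # verify_tree.py check /mnt/archive manifest.json
        with open(sys.argv[3]) as f:
            manifest = json.load(f)
        for rel, digest in manifest.items():
            if hash_file(os.path.join(sys.argv[2], rel)) != digest:
                print("MISMATCH:", rel)

Run it once when the copies are made and again at each checking interval; a
mismatch tells you which of your two or three copies needs refreshing, which
is exactly the part a surface scan cannot tell you.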

The surface scan asks the drive to read each sector, and relies on the disk
correctly identifying sectors which have changed from when they were
written. This is almost always the case, but that "< 1 in 10¹⁴" in the
datasheet is still not zero. And that's before we consider dodgy SATA cables
and buggy disk controllers. (SAS won't save you either: what it gives in
increased quality, it takes away in extra complexity.)
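
To put that datasheet figure in some perspective: read it as one unrecoverable
error per 10^14 bits read and, as a deliberate simplification, treat errors as
independent, and a single full read of a 100 TB pool is almost certain to hit
at least one:

    # Back-of-the-envelope only: assumes the quoted "< 1 in 10^14" is a
    # per-bit unrecoverable read error rate and that errors are independent.
    import math
    per_bit = 1e-14
    bits_read = 100e12 * 8                      # one full read of 100 TB
    print(1 - math.exp(-per_bit * bits_read))   # ~0.9997

The real figure is an upper bound and errors cluster in practice, so take the
number as an order-of-magnitude argument, not a prediction.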

On typical Windows desktop computers, the probability that something else
will go wrong and destroy the system is way higher than the raw error rate
of the disk, but on non-toy systems with many tens or hundreds of terabytes
of data, the probability of a disk lying rises uncomfortably close to 1. A
good filesystem needs to defend against disks which do that. FAT and NTFS
are not good filesystems by that measure.
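
The defence is conceptually simple: keep a checksum of every block alongside
the metadata, verify it on every read, and when it fails, fetch the block from
a redundant copy instead of handing back garbage. Roughly (a toy model, not
any particular filesystem's code):

    # Toy model of a checksumming read path: the idea a "good filesystem"
    # has to implement, not any real implementation.
    import hashlib

    def read_block(copies, stored_checksum):
        """copies: the same logical block as held by each redundant device."""
        for data in copies:
            if hashlib.sha256(data).hexdigest() == stored_checksum:
                return data               # first copy that still matches wins
        raise IOError("all copies corrupt: report it, don't return junk")

Whether the redundant copy lives inside the filesystem or on your second and
third external drives, the detection step has to come first.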
