On Fri, 09 Feb 2024 04:30:14 +0000
Richmond <dnomh...@gmx.com> wrote:

> So you need to store a lot of data and then verify that it has written
> with 'diff'.

Yeah.

I've been thinking about this. Yeah, I know: dangerous.

What I would do is write a function to write 4096 bytes of repeating
data, the data being the number of the block being written. So the
first block is all zeros, the second all ones, etc. For convenience
the values would be 64-bit unsigned ints.

And, given the block number, a function to verify that block number N
is full of Ns and nothing else.
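
Something like this, as a rough sketch in C (fill_block() and
verify_block() are my names, and it's untested):

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096
#define WORDS_PER_BLOCK (BLOCK_SIZE / sizeof(uint64_t))  /* 512 */

/* Fill buf with the block number, repeated 512 times. */
static void fill_block(uint64_t *buf, uint64_t block_num)
{
    for (size_t i = 0; i < WORDS_PER_BLOCK; i++)
        buf[i] = block_num;
}

/* Return 1 if block N contains nothing but Ns, 0 otherwise. */
static int verify_block(const uint64_t *buf, uint64_t block_num)
{
    for (size_t i = 0; i < WORDS_PER_BLOCK; i++)
        if (buf[i] != block_num)
            return 0;
    return 1;
}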

By doing it this way, we don't have to keep copies of what we've
written. We only have to keep track of which block got written to which
LBA so we can go back and check it later.

Now, divide the drive in half. Write block zero at the midpoint. Divide
the two halves each in half, and write blocks one and two at the new
midpoints. Divide again, and write blocks three through six (each level
doubles the count). Etc., a nice binary division.
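
Because each level's midpoints land at odd multiples of that level's
spacing, they never collide with LBAs already written. A sketch of the
order (my own illustration, untested; the 64 GiB size is hypothetical):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t total_blocks = 1ULL << 24;  /* hypothetical 64 GiB drive */
    uint64_t block_num = 0;

    for (int level = 0; level < 4; level++) {
        /* Spacing between midpoints at this level. */
        uint64_t step = total_blocks >> (level + 1);
        if (step == 0)
            break;
        for (uint64_t i = 0; i < (1ULL << level); i++) {
            /* Odd multiples of step: never collides with an
               earlier level's midpoint. */
            uint64_t lba = (2 * i + 1) * step;
            printf("block %llu -> LBA block %llu\n",
                   (unsigned long long)block_num++,
                   (unsigned long long)lba);
        }
        /* This is where I'd re-verify everything written so far. */
    }
    return 0;
}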

Every once in a while, I would go back and verify the blocks already
written. Maybe every time I subdivide again.

If we're really lucky, and the perpetrators really stupid, the 0th block
will fail, and we have instant proof that the drive is a failure.
We don't care why the drive is failing, only that the 0th block (which
is clearly not at the end of the drive) has failed.

Here's a conjecture: This was designed to get people who use FAT and
NTFS. I know that FAT starts writing at the beginning of the partition,
and goes from there. This is because floppy disks (remember them?) have
track 0 at the outside, which is far more reliable than the tracks at
the hub simply because each flux reversal is longer. So the first
64G should be fine; only after you get past there do you see bad
sectors. I believe NTFS does similarly.

But I don't think that's what they're doing. Other operating systems
have put the root directory and file allocation table (or equivalent)
in the middle of the disk (for faster access), Apple DOS for one.
mkfs.extX writes blocks all over the place.

I think that they are re-allocating sectors on the fly, regardless of
the LBA, until they run out of real sectors. So we write 64G+ of my
4096-byte blocks. It'll take a while, but who cares?
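
The write side might look like this (again my own sketch, untested).
Opening the device with O_DIRECT keeps the page cache from hiding
whether the bits really landed; posix_memalign() provides the aligned
buffer O_DIRECT wants:

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

/* fd would be opened elsewhere as open("/dev/sdX", O_RDWR | O_DIRECT),
   which needs _GNU_SOURCE on Linux. lba is in 4096-byte units. */
static int write_test_block(int fd, uint64_t block_num, uint64_t lba)
{
    uint64_t *buf;

    /* O_DIRECT requires an aligned buffer. */
    if (posix_memalign((void **)&buf, BLOCK_SIZE, BLOCK_SIZE) != 0)
        return -1;
    for (size_t i = 0; i < BLOCK_SIZE / sizeof(uint64_t); i++)
        buf[i] = block_num;       /* same fill as fill_block() above */

    ssize_t n = pwrite(fd, buf, BLOCK_SIZE,
                       (off_t)(lba * BLOCK_SIZE));
    free(buf);
    return n == (ssize_t)BLOCK_SIZE ? 0 : -1;
}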

If Gibson is correct that these things only have 64 gig of real memory,
and my arithmetic is correct, we should start seeing failures after
writing 16,777,216 of my 4096-byte blocks.
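
(Checking the arithmetic: 64 GiB is 2^36 bytes, and 2^36 / 2^12 = 2^24
= 16,777,216 blocks of 4096 bytes. That assumes binary gigabytes; if
the real capacity is 64 x 10^9 bytes, it's 15,625,000 blocks instead.)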

Of course, these things might allocate larger sectors than 4096 bytes,
in which case we'll hit the limit sooner.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/
