What I do is write the entire disk and read it back. I wrote a program, which I should perhaps package, that writes mostly zeros but stamps each block with its block number; on read it checks for that. However, in doing these tests I have essentially never, maybe never, had a read return a non-error status with the wrong data.
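Roughly, the idea is the sketch below. This is not the actual program, just an illustration in Python; the 64k block size, the little-endian stamp at the start of each block, and taking the device path as an argument are all assumptions.

#!/usr/bin/env python3
# Illustrative sketch: stamp every block with its block number (rest zeros),
# then read the device back and check each stamp.  DESTROYS the contents of
# whatever device you point it at -- triple-check the argument.
import os
import struct
import sys

BLOCK_SIZE = 64 * 1024    # assumed block size; any multiple of the sector size

def write_pattern(dev):
    """Fill the device with zeroed blocks, each stamped with its block number."""
    fd = os.open(dev, os.O_WRONLY)
    buf = bytearray(BLOCK_SIZE)
    blkno = 0
    try:
        while True:
            struct.pack_into("<Q", buf, 0, blkno)   # stamp the block number
            try:
                n = os.write(fd, buf)
            except OSError:
                break                               # hit the end of the device
            if n < BLOCK_SIZE:
                break                               # partial write at the end
            blkno += 1
    finally:
        os.close(fd)
    return blkno                                    # full blocks written

def verify_pattern(dev, nblocks):
    """Read the device back; report read errors and mismatched stamps."""
    fd = os.open(dev, os.O_RDONLY)
    bad = 0
    try:
        for blkno in range(nblocks):
            try:
                buf = os.read(fd, BLOCK_SIZE)
            except OSError as e:
                print(f"block {blkno}: read error: {e}")
                bad += 1
                # skip past the failing block and keep going
                os.lseek(fd, (blkno + 1) * BLOCK_SIZE, os.SEEK_SET)
                continue
            if len(buf) < BLOCK_SIZE:
                print(f"block {blkno}: short read ({len(buf)} bytes)")
                break
            (stamp,) = struct.unpack_from("<Q", buf, 0)
            if stamp != blkno:
                print(f"block {blkno}: expected stamp {blkno}, got {stamp}")
                bad += 1
    finally:
        os.close(fd)
    return bad

if __name__ == "__main__":
    dev = sys.argv[1]                               # e.g. /dev/rsd4c
    n = write_pattern(dev)
    print(f"wrote {n} blocks of {BLOCK_SIZE} bytes, reading back...")
    print(f"{verify_pattern(dev, n)} bad blocks")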
Random writes can be very slow (because urandom has to generate the data). I would thus recommend:

  dd if=/dev/zero of=/dev/rsd4c bs=32k
  # actually try a few bs= values, all 2^N, and find the smallest one
  # such that doubling doesn't really help.  I tend to use 1m, but
  # that's on computers from >=2010 with at least 1G of system RAM.
  dd if=/dev/rsd4c of=/dev/null bs=32k

Of course, be super careful not to zero the wrong device; that's a major hesitation for me in publishing code. Then maybe power it down for a few days, start it up, and read again.

Another suggestion is to read first, before you write. When a block goes bad, the controller detects it and replaces it with a spare, but you get a forced error on the read until you write it. If this happens for a few blocks once in a great while, it's sort of ok, but once it's more than rare, it's a sign of trouble and the disk is not going to be ok. I have a script that does reads via dd of all my on-line disks across all systems, and I run it every 2 months. Mostly they are ok; there is one old disk that needs some bad-block fixups. I'd normally replace that disk, but for reasons you are unlikely to share, it's hard.
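For what it's worth, that bimonthly sweep can be as simple as the sketch below. Mine is a script around dd; this is just an illustration of the same idea in Python, and the device list is an assumption you would adjust per system.

#!/usr/bin/env python3
# Read-only health sweep: read each listed raw device end to end via dd
# and report anything that does not come back clean.
import subprocess

# Raw devices to sweep; these names are examples only -- adjust per system.
DISKS = ["/dev/rsd0c", "/dev/rsd1c"]

for disk in DISKS:
    print(f"=== {disk} ===")
    result = subprocess.run(
        ["dd", f"if={disk}", "of=/dev/null", "bs=1m"],
        capture_output=True,
        text=True,
    )
    # dd prints its transfer summary (and any I/O error messages) on stderr.
    print(result.stderr.strip())
    if result.returncode != 0:
        print(f"*** {disk}: dd exited {result.returncode}; check for bad blocks")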