Nice experiment, but I hope you don't do that in production, nor anywhere data integrity is needed.

What do the makers of your SAS drives claim for "nonrecoverable read errors per bits read", for example?

As an example, let's look at a 6TB Seagate: one nonrecoverable sector read error per 10^15 bits read. If I'm right, that means you read the whole drive roughly 20 times and, statistically, you hit that error. Then you have to hope the maker's firmware programmers made no critical mistakes and the firmware actually reports the error to your SAS card, and that the SAS card's firmware programmers were at least of the same quality, so their firmware survives the sector read error and hands your read over to another drive. And that's just one case. What if the drive does not detect the corrupted sector and therefore does not report it at all?
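For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The 6 TB capacity and the 1-error-per-10^15-bits figure are just the example numbers from above, not a statement about any particular drive model:

# Back-of-the-envelope: how often do you expect a nonrecoverable
# read error (URE) when reading a 6 TB drive end to end?

DRIVE_BYTES = 6 * 10**12        # 6 TB, decimal, as marketed
DRIVE_BITS = DRIVE_BYTES * 8    # bits per full-drive read
URE_RATE = 1 / 10**15           # 1 nonrecoverable error per 10^15 bits read

errors_per_full_read = DRIVE_BITS * URE_RATE
reads_per_expected_error = 1 / errors_per_full_read

print(f"bits per full read:          {DRIVE_BITS:.2e}")
print(f"expected UREs per full read: {errors_per_full_read:.3f}")
print(f"full reads per expected URE: {reads_per_expected_error:.1f}")
# -> about 0.048 expected errors per full read,
#    i.e. one error roughly every ~21 full-drive reads.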

To me this looks like too much praying for luck. With that amount of data, I would stay with ZFS...

Good luck!
Karel

On 11/14/20 1:50 PM, Mischa wrote:
Hi All,

I am currently in the process of building a large filesystem with
12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
central, mostly download, platform with around 100 concurrent
connections.

The current system is running FreeBSD with ZFS and I would like to
see if it's possible on OpenBSD, as it's one of the last two systems
on FreeBSD left. :)

Has anybody built a large filesystem using FFS2? Is it a good idea?
How does it perform? What are good tests to run?

Your help and suggestions are really appreciated!

Mischa

