Chris Zakelj wrote:
Travers Buda wrote:

 I can certainly see various drive makers pushing capacity
 irrespective of reliability.  Germane to this case, some of them
 reduce the storage reserved for bad-sector remapping to gain that extra capacity.

Going along with this, on a recent trip to my local computer megastore, I noticed that 1TB SATA drives are starting to hit the market. With RAID cards like arc(4) around, that makes it pretty easy to build really massive arrays. I'm no good at reading code, so I'm wondering whether any thought is being given to making the physical drive size limitation (not the filesystem size... I totally understand why those should be kept small) described at http://www.openbsd.org/faq/faq14.html#LargeDrive a non-issue on 64-bit platforms. I realize, of course, that it's a lot harder than just widening an int to a 64-bit type, since fdisk and so on would need to be made 64-bit safe as well.
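For anyone wondering where the ceiling in that FAQ entry comes from: assuming it is the usual 32-bit sector count with 512-byte sectors, the arithmetic caps a device at 2 TiB (1 TiB if the count is signed). A minimal sketch of that arithmetic, not taken from the OpenBSD source:

/*
 * Back-of-the-envelope arithmetic, not OpenBSD source: why 32-bit
 * sector numbers cap a device at 2 TiB with 512-byte sectors.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    uint64_t sectors = (uint64_t)1 << 32;   /* 2^32 addressable sectors */
    uint64_t bytes = sectors * 512;         /* assuming 512-byte sectors */

    printf("unsigned 32-bit sector count: %llu bytes (%llu GiB)\n",
        (unsigned long long)bytes, (unsigned long long)(bytes >> 30));
    printf("signed 32-bit sector count:   %llu GiB\n",
        (unsigned long long)(bytes >> 31));
    return 0;
}

So a single 1TB drive still fits under that limit, but even a modest arc(4) array blows past it.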


Yeah!
I've got a 500 GB eSATA drive mounted, with 6 slices.
The problem is not how to address the drive;
the problem is backing up all that data.
That means, in the end, 4 GB per DVD, or XFS, or a cluster.
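For the DVD route, something like the following would do the carving. It is only a rough sketch: the backup.dump name and the 4 GB piece size are placeholders, and split(1) already does the same job without any code.

/*
 * Rough sketch: carve a large dump into DVD-sized pieces.
 */
#include <stdio.h>

#define PIECE (4000ULL * 1000 * 1000)   /* ~4 GB, headroom on a 4.7 GB DVD */

int
main(void)
{
    FILE *in, *out = NULL;
    char buf[64 * 1024], name[64];
    unsigned long long written = PIECE; /* forces the first piece to open */
    unsigned piece = 0;
    size_t n;

    if ((in = fopen("backup.dump", "rb")) == NULL) {   /* placeholder name */
        perror("backup.dump");
        return 1;
    }
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        if (written + n > PIECE) {      /* time for the next DVD-sized piece */
            if (out != NULL)
                fclose(out);
            snprintf(name, sizeof(name), "backup.dump.%03u", piece++);
            if ((out = fopen(name, "wb")) == NULL) {
                perror(name);
                return 1;
            }
            written = 0;
        }
        fwrite(buf, 1, n, out);
        written += n;
    }
    if (out != NULL)
        fclose(out);
    fclose(in);
    return 0;
}

Each backup.dump.NNN piece then fits on one disc, and cat(1) in order puts the original back together.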
My main database, the one I can't live without, is 500 MB and expected to grow to 1 GB by the end of the year (one out of 500).
It would take an average of 3 months to reconstruct.
So I can still burn DVDs for a while. When that is no longer possible, I'll move to clusters.
Moving to clusters, I'll just need good hardware.
I don't envision anything as insane as a multi-terabyte RAID.
What would the budget be to recover a multi-terabyte array?
Don't answer, this is my pension plan.
