Hi,

On Saturday 11 May 2013 16:22:15 Dimitri Maziuk wrote:
> SuperMicro has a new 4U chassis w/ 72x3.5" drives (2/canister). You can
> double the number of drives. (With faster drives you may be getting
> close to choking the expander backplane, though.)
Just checked their site and those are awesome. I had not run into them before 
because they are not yet available / advertised in the Netherlands, but 
requesting a quote from a distributor would probably still be possible.

As for choking the backplane: That would just slow things down a bit, am I 
right?
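
My back-of-envelope reasoning (all numbers below are my own assumptions: one 
4-lane 6 Gb/s SAS uplink per expander and roughly 150 MB/s sequential per drive):

  # Rough arithmetic: can 72 drives saturate one SAS expander uplink?
  # Both figures are assumptions, not specs for this particular chassis.
  drives = 72
  per_drive_mb_s = 150          # sequential throughput of a 7200 rpm SATA disk
  uplink_mb_s = 4 * 6000 / 10   # 4 lanes x 6 Gb/s, 8b/10b encoding -> ~2400 MB/s

  print("drives total: %d MB/s" % (drives * per_drive_mb_s))   # ~10800 MB/s
  print("uplink:       %d MB/s" % uplink_mb_s)                 # ~2400 MB/s

So purely sequential streaming would top out around the uplink figure, but 
nothing breaks, and random I/O would not get anywhere near it anyway.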

The plan is to write some management scripting, so that I do not have to keep 
all the disks in the cluster all the time. When storage grows, the script will 
add disks / OSDs to the cluster; the unused disks will be in stand-by mode / 
spun down. Probably well before the last disks are put into the cluster I 
should be considering re-investment and adding servers anyway.
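
A very rough sketch of the kind of script I have in mind (Python; the device 
names, paths and the 80% threshold are placeholders, and hdparm / ceph-disk do 
the real work):

  #!/usr/bin/env python
  # Sketch: keep spare drives spun down and prepare the next one as an
  # OSD once the existing OSDs pass a usage threshold. All names and
  # numbers here are placeholders, not a tested implementation.
  import glob
  import os
  import subprocess

  SPARE_DISKS = ["/dev/sdx", "/dev/sdy"]      # hypothetical spare devices
  OSD_DATA_GLOB = "/var/lib/ceph/osd/ceph-*"  # default OSD data mount points
  FULL_THRESHOLD = 0.80                       # grow the cluster past 80% used

  def osd_usage():
      # average fill level of the locally mounted OSD data partitions
      levels = []
      for path in glob.glob(OSD_DATA_GLOB):
          st = os.statvfs(path)
          levels.append(1.0 - float(st.f_bavail) / st.f_blocks)
      return sum(levels) / len(levels) if levels else 0.0

  def spin_down(dev):
      # put an unused drive into standby immediately
      subprocess.check_call(["hdparm", "-y", dev])

  def add_osd(dev):
      # prepare the drive as a new OSD (activation / CRUSH placement omitted)
      subprocess.check_call(["ceph-disk", "prepare", dev])

  if __name__ == "__main__":
      if SPARE_DISKS and osd_usage() > FULL_THRESHOLD:
          add_osd(SPARE_DISKS.pop(0))
      for dev in SPARE_DISKS:
          spin_down(dev)

Something like that would run from cron on each storage node; the real version 
obviously needs locking, logging and a check that the new OSD actually came up.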

> WD 3+TB drives don't have the option to turn off "advanced format" or
> whatever it's called: the part where they lie to the OS about sector
> size because they ran out of bits for some other counter (will they ever
> learn). In my tests iostat shows 10x i/o wait on "desktop" wd drives
> compared to seagates. Aligning partitions to 4096, 16384, or any other
> sector boundary didn't seem to make any difference.

I did not know that. Do you have any references? Does this also apply to the 
enterprise disks?
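
(For what it's worth, checking what a drive actually reports to the kernel is 
easy via sysfs; a quick sketch, with the device name just an example:)

  # Print logical vs. physical sector size as reported to the kernel.
  # A 4K drive that misreports shows 512 for both; a well-behaved 512e
  # drive shows logical 512 / physical 4096. "sda" is just an example.
  DEV = "sda"
  for attr in ("logical_block_size", "physical_block_size"):
      path = "/sys/block/%s/queue/%s" % (DEV, attr)
      print("%s: %s" % (attr, open(path).read().strip()))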

> So we quit buying wds. Consider seagates, they go to 4TB in both
> "enterprise" and desktop lines, too.
Pricing is about the same, so why not?

Another question: do you use desktop or enterprise disks in your cluster? I am 
having trouble finding MTBF figures for desktop drives, and when I do find them 
they are almost the same as for the enterprise drives. Is there a caveat in 
there? Is the failure testing done under different conditions? (Not that you 
have to know that.)

Even if the annual failure rate were double, it would still be cheaper to use 
desktop drives in a large cluster, but I would like to know for sure.
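
Back-of-envelope, with made-up prices and failure rates just to illustrate 
that reasoning:

  # Toy comparison of expected cost per drive per year. The prices, the
  # 3% / 6% AFRs and the 4-year amortisation are assumptions for
  # illustration only, not quotes or measured numbers.
  def annual_cost(price_eur, afr):
      return price_eur / 4.0 + afr * price_eur   # amortisation + expected replacements

  print("desktop:    %.2f EUR/year" % annual_cost(150.0, 0.06))
  print("enterprise: %.2f EUR/year" % annual_cost(300.0, 0.03))

Even at double the failure rate the desktop drive wins in this toy model, 
though it ignores rebuild traffic and the time spent swapping drives.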

Thanks and regards,

Tim
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
