Our success is based on simplicity: software RAID on direct-attached disks 
with no add-on cards (i.e. make sure the motherboards have Intel PRO/1000 NICs, 
at least 6 SATA ports, reliable CPUs, etc.).

Our first-generation gear consisted of a Supermicro motherboard, 2GB of memory, 
a single dual-core Intel CPU and 6x750GB direct-attached disks in a white-box 
chassis running software RAID 5.  That was over 3.5 years ago, and it will 
actually be decommissioned tomorrow.
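
For anyone curious, a minimal sketch (my illustration, not our actual 
provisioning scripts) of how a 6-disk software RAID 5 array like that 
first-generation setup might be assembled, driving mdadm from Python; the 
device names and the /dev/md0 target are assumptions:

import subprocess

# Six direct-attached SATA disks (hypothetical device names; adjust to suit).
devices = ["/dev/sd%s" % c for c in "bcdefg"]

# Create a software RAID 5 array across all six disks with mdadm (needs root).
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=5",
     "--raid-devices=%d" % len(devices)] + devices,
    check=True,
)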

The 2nd generation were the same boxes, just with the latest Supermicro motherboard.

The 3rd generation were SGI XE250s with 8x1TB direct-attached disks running 
software RAID 5.

The 4th generation are SGI/Rackable systems with 12x2TB disks on an LSI/3ware 
hardware RAID 6 card.

We absolutely hammer our file system, and it has stood the test of time.  I 
think our latest gear went in for ~$420/TB.
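
As a rough sanity check on that figure (my own arithmetic; it assumes the 
~$420 is per usable TB and that RAID 6 costs two disks of parity on a 12x2TB 
node):

# Back-of-envelope cost per 4th-generation OSS node (assumptions as above).
disks, disk_tb, parity_disks = 12, 2, 2
usable_tb = (disks - parity_disks) * disk_tb    # 20 TB usable per node
approx_node_cost = usable_tb * 420              # roughly $8,400 per node
print(usable_tb, approx_node_cost)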


-- 
Dr Stuart Midgley
sdm...@gmail.com



On 23/04/2010, at 23:17 , Troy Benjegerdes wrote:

> Taking a break from my current non-computer-related work...
> 
> My guess based on your success is that your gear is not so much cheap as
> *cost-effective, high-MTBF commodity parts*.
> 
> If you go for the absolute bargain basement stuff, you'll have problems
> as individual components flake out. 
> 
> If you spend way too much money on high-end multi-redundant whizbangs,
> you generally get two things: redundancy, which in my mind often only
> serves to make the eventual failure worse, and high-quality, long-MTBF
> components.
> 
> If you can get the high-MTBF components without all the redundancy
> (and the associated complexity nightmare), then you win.
> 
> 
> On Fri, Apr 23, 2010 at 05:42:30PM +0800, Stu Midgley wrote:
>> We run Lustre on cheap off-the-shelf gear.  We have 4 generations of
>> cheapish gear in a single 300TB Lustre config (40 OSSs).
>> 
>> It has been running very very well for about 3.5 years now.
>> 
>> 
>>> Would Lustre have issues if using cheap off-the-shelf components, or
>>> do people here think you need high-end machines with built-in
>>> redundancy for everything?

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
