On 12/07/2013 09:47 PM, Edward Ned Harvey (blu) wrote:
From: discuss-bounces+blu=nedharvey....@blu.org [mailto:discuss-
bounces+blu=nedharvey....@blu.org] On Behalf Of Greg Rundlett
(freephile)

  I think it's pretty obvious why it's not performing: user home directories
(where developers compile) should not be NFS mounted. [1]  The source
repositories themselves should also not be stored on a NAS.

For high-performance hybrid distributed/monolithic environments, at a few companies I've used systems that were generally interchangeable clones of each other, but each one had a local /scratch (or /no-backup) directory, and each one could access the others via NFS automount at /scratches/machine1/ (or /no-backup-automount/machine1/).
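
The automount layout described above might be expressed with maps along these lines (a sketch only; the map file names, host names, and export paths here are assumptions, not the actual configuration):

```
# /etc/auto.master -- hang per-machine scratch dirs under /scratches
/scratches  /etc/auto.scratches

# /etc/auto.scratches -- one NFS entry per clone machine
machine1  -fstype=nfs,rw,noatime  machine1:/scratch
machine2  -fstype=nfs,rw,noatime  machine2:/scratch
```

With that in place, touching /scratches/machine1/anything triggers an on-demand NFS mount of machine1's local /scratch, and idle mounts expire automatically.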

If I had to do it over now, I would look at Ceph or Gluster to provide a unified namespace while leaving the underlying storage distributed.  But it would take quite a bit of experimentation and configuration to get the desired performance characteristics.  Autofs has the advantage of being simple to configure.

I highly recommend MooseFS instead of Ceph or Gluster:

http://www.moosefs.org/


A mistake I've seen IT folks (including myself, until I learned better) make over and over:  they use RAID5, RAID6, or RAID-DP believing they get redundancy plus performance, but when you benchmark different configurations, you find they only perform well for large sequential operations.  They perform like a single disk (sometimes worse) under small random I/O, which is unfortunately the norm.  I strongly recommend building your storage out of something closer to RAID-10.  It performs much, much better for random I/O, which is the typical case.
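
The random-write gap comes from the parity read-modify-write cycle.  A back-of-the-envelope comparison (a sketch; the 150 IOPS/spindle figure is an assumed ballpark for a 7200 rpm disk, not a measurement):

```python
# Rough random-write IOPS estimates for an n-disk array.
# disk_iops is an assumed per-spindle figure, not a benchmark result.

def raid5_write_iops(n_disks, disk_iops=150):
    # Each small random write is a read-modify-write:
    # read data + read parity + write data + write parity = 4 physical IOs.
    return n_disks * disk_iops / 4

def raid10_write_iops(n_disks, disk_iops=150):
    # Each write lands on a mirror pair: 2 physical writes.
    return n_disks * disk_iops / 2

print(raid5_write_iops(8))   # 300.0
print(raid10_write_iops(8))  # 600.0
```

Same eight spindles, roughly twice the random-write throughput for the mirrored layout; for large sequential streams the parity layouts close the gap, which is exactly why the benchmark numbers mislead people.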

Also:

A single disk's performance is about 1 Gbit/s, so you need to make your storage network much faster than that.  The next logical step up would be 10Gb Ethernet, but in terms of bang for the buck, you get a LOT more if you go to InfiniBand or Fibre Channel instead.
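
The arithmetic behind that is simple (a sketch, assuming ~125 MB/s, i.e. roughly 1 Gbit/s, of sequential throughput per spindle):

```python
# How many disks does it take to saturate a network link?
# DISK_MBPS is an assumed per-spindle sequential figure, not a measurement.

DISK_MBPS = 125  # ~1 Gbit/s per disk

def disks_to_saturate(link_gbits, disk_mbps=DISK_MBPS):
    # Convert link speed from Gbit/s to MB/s, then divide by per-disk rate.
    link_mbps = link_gbits * 1000 / 8
    return link_mbps / disk_mbps

print(disks_to_saturate(1))   # 1.0  -- a single disk fills gigabit Ethernet
print(disks_to_saturate(10))  # 10.0 -- 10GbE absorbs roughly ten disks
```

So a GigE storage network bottlenecks on the very first spindle, and even 10GbE is saturated by a modest array, which is the argument for the faster interconnects.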
_______________________________________________
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss
