Do you have a spec you're working to meet?

The Bertmeister

On Oct 3, 2013, at 3:06 AM, Dan Egli <ddavide...@gmail.com> wrote:

> On Sept. 27, 2013 at 10:13 PM, Lloyd Brown wrote:
> 
>> Like so many things, it depends on your situation. 1GbE (and to a
>> lesser extent 10GbE and 40GbE) still have higher latencies over iSCSI
>> than a local disk (I don't personally know much about FCoE). If you can
> 
> 
> 
> Sorry for taking so long to reply to this. I wasn't able to carve out
> any e-mail time in the last few days, so I'm only now catching up. :(
> 
> 
> 
> The latencies don't need to be very low for this project. I was planning
> on implementing something similar at home (on a much smaller scale, of
> course) and was thinking I'd get near-local-disk performance (for the most
> part) by hooking all the computers to a 10GbE network. The machines would
> still PXE boot and load an NFS root, but since the NFS traffic would be
> piped over a 1.25 gigabyte-per-second (and yes, I know it's not REALLY that
> fast) connection, it should be nearly as fast as a local disk (provided the
> link isn't saturated). I figured that an SSD (the fastest storage
> available) connected to a SATA3 (6 Gb/s) controller would max out the
> controller reading from the disk before the network pipe was full, since
> the 10GbE network is about 66% wider than the controller's 6 Gb/s data
> pipeline.
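> 
> A rough back-of-the-envelope sketch of that comparison in Python (the
> 10 Gb/s and 6 Gb/s figures are just the nominal link and controller
> rates mentioned above, not measurements):
> 
>     # Compare the nominal 10GbE line rate against a SATA3 controller's 6 Gb/s.
>     link_gbps = 10.0    # 10GbE line rate, gigabits per second
>     sata3_gbps = 6.0    # SATA3 controller limit, gigabits per second
> 
>     link_gb_per_s = link_gbps / 8.0     # ~1.25 gigabytes per second
>     sata3_gb_per_s = sata3_gbps / 8.0   # ~0.75 gigabytes per second
> 
>     headroom = (link_gbps - sata3_gbps) / sata3_gbps
>     print(f"link: {link_gb_per_s:.2f} GB/s, SATA3: {sata3_gb_per_s:.2f} GB/s")
>     print(f"network headroom over the controller: {headroom:.0%}")  # ~67%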
> 
> 
> 
>> But especially for a home-scale solution, I agree with you. 1GbE and
>> iSCSI is a pretty nice and cheap solution. Been meaning to do that one,
>> one of these days. I'm not certain of the pros/cons of iSCSI for
>> Linux/BSD installs, vs. an NFSroot solution, though.
> 
> 
> 
> That was my thought. iSCSI sounds nice on the one hand, but at the same
> time it sounds a bit more complicated, and not quite as fast or easy as a
> home-scale NFSroot setup. And especially with the Federal Government in
> shutdown mode, I don't expect the guy I'm doing this for to be able to
> pull enough funding for a 10GbE or 40GbE setup. I fully expect them to
> simply use the 1GbE connections built into the motherboards. For my
> personal small network, I still think 10GbE would be best for speed,
> seeing as the network's ceiling is around 66% higher than the maximum
> data rate from the disk. Seems to me the largest actual sustained
> throughput on SATA3 that I've read about was around 550-600 MB/s. That
> was off one of Samsung's new turbo SSDs, I think (I'd have to go back and
> look to be sure). Even cutting 50% out of the network's speed, 10GbE
> becomes 5 Gb/s, or 625 MB/s, which is slightly faster than the SSD. Take
> an 8-second block: 5 Gb/s * 8 seconds = 40 gigabits = 5 gigabytes in
> those 8 seconds. Now, 600 MB/s * 8 seconds = 4800 MB, or 4.8 GB in the
> same 8 seconds. The network still has a 200 MB lead over the SSD's
> throughput across those 8 seconds (about 25 MB/s of headroom). In theory
> it sounds great. Now I need to test, or read up more, before making such
> an implementation decision and buying hardware or configuring and
> installing software.  :)
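> 
> The same arithmetic as a quick Python sketch (the 50% derating and the
> 600 MB/s sustained SSD figure are the assumptions above, not benchmarks):
> 
>     # Worked example of the 8-second comparison above.
>     net_mb_per_s = (10.0 * 0.5) * 1000 / 8   # 10GbE cut 50% -> 625 MB/s
>     ssd_mb_per_s = 600.0                     # assumed sustained SSD rate
> 
>     window = 8                               # seconds
>     net_total = net_mb_per_s * window        # 5000 MB over the network
>     ssd_total = ssd_mb_per_s * window        # 4800 MB off the SSD
> 
>     print(f"lead over {window} s: {net_total - ssd_total:.0f} MB")   # 200 MB
>     print(f"per-second headroom: {net_mb_per_s - ssd_mb_per_s:.0f} MB/s")  # 25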
> 
> 
> 
> Thanks for all the info, though! I may look into iSCSI or FCoE. It depends
> on how easy it would be to configure with discrete NICs vs. the onboard
> network adapter (unless someone knows of a good LGA1150 board with a 10GbE
> port instead of the usual 1GbE).
> 
> 
> 
> --- Dan
> 

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
