Like so many things, it depends on your situation.  1GbE (and to a
lesser extent 10GbE and 40GbE) still has higher latencies over iSCSI
than a local disk (I don't personally know much about FCoE).  If you can
tolerate those latencies, it's fine.  If you need low-latency behavior,
you still need to look at direct-attach (e.g. SAS) or SAN (mostly FC
today).
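
If you want to put rough numbers on that for your own setup, here's a
quick sketch (mine, nothing official) that times single-threaded 4 KiB
O_DIRECT reads against a block device, so you can compare a local disk
to an iSCSI LUN directly.  It assumes Linux and Python 3.7+ (for
os.preadv); the device paths you pass it are just placeholders.

#!/usr/bin/env python3
# Rough per-read latency probe: single-threaded 4 KiB O_DIRECT reads
# against a block device.  Run it as root against an idle device.
import mmap
import os
import random
import sys
import time

BLOCK = 4096      # read size; must be a multiple of the sector size
SAMPLES = 1000    # number of timed reads per device

def probe(device):
    fd = os.open(device, os.O_RDONLY | os.O_DIRECT)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)   # mmap is page-aligned, as O_DIRECT needs
        latencies = []
        for _ in range(SAMPLES):
            offset = random.randrange(size // BLOCK) * BLOCK
            start = time.perf_counter()
            os.preadv(fd, [buf], offset)
            latencies.append(time.perf_counter() - start)
        latencies.sort()
        return latencies
    finally:
        os.close(fd)

if __name__ == "__main__":
    for dev in sys.argv[1:]:   # e.g. /dev/sda /dev/sdX (iSCSI LUN)
        lat = probe(dev)
        median = lat[len(lat) // 2] * 1e6
        p99 = lat[int(len(lat) * 0.99)] * 1e6
        print("%s: median %.0f us, p99 %.0f us" % (dev, median, p99))

Run it against both devices and the gap should speak for itself.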

As an aside, we're hearing rumblings at work from some of our vendors
about InfiniBand making a play in the enterprise SAN market, competing
with FC.  We use it mostly for low latency inter-process communication,
but it can also do storage traffic (via SRP or iSER).  And since we're
talking 56 Gb/s today, with 100+ coming soon, and one-way latencies
easily under 5 us (depending on payload size, of course), it's pretty
compelling.

But especially for a home-scale solution, I agree with you.  1GbE and
iSCSI make for a pretty nice, cheap solution.  Been meaning to do that
one of these days.  I'm not certain of the pros/cons of iSCSI for
Linux/BSD installs vs. an NFSroot solution, though.

Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu

On 09/27/2013 10:32 AM, John Nielsen wrote:
> FCoE and iSCSI are both technologies that blur the line between SAN and NAS. 
> They allow one host to access a disk or volume on another host as if it were 
> directly connected. Fibre Channel is a traditional SAN technology that 
> required a dedicated (and expensive) network. I only used it a few years ago 
> and 4Gbps was considered shiny. Now with 1Gbps Ethernet practically free and 
> 10Gbps Ethernet available it makes less sense to have a dedicated SAN network.
