Adam Levin wrote:

> I'd like to open a brief discussion on iSCSI.  Recently, we've had two 
> vendors tell us to abandon iSCSI.  We're not using it extensively -- just 
> investigating it for possible use in certain applications in the data 
> center and remote offices (primarily remote offices).
> 
> Cisco, naturally, is on the "iSCSI is dead, long live FCoE" bandwagon -- 
> hell, they're driving it.

We're being told the same.  However, all vendors we've talked to have 
grudgingly admitted that it's going to be at least a couple of years before 
FCoE actually arrives, and iSCSI can still be useful until then.

> In the meantime, we're exploring iSCSI.  We've got a lot of VMWare, which 
> doesn't handle iSCSI as nicely as I'd like.  They'll do failover, but they 
> don't load balance nicely over multiple links, which makes 10g DCE look 
> nice.

Today, for our VMware installations, all of that data is mounted from 
relatively low-end EMC arrays over 4Gb switched FibreChannel.  However, 
those EMC devices don't carry any additional software licenses, so we 
can't do de-duplication.  Moreover, EMC doesn't have a native 
de-duplication solution; they OEM one from someone else, and they recently 
changed which vendor they were OEM'ing from.  So, if you had bought the 
old de-dupe solution, you would now be screwed.  Finally, I think their 
de-dupe solution is oriented more towards backups than primary storage.

Contrariwise, NetApp does iSCSI out of the box (no additional licenses 
required), and their de-dupe solution was developed in-house, is 100% 
native, and is intended for primary storage while remaining useful for 
backups.  Multiple sources have told me that, running VMware on NetApp 
with de-dupe enabled, they have actually seen real-world reductions of two 
orders of magnitude in the amount of disk space required.
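To make the "two orders of magnitude" claim concrete, here's a toy sketch 
of block-level de-duplication -- hash fixed-size blocks and store each 
unique block once.  This is NOT NetApp's actual implementation (real 
arrays do this on 4KB filesystem blocks with far more machinery); it just 
shows why a farm of near-identical VM images dedupes so well.

```python
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Toy block-level de-dupe: split data into fixed-size blocks,
    keep one physical copy of each unique block, and report the
    logical-to-physical space ratio."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Hypothetical workload: 100 clones of the same 10-block guest image.
image = b"".join(bytes([i]) * 4096 for i in range(10))
farm = image * 100

print(dedupe_ratio(farm))   # 100.0 -- i.e. two orders of magnitude
```

With fully identical clones the ratio is exactly the clone count; real VMs 
diverge over time, so actual savings land somewhere below that ceiling.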

Of course GigE isn't as fast as 4Gb FibreChannel, and even 10GigE isn't 
necessarily that much faster in practice, even if the NIC includes a TCP 
Offload Engine (TOE).  And even with 10GigE NICs that have TOE, you still 
have to build a totally separate and parallel storage network; the only 
difference is that it's an IP-based storage network (with IP switches and 
routers) as opposed to FibreChannel (with FibreChannel switches, etc.).


Simply put, IP is a very, very heavy protocol on top of which to layer a 
highly latency-sensitive block storage protocol, and you have to throw a 
lot more hardware at it to get the same level of performance.
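A crude back-of-envelope illustration of that overhead: compare per-frame 
header tax for iSCSI over standard-MTU Ethernet against native 
FibreChannel framing.  The header sizes below are the standard ones; this 
is a worst-case assumption (one iSCSI header per Ethernet frame), and it 
understates the real cost, which is mostly TCP processing on the host 
CPU rather than wire efficiency.

```python
# Worst-case framing efficiency: iSCSI on 1500-byte-MTU Ethernet vs.
# native Fibre Channel.  Jumbo frames and multi-segment iSCSI PDUs
# improve the iSCSI side considerably in practice.
ETH = 14 + 4              # Ethernet header + FCS
IP = 20                   # IPv4 header, no options
TCP = 20                  # TCP header, no options
ISCSI = 48                # iSCSI Basic Header Segment
MTU = 1500

iscsi_payload = MTU - IP - TCP - ISCSI
iscsi_eff = iscsi_payload / (MTU + ETH)

FC_OVERHEAD = 24 + 4 + 8  # FC frame header + CRC + SOF/EOF delimiters
FC_PAYLOAD = 2112         # max FC data field

fc_eff = FC_PAYLOAD / (FC_PAYLOAD + FC_OVERHEAD)

print(f"iSCSI/Ethernet efficiency: {iscsi_eff:.0%}")  # ~93%
print(f"FC efficiency:             {fc_eff:.0%}")     # ~98%
```

A few percent of wire efficiency sounds trivial, but it's the per-packet 
interrupt and TCP/IP stack work behind those headers that makes IP the 
"heavy" transport for block storage.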

The same was true of Token Ring vs. Ethernet, and FDDI vs. 100Base-TX. 
But now we have GigE, which is definitely faster than FDDI, and 10GigE is 
faster still.

> So, I'm just curious what other people are doing with iSCSI, and what your 
> thoughts are on DCE/FCoE.  I think if you were building a data center from 
> scratch, DCE might make some sense, though the CNA cards are still 
> expensive.

We're in the process of building a new datacenter, and FCoE is the 
block-storage protocol we are designing towards.  But that's going to live 
side-by-side with iSCSI, and over here on the Unix side of the house we're 
most likely going to continue to use NetApps and NFS, since that brings us 
so many good benefits.

This is a multi-protocol world.  There is no one-size-fits-all solution.

-- 
Brad Knowles
<[email protected]>        If you like Jazz/R&B guitar, check out
LinkedIn Profile:                 my friend bigsbytracks on YouTube at
<http://tinyurl.com/y8kpxu>    http://preview.tinyurl.com/bigsbytracks
_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/