I'd like to open a brief discussion on iSCSI.  Recently, we've had two 
vendors tell us to abandon iSCSI.  We're not using it extensively -- just 
investigating it for possible use in certain applications in the data 
center and remote offices (primarily remote offices).

Cisco, naturally, is on the "iSCSI is dead, long live FCoE" bandwagon -- 
hell, they're driving it.  To be honest, I'm not sure I see much value in 
FCoE (or, more specifically, DCE) in existing data centers.  Upgrading 
is still expensive, between adding (or switching over to) Nexus 
switches and putting CNA cards in every host.

We also recently spoke to IBM, and they told us that they have not seen 
any iSCSI implementations in production in the field (granted, this was 
one local sales engineer).

We're going to be meeting with Cisco in a few weeks, and they're going to 
really push their data center strategy.

In the meantime, we're exploring iSCSI.  We've got a lot of VMware, 
which doesn't handle iSCSI as nicely as I'd like.  It will fail over 
between paths, but it doesn't load-balance well across multiple links, 
which makes 10G DCE look attractive.
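For what it's worth, you can get something closer to active/active out 
of the software iSCSI initiator by binding multiple VMkernel ports to 
it and switching the path policy to Round Robin.  A rough sketch using 
the vSphere 4.x command syntax (the vmk/vmhba names and the naa device 
ID below are placeholders for whatever your setup actually has):

```shell
# Bind two VMkernel NICs to the software iSCSI adapter so each
# link shows up as a separate path (vSphere 4.x "esxcli swiscsi"
# syntax; vmk1/vmk2 and vmhba33 are placeholders).
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Change the LUN's path selection policy from the default (Fixed)
# to Round Robin so I/O actually spreads across both links.
# naa.600... is a placeholder for your LUN's device ID.
esxcli nmp device setpolicy --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR
```

It's still round-robin across paths rather than true link aggregation, 
so a single stream won't exceed one link, but it's better than pure 
failover.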

In terms of a small office, where we might deploy two machines running 
VMware with a single shelf of disks, we could attach those disks via 
FC, iSCSI, or even SAS.  In that case iSCSI is fine, since most 
machines come with multiple Ethernet ports, though of course it'd be 
slower than 4Gb Fibre Channel.

So, I'm just curious what other people are doing with iSCSI, and what your 
thoughts are on DCE/FCoE.  I think if you were building a data center from 
scratch, DCE might make some sense, though the CNA cards are still 
expensive.

-Adam
_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/