On Nov 16, 2010, at 4:04 PM, Tim Cook <t...@cook.ms> wrote:

> 
> 
> On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin <car...@ivy.net> wrote:
> >>>>> "tc" == Tim Cook <t...@cook.ms> writes:
> 
>    tc> Channeling Ethernet will not make it any faster. Each
>    tc> individual connection will be limited to 1gbit.  iSCSI with
>    tc> mpxio may work, nfs will not.
> 
> well...probably you will run into this problem, but it's not
> necessarily totally unsolved.
> 
> I am just regurgitating this list again, but:
> 
>  need to include L4 port number in the hash:
>  
> http://www.cisco.com/en/US/products/ps9336/products_tech_note09186a0080a963a9.shtml#eclb
>  port-channel load-balance mixed  -- for L2 etherchannels
>  mls ip cef load-sharing full     -- for L3 routing (OSPF ECMP)
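
On a Cat6500-class IOS box the config side of that is roughly the
following -- a sketch based on the doc above, not verified on any
particular platform or IOS release:

  conf t
   port-channel load-balance mixed
   mls ip cef load-sharing full
  end
  show etherchannel load-balance    <- check which hash is in use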
> 
>  nexus makes all this more complicated.  there are a few ways that
>  seem they'd be able to accomplish ECMP:
>   FTag flow markers in ``FabricPath'' L2 forwarding
>   LISP
>   MPLS
>  the basic scheme is that the L4 hash is performed only by the edge
>  router and used to calculate a label.  The routing protocol will
>  either do per-hop ECMP (FabricPath / IS-IS) or possibly some kind of
>  per-entire-path ECMP for LISP and MPLS.  unfortunately I don't
>  understand these tools well enough to lead you further, but if
>  you're not using infiniband and want to do >10way ECMP this is
>  probably where you need to look.
> 
>  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6817942
>  feature added in snv_117: NFS client traffic can be spread over
>  multiple TCP connections when rpcmod:clnt_max_conns is set to a
>  value > 1.  However, even though the server is free to return data
>  on different connections, it does not seem to choose to actually do
>  so -- 6696163, fixed in snv_117.
> 
>  nfs:nfs3_max_threads=32
>  in /etc/system, which raises the default of 8 async threads per
>  mount to 32.  This is especially helpful for NFS over 10GbE and on
>  sun4v.
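
Both client-side tunables live in /etc/system on the NFS client, so
something like this (my reading of the syntax; the connection count is
arbitrary as long as it's > 1, and a reboot is needed for the settings
to take effect):

  * spread client RPC traffic over several TCP connections
  set rpcmod:clnt_max_conns = 8
  * raise async threads per NFS mount from the default 8 to 32
  set nfs:nfs3_max_threads = 32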
> 
>  this stuff gets your NFS traffic onto multiple TCP circuits, which
>  is the same thing iSCSI multipath would accomplish.  From there, you
>  still need to do the cisco/??? stuff above to get TCP circuits
>  spread across physical paths.
> 
>  
> http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html
>    -- suspect.  it advises ``just buy 10gig'' but many other places
>       say 10G NICs don't perform well in real multi-core machines
>       unless you have at least as many TCP streams as cores, which is
>       honestly kind of obvious.  lego-netadmin bias.
> 
> 
> 
> AFAIK, ESX/ESXi doesn't support an L4 hash, so that's a non-starter.

For iSCSI one just needs a second (third, or fourth...) iSCSI session to the
target on a different IP, and then to run MPIO/mpxio/mpath, whatever your OS
calls multipathing.
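
On Solaris that looks something like the following (from memory, and the
portal addresses are made up -- check iscsiadm(1M) and stmsboot(1M) for
your release):

  # enable MPxIO for the iSCSI initiator (prompts for a reboot)
  stmsboot -D iscsi -e

  # point the initiator at both portals of the same target
  iscsiadm add discovery-address 192.168.10.1
  iscsiadm add discovery-address 192.168.20.1
  iscsiadm modify discovery --sendtargets enable

  # confirm both sessions collapse into one multipathed LUN
  mpathadm list lu

Each discovery address gives you its own TCP session, and MPxIO presents
the paths as a single LUN, so I/O gets spread across both links.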

-Ross
