"Tristan Ball" <tristan.b...@leica-microsystems.com> writes:

> According to the link below, VMware will only use a single TCP session
> for NFS data, which means you're unlikely to get it to travel down more
> than one interface on the VMware side, even if you can find a way to do
> it on the solaris side.
>
> http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html

Please read that page carefully; it covers LACP/trunking.
If you want to use more than 1 Gbit, you can do it, for example, this way:
- a solaris host with four 1 Gbit interfaces
- aggregate these interfaces into aggr1
- enable a trunk or LACP on your switch for these ports
- assign *4 addresses* (or at least more than one) to this aggr1
- set up your vmkernel network with, for example, 4 NICs
- check the network load-balancing setting in esx
- enable a trunk or LACP on your switch for these ports
-- important: for esxi use a static trunk, not LACP, because esxi is not
   capable of doing LACP
- now mount the datastores this way:
  - host ipaddr1, share /share1
  - host ipaddr2, share /share2
  - host ipaddr3, share /share3
  - host ipaddr4, share /share4
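A minimal sketch of the commands involved (interface names, addresses,
pool/share paths and datastore labels are all examples, not taken from
the thread; the dladm syntax is the Solaris 10-era one -- adjust to
your setup):

```shell
# --- Solaris side (as root; e1000g0..3 are example 1 Gbit NICs) ---
# Aggregate the four interfaces into aggr1 with LACP active mode:
dladm create-aggr -l active -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1

# Plumb aggr1 and give it four addresses (placeholders):
ifconfig aggr1 plumb 192.168.10.1/24 up
ifconfig aggr1 addif 192.168.10.2/24 up
ifconfig aggr1 addif 192.168.10.3/24 up
ifconfig aggr1 addif 192.168.10.4/24 up

# Share the four filesystems over NFS (tank is an example pool):
zfs set sharenfs=on tank/share1
zfs set sharenfs=on tank/share2
zfs set sharenfs=on tank/share3
zfs set sharenfs=on tank/share4

# --- ESX(i) side: mount each datastore through a different address,
#     so each one gets its own TCP session and thus its own link ---
esxcfg-nas -a -o 192.168.10.1 -s /tank/share1 ds1
esxcfg-nas -a -o 192.168.10.2 -s /tank/share2 ds2
esxcfg-nas -a -o 192.168.10.3 -s /tank/share3 ds3
esxcfg-nas -a -o 192.168.10.4 -s /tank/share4 ds4
```

The point of the four addresses is that each mount becomes a distinct
source/destination pair, so the switch hash can spread them over
different member links.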
voila:
- you have 4 TCP sessions
- you can get 1 Gbit per datastore.

Now, before arguing that this is too slow and you just want the full
4 Gbit for a single /share: the answer is no, that is not possible.
Why: read the 802.3ad specification.
LACP uses only one link per MAC pair, so a single TCP session never
spans more than one physical link.
If you need more than 1 Gbit per host (per TCP session), you have to
move your network environment to 10 Gbit Ethernet.

example:
10GbE host <1 10Gbit cable> switchA <4*1Gbit LACP trunk> switchB <1 10Gbit
cable> 10GbE host

the maximum possible speed between these two hosts will be 1 Gbit.
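The hashing behaviour behind this can be sketched with a toy model (a
simplification I'm adding for illustration -- real switches hash on
MACs, IPs or ports depending on configuration, but the key property is
the same: every frame of one conversation maps to the SAME member
link):

```shell
# lacp_link SRC_BYTE DST_BYTE NUM_LINKS -> member link index.
# Models a typical 802.3ad distribution function: XOR of the last
# byte of source and destination MAC, modulo the number of links.
lacp_link() {
    echo $(( ( $1 ^ $2 ) % $3 ))
}

# Two hosts whose MACs end in 0x01 and 0x02, on a four-link trunk:
lacp_link 0x01 0x02 4    # prints 3 -- and it is 3 for *every* frame
                         # between this pair, so only one of the four
                         # 1 Gbit links ever carries their traffic
```

That is why adding links to the trunk raises aggregate throughput
across many conversations, but never the throughput of a single one.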


>
> T
>
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brent Jones
> Sent: Thursday, 2 July 2009 12:58 PM
> To: HUGE | David Stahl
> Cc: Steve Madden; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
>
> On Wed, Jul 1, 2009 at 7:29 PM, HUGE | David Stahl<dst...@hugeinc.com>
> wrote:
>> The real benefit of the of using a separate zvol for each vm is the
>> instantaneous cloning of a machine, and the clone will take almost no
>> additional space initially. In our case we build a template VM and
> then
>> provision our development machines from this.
>> However the limit of 32 nfs mounts per esx machine is kind of a
> bummer.
>>
>>
>> -----Original Message-----
>> From: zfs-discuss-boun...@opensolaris.org on behalf of Steve Madden
>> Sent: Wed 7/1/2009 8:46 PM
>> To: zfs-discuss@opensolaris.org
>> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
>>
>> Why the use of zvols, why not just;
>>
>> zfs create my_pool/group1
>> zfs create my_pool/group1/vm1
>> zfs create my_pool/group1/vm2
>>
>> and export my_pool/group1
>>
>> If you don't want the people in group1 to see vm2 anymore just zfs
> rename it
>> to a different group.
>>
>> I'll admit I am coming into this green - but if you're not doing
> iscsi, why
>> zvols?
>>
>> SM.
>> --
>> This message posted from opensolaris.org
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
> Is there a supported way to multipath NFS? That's one benefit of iSCSI:
> VMware can multipath to a target to get more speed/HA...

-- 
disy Informationssysteme GmbH
Daniel Priem
Network and Systems Administrator
Tel: +49 721 1 600 6000, Fax: -605, E-Mail: daniel.pr...@disy.net

Discover "Lösungen mit Köpfchen"
on our new website: www.disy.net

Registered office: Erbprinzenstr. 4-12, 76133 Karlsruhe
Commercial register: Amtsgericht Mannheim, HRB 107964
Managing director: Claus Hofmann

-----------------------------
Environment . Reporting . GIS
-----------------------------

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss