Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
 Not true. There are different ways that a storage array and its 
controllers connect to the host-visible front-end ports, which might be 
confusing the author, but I/O isn't duplicated as he suggests.


On 4/4/2010 9:55 PM, Brad wrote:

> I had always thought that with MPxIO, it load-balances I/O requests
> across your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are
> 10 bytes long –) per port. As load balancing software (Powerpath,
> MPXIO, DMP, etc.) are most of the times used both for redundancy and
> load balancing, I/Os coming from a host can take advantage of an
> aggregated bandwidth of two ports. However, reads can use only one
> path, but writes are duplicated, i.e. a host write ends up as one
> write on each host port."
>
> Is this true?



Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Bob Friesenhahn

On Sun, 4 Apr 2010, Brad wrote:

> I had always thought that with MPxIO, it load-balances I/O requests
> across your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are
> 10 bytes long –) per port. As load balancing software (Powerpath,
> MPXIO, DMP, etc.) are most of the times used both for redundancy and
> load balancing, I/Os coming from a host can take advantage of an
> aggregated bandwidth of two ports. However, reads can use only one
> path, but writes are duplicated, i.e. a host write ends up as one
> write on each host port."
>
> Is this true?


This text seems strange and wrong, since duplicating writes would 
result in duplicate writes to the disks, which could cause corruption 
if the ordering was not perfectly preserved.  Depending on the storage 
array's capabilities, MPxIO can use different strategies.  A common 
strategy is active/standby at the per-LUN level.  Even with 
active/standby, effective load sharing is possible if the storage 
array can be told which port each LUN should prefer.  That is what I 
have done with my own setup: half the LUNs prefer one port and half 
the other, so that with all paths functional, the FC traffic is 
similar on each FC link.
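
For anyone who wants to see what their own paths are doing, mpathadm 
reports a per-path access state for each LUN (the device name below is 
just a placeholder; see mpathadm(1M)):

   # mpathadm list lu
   # mpathadm show lu /dev/rdsk/cXtYYYYYYYYd0s2

The 'show lu' output lists an Access State of active or standby for 
each path, so you can confirm which port a given LUN is actually 
favoring.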


--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Brad
I'm wondering if the author is talking about cache mirroring, where the 
cache is mirrored between both controllers.  If that is the case, is he 
saying that for every write to the active controller, a second write is 
issued on the passive controller to keep the cache mirrored?


Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
 The author mentions multipathing software in the blog entry. Kind of 
hard to mix that up with cache mirroring if you ask me.


On 4/5/2010 9:16 PM, Brad wrote:

> I'm wondering if the author is talking about cache mirroring, where
> the cache is mirrored between both controllers.  If that is the case,
> is he saying that for every write to the active controller, a second
> write is issued on the passive controller to keep the cache mirrored?



Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Tim Cook
On Mon, Apr 5, 2010 at 8:16 PM, Brad bene...@yahoo.com wrote:

> I'm wondering if the author is talking about cache mirroring, where
> the cache is mirrored between both controllers.  If that is the case,
> is he saying that for every write to the active controller, a second
> write is issued on the passive controller to keep the cache mirrored?


He's talking about multipathing; he just has no clue what he's talking 
about.  He specifically calls out software packages that are used for 
multipathing.

--Tim


[zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-04 Thread Brad
I had always thought that with MPxIO, it load-balances I/O requests 
across your storage ports, but this article 
http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ 
has got me thinking it's not true.

"The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are 
10 bytes long –) per port. As load balancing software (Powerpath, 
MPXIO, DMP, etc.) are most of the times used both for redundancy and 
load balancing, I/Os coming from a host can take advantage of an 
aggregated bandwidth of two ports. However, reads can use only one 
path, but writes are duplicated, i.e. a host write ends up as one 
write on each host port."

Is this true?
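
(As a sanity check on the bandwidth figures, assuming the 8b/10b 
encoding that 1/2/4G FC uses, each byte costs 10 bits on the wire:

   2 Gb/s / 10 bits per encoded byte = 200 MB/s per port
   4 Gb/s / 10 bits per encoded byte = 400 MB/s per port

So the throughput numbers look right; it's the write-duplication claim 
that seems odd to me.)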


Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-04 Thread Tim Cook
On Sun, Apr 4, 2010 at 8:55 PM, Brad bene...@yahoo.com wrote:

> I had always thought that with MPxIO, it load-balances I/O requests
> across your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are
> 10 bytes long –) per port. As load balancing software (Powerpath,
> MPXIO, DMP, etc.) are most of the times used both for redundancy and
> load balancing, I/Os coming from a host can take advantage of an
> aggregated bandwidth of two ports. However, reads can use only one
> path, but writes are duplicated, i.e. a host write ends up as one
> write on each host port."
>
> Is this true?



I have no idea what MPIO stack he's talking about, but I've never heard 
of anything operating the way he describes.  Writes aren't duplicated 
on each port.  The path a read OR write goes down depends on the 
host-side MPIO stack and how you have it configured to load-balance.  
It could be simple round-robin, it could be based on queue depth, it 
could be most recently used, and so on.
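
On Solaris, for example, the MPxIO policy is set in 
/kernel/drv/scsi_vhci.conf (values from memory; check scsi_vhci(7D) 
before relying on this):

   # "round-robin" rotates I/O across all active paths,
   # "logical-block" ties LBA ranges to particular paths,
   # "none" disables load balancing.
   load-balance="round-robin";

The setting takes effect on the next reboot.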

--Tim