Re: [Gluster-users] Interesting experiment

2009-08-20 Thread Moore, Michael
Part of the performance loss is that you cannot get the full 4 Gbit of 
bandwidth between 2 hosts.  Usually you are limited to the throughput of a 
single link between 2 hosts.  And if you are using a round-robin method of 
bonding, then you run into performance losses due to TCP packets arriving out 
of order.
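A quick sketch of why one TCP flow is pinned to one link under hash-based aggregation (802.3ad/LACP): the egress port is chosen from a hash of the flow tuple, so every packet of a connection rides the same physical link. The hash function and addresses below are illustrative, not what any particular switch does:

```python
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    # Hash the flow tuple and map it onto one of the bonded links.
    # Real hardware uses simpler XOR/CRC hashes; md5 is just for illustration.
    key = f"{src_ip}{dst_ip}{src_port}{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

# One flow between two hosts: every "packet" maps to the same link,
# so the flow can never use more than 1 Gbit of a 4 x 1 Gbit bond.
links = {pick_link("10.0.0.1", "10.0.0.2", 40000, 24007, 4) for _ in range(1000)}
print(len(links))  # 1 -- all packets of the flow share one link
```

Different flows (different ports or hosts) hash to different links, which is why aggregate throughput across many clients scales while a single transfer does not.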

- Mike

-----Original Message-----
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mickey Mazarick
Sent: Wednesday, August 19, 2009 4:19 PM
To: Nathan Stratton
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Interesting experiment

Just a note: we initially tried to set up our storage network with bonded 
4-port GigE connections per client and storage node, and it was still 
~1/3 the speed of InfiniBand.  There also appears to be more overhead in 
unwrapping data from packets, even with jumbo frames enabled.

We did see about a 50% increase in throughput with 2 bonded gig ports, 
but not the doubling you would expect. Make sure you use a 
trunking mechanism and not an active/passive configuration.
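For reference, on Linux the trunking vs. active/passive choice is the bonding driver's `mode`. A fragment along these lines (file path and option values are illustrative, check your distribution's docs):

```
# /etc/modprobe.d/bonding.conf (illustrative)
#   mode=802.3ad      -> LACP trunking: hashes flows across all links
#   mode=active-backup -> one link carries traffic; no throughput gain
#   mode=balance-rr   -> round-robin: one flow can exceed one link,
#                        but risks TCP reordering (see above)
options bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
```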

-Mic


Nathan Stratton wrote:
 On Wed, 19 Aug 2009, Hiren Joshi wrote:

 Is it worth bonding? This looks like I'm maxing out the network
 connection.

 Yes, but you should also check out InfiniBand.

 http://www.robotics.net/2009/07/30/infiniband/


 
 Nathan Stratton                 CTO, BlinkMind, Inc.
 nathan at robotics.net          nathan at blinkmind.com
 http://www.robotics.net         http://www.blinkmind.com
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Interesting experiment

2009-08-19 Thread Hiren Joshi
 

 -----Original Message-----
 From: Liam Slusser [mailto:lslus...@gmail.com] 
 Sent: 18 August 2009 18:51
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Interesting experiment
 
 On Tue, Aug 18, 2009 at 3:05 AM, Hiren 
 Joshi j...@moonfruit.com wrote:
  Hi,
 
  Ok, the basic setup is 6 bricks per server, 2 servers. 
 Mirror the six
  bricks and DHT them.
 
  I'm running three tests, dd 1G of zeros to the gluster 
 mount, dd 1000
  100k files and dd 1000 1M files.
 
  With 3M write-behind I get:
  0m35.460s for 1G file
  0m52.427s for 100k files
  1m37.209s for 1M files
 
  Then I added a 400M external journal to all the bricks, the 
 twist being
  the journals were made on a RAM drive
 
  Running the same tests:
  0m33.614s for 1G file
  0m52.851s for 100k files
  1m31.693s for 1M files
 
 
  So why is it that adding an external journal (in RAM!) 
 seems to make
  no difference at all?
 
 I would imagine that most of your bottleneck is the network and
 not the disks.  Modern RAID disk storage systems are much quicker than
 gigabit Ethernet.
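A back-of-the-envelope check of the dd timings above against gigabit wire speed supports this. This is a rough sketch: it assumes the mirrored setup means the client pushes each byte to both servers, doubling traffic on its NIC:

```python
# Sanity check: how close are the dd timings to the gigabit ceiling?
GIB = 1024 ** 3
elapsed = 35.46                       # seconds for the 1 GiB dd, from above
app_rate = GIB / elapsed / 2**20      # MiB/s as seen by dd
wire_rate = app_rate * 2              # MiB/s on the client NIC (replica 2)
gigabit = 1e9 / 8 / 2**20             # ~119 MiB/s theoretical payload ceiling

print(f"dd: {app_rate:.1f} MiB/s, wire: {wire_rate:.1f} MiB/s "
      f"of ~{gigabit:.0f} MiB/s")
```

That puts the replicated write at roughly half of what a single gigabit link can carry, so the network (plus protocol overhead) is a plausible ceiling and the RAM journal never gets a chance to matter.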

You're right, the RAID gives me great (SSD-type) performance!

This is interesting: I'm on a gigabit network and it looks like it's
maxing out

when I dd a 1Gig file:
about 18 kbits/sec
When I dd 1000 1M files:
about 8 kbits/sec

Is it worth bonding? This looks like I'm maxing out the network
connection.

 
 liam
 


Re: [Gluster-users] Interesting experiment

2009-08-19 Thread Mickey Mazarick
Just a note: we initially tried to set up our storage network with bonded 
4-port GigE connections per client and storage node, and it was still 
~1/3 the speed of InfiniBand.  There also appears to be more overhead in 
unwrapping data from packets, even with jumbo frames enabled.


We did see about a 50% increase in throughput with 2 bonded gig ports, 
but not the doubling you would expect. Make sure you use a 
trunking mechanism and not an active/passive configuration.


-Mic


Nathan Stratton wrote:

On Wed, 19 Aug 2009, Hiren Joshi wrote:


Is it worth bonding? This looks like I'm maxing out the network
connection.


Yes, but you should also check out InfiniBand.

http://www.robotics.net/2009/07/30/infiniband/





Nathan Stratton                 CTO, BlinkMind, Inc.
nathan at robotics.net          nathan at blinkmind.com
http://www.robotics.net         http://www.blinkmind.com


Re: [Gluster-users] Interesting experiment

2009-08-18 Thread Liam Slusser
On Tue, Aug 18, 2009 at 3:05 AM, Hiren Joshi j...@moonfruit.com wrote:
 Hi,

 Ok, the basic setup is 6 bricks per server, 2 servers. Mirror the six
 bricks and DHT them.

 I'm running three tests, dd 1G of zeros to the gluster mount, dd 1000
 100k files and dd 1000 1M files.

 With 3M write-behind I get:
 0m35.460s for 1G file
 0m52.427s for 100k files
 1m37.209s for 1M files

 Then I added a 400M external journal to all the bricks, the twist being
 the journals were made on a RAM drive

 Running the same tests:
 0m33.614s for 1G file
 0m52.851s for 100k files
 1m31.693s for 1M files


 So why is it that adding an external journal (in RAM!) seems to make
 no difference at all?

I would imagine that most of your bottleneck is the network and
not the disks.  Modern RAID disk storage systems are much quicker than
gigabit Ethernet.
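One way to see why the RAM journal made no measurable difference: when journal writes overlap with network transfer, total time is governed by the slowest stage. A simplified pipeline model (illustrative numbers, not measurements):

```python
# Simplified model: journal writes overlap with the network transfer,
# so total time is roughly the slower of the two stages. Shrinking the
# journal stage below the network stage leaves the total unchanged.
def write_time(network_s, journal_s):
    return max(network_s, journal_s)

disk_journal = write_time(network_s=35.0, journal_s=5.0)   # network dominates
ram_journal  = write_time(network_s=35.0, journal_s=0.1)   # still 35.0
print(disk_journal, ram_journal)  # both 35.0
```

Only once the network stage drops below the journal stage (e.g. after moving to InfiniBand or bonding) would a faster journal start to show up in the timings.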

liam