Re: [Gluster-users] GlusterFS Performance

2009-07-09 Thread Stephan von Krawczynski
On Thu, 9 Jul 2009 09:33:59 +0100
Hiren Joshi j...@moonfruit.com wrote:

  
 
> > -----Original Message-----
> > From: Stephan von Krawczynski [mailto:sk...@ithnet.com]
> > Sent: 09 July 2009 09:08
> > To: Liam Slusser
> > Cc: Hiren Joshi; gluster-users@gluster.org
> > Subject: Re: [Gluster-users] GlusterFS Performance
> >
> > On Wed, 8 Jul 2009 10:05:58 -0700
> > Liam Slusser lslus...@gmail.com wrote:
> >
> > > You have to remember that when you are writing with NFS you're
> > > writing to one node, whereas your gluster setup below is copying
> > > the same data to two nodes, so you're doubling the bandwidth.
> > > Don't expect NFS-like performance on writes with multiple storage
> > > bricks. However, read performance should be quite good.
> > > liam
> >
> > Do you think this problem can be solved by using 2 storage bricks
> > on two different network cards on the client?
>
> I'd be surprised if the bottleneck here was the network. I'm testing
> on a Xen network but I've only been given one eth per slice.

Do you mean your clients and servers are virtual Xen installations (on the
same physical box)?

Regards,
Stephan
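
[Editor's note] On the two-NICs question quoted above: each replica is its own
protocol/client subvolume, so one way to try it is to reach each brick through
an address that the client routes over a different interface. A minimal sketch
only, using made-up addresses for glust3 and glust4 and assuming the client
has two NICs on separate subnets:

  # hypothetical: 10.0.1.13 is routed via the client's first NIC,
  # 10.0.2.14 via the second (replace with your real addresses)
  volume glust3
    type protocol/client
    option transport-type tcp/client
    option remote-host 10.0.1.13
    option remote-port 6996
    option remote-subvolume brick
  end-volume

  volume glust4
    type protocol/client
    option transport-type tcp/client
    option remote-host 10.0.2.14
    option remote-port 6996
    option remote-subvolume brick
  end-volume

  volume mirror1
    type cluster/replicate
    subvolumes glust3 glust4
  end-volume

Note this only spreads the doubled write traffic across two links; every write
still crosses the wire twice.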




Re: [Gluster-users] GlusterFS Performance

2009-07-09 Thread Hiren Joshi
 

> -----Original Message-----
> From: Stephan von Krawczynski [mailto:sk...@ithnet.com]
> Sent: 09 July 2009 13:50
> To: Hiren Joshi
> Cc: Liam Slusser; gluster-users@gluster.org
> Subject: Re: [Gluster-users] GlusterFS Performance
>
> On Thu, 9 Jul 2009 09:33:59 +0100
> Hiren Joshi j...@moonfruit.com wrote:
>
> > > Do you think this problem can be solved by using 2 storage bricks
> > > on two different network cards on the client?
> >
> > I'd be surprised if the bottleneck here was the network. I'm testing
> > on a Xen network but I've only been given one eth per slice.
>
> Do you mean your clients and servers are virtual Xen installations
> (on the same physical box)?

They are on different boxes and using different disks (don't ask); this
seemed like a good way to evaluate, as I set up an NFS server using the
same equipment to get relative timings. The plan is to roll it out onto
new physical boxes in a month or two.
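
[Editor's note] A quick way to rule the network in or out on a setup like this
is to measure raw TCP throughput between the client and each brick before
looking at GlusterFS itself. A minimal sketch with iperf (hostnames follow the
volfiles in this thread; adjust as needed):

  # on each brick (glust3, glust4): start an iperf server
  iperf -s

  # on the client: measure throughput to each brick for 30 seconds
  iperf -c glust3 -t 30
  iperf -c glust4 -t 30

If both links report far more than the dd figures elsewhere in the thread, the
bottleneck is disk, sync behaviour, or the Xen layer rather than the wire.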


 



Re: [Gluster-users] GlusterFS Performance

2009-07-09 Thread Mickey Mazarick
Just a note: we have seen a pretty significant increase in speed with
this latest 2.0.3 release. Doing a test read over AFR we are seeing
speeds between 200 and 320 MB a second (over InfiniBand, ib-verbs).


This is with direct I/O disabled too. Oddly, putting performance
translators on the clients made no difference.

We queued up 10 servers to read a single file simultaneously and got
about 30-50 MB a second from each client, totaling roughly 400 MB a
second on a single file (read from 2 servers, by 10 clients).
We are really happy with these numbers for a virtual cluster; most
single drives won't read at those speeds.


Thanks!
-Mic
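
[Editor's note] For anyone who wants to reproduce a multi-client read test like
this, a rough sketch follows (file path and host names are made up; it must run
as root so the page cache can be dropped, otherwise re-reads come from RAM):

  # run from one box: read the same file from several clients at once over ssh
  for h in client01 client02 client03; do
    ssh root@$h 'sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/mnt/glust/bigfile of=/dev/null bs=1M' &
  done
  wait   # each dd prints its own throughput when it finishes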


Hiren Joshi wrote:
 

  

> They are on different boxes and using different disks (don't ask); this
> seemed like a good way to evaluate, as I set up an NFS server using the
> same equipment to get relative timings. The plan is to roll it out onto
> new physical boxes in a month or two.


  






Re: [Gluster-users] GlusterFS Performance

2009-07-08 Thread Liam Slusser
You have to remember that when you are writing with NFS you're writing to
one node, whereas your gluster setup below is copying the same data to two
nodes, so you're doubling the bandwidth. Don't expect NFS-like performance
on writes with multiple storage bricks. However, read performance should be
quite good.
liam
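
[Editor's note] Rough numbers for that doubling: writing 1 GB to a replica-2
volume puts about 2 GB on the client's wire, so a single gigabit NIC caps
writes near ~55 MB/s instead of ~110 MB/s, and a 100 Mbit link near ~5.5 MB/s
instead of ~11 MB/s. To compare the two mounts on equal footing it can also
help to flush data at the end of each run, so client-side caching doesn't
favour either side. A sketch based on the test quoted below:

  # same 1 GB write as below, but dd waits for the data to reach storage
  # before reporting (conv=fdatasync needs a reasonably recent GNU dd)
  dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test bs=65536 count=15625 conv=fdatasync
  dd if=/dev/zero of=/mnt/glust/glust_test    bs=65536 count=15625 conv=fdatasync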

On Wed, Jul 8, 2009 at 5:22 AM, Hiren Joshi j...@moonfruit.com wrote:

 Hi,

 I'm currently evaluating gluster with the intention of replacing our
 current setup and have a few questions:

 At the moment, we have a large SAN which is split into 10 partitions and
 served out via NFS. For gluster, I was thinking 12 nodes to make up
 about 6TB (mirrored so that's 1TB per node) and served out using
 gluster. What sort of filesystem should I be using for the nodes
 (currently on ext3) to give me the best performance and recoverability?

 Also, I setup a test with a simple mirrored pair with a client that
 looks like:
 volume glust3
  type protocol/client
  option transport-type tcp/client
  option remote-host glust3
  option remote-port 6996
  option remote-subvolume brick
 end-volume
 volume glust4
  type protocol/client
  option transport-type tcp/client
  option remote-host glust4
  option remote-port 6996
  option remote-subvolume brick
 end-volume
 volume mirror1
  type cluster/replicate
  subvolumes glust3 glust4
 end-volume
 volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes mirror1
 end-volume
 volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
 end-volume


 I ran a basic test by writing 1G to an NFS server and this gluster pair:
 [r...@glust1 ~]# time dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test
 bs=65536 count=15625
 15625+0 records in
 15625+0 records out
 1024000000 bytes (1.0 GB) copied, 1718.16 seconds, 596 kB/s

 real28m38.278s
 user0m0.010s
 sys 0m0.650s
 [r...@glust1 ~]# time dd if=/dev/zero of=/mnt/glust/glust_test bs=65536
 count=15625
 15625+0 records in
 15625+0 records out
 1024000000 bytes (1.0 GB) copied, 3572.31 seconds, 287 kB/s

 real59m32.745s
 user0m0.010s
 sys 0m0.010s


 With it taking almost twice as long, can I expect this sort of
 performance degradation on 'real' servers? Also, what sort of setup
 would you recommend for us?

 Can anyone help?
 Thanks,
 Josh.
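
[Editor's note] One write-side knob that is sometimes tried with a volfile like
the one quoted above is the write-behind translator's flush-behind option,
which lets flush/close return before queued writes have reached the bricks.
This is a sketch only; verify the option name and behaviour against the docs
for your 2.0.x release before relying on it:

  volume writebehind
    type performance/write-behind
    option window-size 1MB
    # return from flush/close before pending writes complete
    # (check that this option exists in your release)
    option flush-behind on
    subvolumes mirror1
  end-volume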


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users