Hi Fernando, Anand,

Thanks for the suggestion. I removed the direct-io-mode=enable option from /etc/fstab, remounted, and the performance is the same within statistical precision.
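For reference, the mount entry now reads roughly as follows (a sketch of my edited line; only direct-io-mode was dropped, everything else is unchanged), and I remounted with a plain umount/mount cycle:

localhost:/global /global glusterfs defaults,log-level=WARNING,log-file=/var/log/gluster.log 0 0

umount /global && mount /global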

cheers, Doug


On 10/19/2012 02:13 AM, Fernando Frediani (Qube) wrote:
Hi Doug,

Try the change suggested by Anand and let us know how you get on. I am 
interested to hear about performance on 3.3, since poor performance has been 
the subject of many emails here for a while.

Regards,

Fernando

-----Original Message-----
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Doug Schouten
Sent: 19 October 2012 02:45
To: gluster-users@gluster.org
Subject: [Gluster-users] performance in 3.3

Hi,

I am seeing rather slow read performance with GlusterFS 3.3 using the 
following configuration:

Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: server1:/srv/data
Brick2: server2:/srv/data
Brick3: server3:/srv/data
Brick4: server4:/srv/data
Options Reconfigured:
features.quota: off
features.quota-timeout: 1800
performance.flush-behind: on
performance.io-thread-count: 64
performance.quick-read: on
performance.stat-prefetch: on
performance.io-cache: on
performance.write-behind: on
performance.read-ahead: on
performance.write-behind-window-size: 4MB
performance.cache-refresh-timeout: 1
performance.cache-size: 4GB
nfs.rpc-auth-allow: none
network.frame-timeout: 60
nfs.disable: on
performance.cache-max-file-size: 1GB
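(For anyone wanting to reproduce this, the options above were applied with the standard gluster CLI; a sketch, assuming the volume is named "global" as the mount line below suggests:)

gluster volume set global performance.cache-size 4GB
gluster volume set global performance.write-behind-window-size 4MB
gluster volume info global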


The servers are connected with bonded 1Gb Ethernet and have LSI MegaRAID 
arrays with 12x1TB disks in a RAID-6 array, using an XFS file system mounted as follows:

xfs     logbufs=8,logbsize=32k,noatime,nodiratime  0    0
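(Device and mount point are omitted above; the full brick entry looks something like the following, with /dev/sdb standing in as a placeholder for the MegaRAID device:)

/dev/sdb  /srv/data  xfs  logbufs=8,logbsize=32k,noatime,nodiratime  0  0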

and we use the FUSE client:

localhost:/global /global glusterfs defaults,direct-io-mode=enable,log-level=WARNING,log-file=/var/log/gluster.log 0 0
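To confirm which options the client actually picked up, the live mount can be checked with, for example:

grep glusterfs /proc/mounts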

Our files are all >= 2MB. When rsync-ing we see about 50MB/s read performance 
on the first copy, which improves to 250MB/s on subsequent copies, so the disk 
caching appears to be working as expected. However, I am surprised by the low 
50MB/s cold-read speed: it is well below what even a single 1Gb link can carry 
(roughly 110MB/s), so it cannot be network-limited, and the native disk read 
performance is far better.
Is there some configuration change that can improve this situation?
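A rough way to reproduce the cold vs. warm numbers outside of rsync (run as root on the client; the file path is just an example) is:

# drop the client page cache so the first read is cold
sync; echo 3 > /proc/sys/vm/drop_caches
# sequential read of one of our >= 2MB files through the FUSE mount
dd if=/global/path/to/file of=/dev/null bs=1M
# repeat without dropping caches to see the warm-cache rate
dd if=/global/path/to/file of=/dev/null bs=1M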

thanks,



--


 Doug Schouten
 Research Associate
 TRIUMF
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
