On Friday 09 September 2011 10:30 AM, Thomas Jackson wrote:
> Hi everyone,

Hello Thomas,

Try the following:

1. In the fuse volume file, try:

Under write-behind:
"option cache-size 16MB"

Under read-ahead:
"option page-count 16"

Under io-cache:
"option cache-size 64MB"

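For context, those options go inside the corresponding translator blocks of the generated client volfile. A rough sketch of the shape (the volume and subvolume names below are placeholders; they will differ in your generated file, and only the option lines need to change):

        volume cluster-volume-write-behind
            type performance/write-behind
            option cache-size 16MB
            subvolumes <your replicate/client subvolume>
        end-volume

        volume cluster-volume-read-ahead
            type performance/read-ahead
            option page-count 16
            subvolumes cluster-volume-write-behind
        end-volume

        volume cluster-volume-io-cache
            type performance/io-cache
            option cache-size 64MB
            subvolumes cluster-volume-read-ahead
        end-volume

Remember to remount the volume (restart the glusterfs client) after editing the volfile, or the changes won't take effect.
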
2. Did you get the 9 Gbit/s iperf result with a single stream or with multiple parallel streams?
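
For example, with iperf running as a server (iperf -s) on node02:

        iperf -c node02           # single stream
        iperf -c node02 -P 4      # four parallel streams

A single TCP stream often cannot fill a 10GbE link on its own, so the two numbers can differ considerably.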

3. Can you give me the output of:
sysctl -a | egrep 'rmem|wmem'
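
If those turn out to be at the distribution defaults, socket buffers are often raised for 10GbE along these lines; treat the values as an illustrative starting point, not a prescription:

        # /etc/sysctl.conf
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216

Apply with "sysctl -p".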

4. If it is not too much trouble, could you create a pure distribute setup (instead of distributed-replicate) and report the numbers?
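
That is, the same create command without "replica 2", for example (the volume name here is just a placeholder, and the bricks must be fresh directories, or the old volume must be deleted first):

        gluster volume create test-distribute transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/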

5. What is the inode size with which you formatted your XFS filesystem?
This last point might not be related to your throughput problem, but if you plan to use this setup for a large number of files, you may be better off with a 512-byte inode size instead of the default 256 bytes. To do that, your mkfs command should be:

mkfs -t xfs -i size=512 /dev/<disk device>
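
You can check what an existing brick was formatted with; the isize field in the first line of output shows the inode size:

        xfs_info /mnt/local-store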

Pavan


> I am seeing slower-than-expected performance in Gluster 3.2.3 between 4
> hosts with 10 gigabit Ethernet between them all. Each host has 4x 300GB
> SAS 15K drives in RAID10, a 6-core Xeon E5645 @ 2.40GHz and 24GB RAM,
> running Ubuntu 10.04 64-bit (I have also tested with Scientific Linux 6.1
> and Debian Squeeze - same results on those as well). All of the hosts
> mount the volume using the FUSE module. The base filesystem on all of the
> nodes is XFS, though tests with ext4 have yielded similar results.
>
> Command used to create the volume:
>         gluster volume create cluster-volume replica 2 transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/
>
> Command used to mount the Gluster volume on each node:
>         mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume
>
> Writing a 40GB file to a node's local storage (i.e. no Gluster involvement):
>         dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40000
>         41943040000 bytes (42 GB) copied, 92.9264 s, 451 MB/s
>
> Reading the same file back from the node's local storage:
>         dd if=/mnt/local-store/test.file of=/dev/null
>         41943040000 bytes (42 GB) copied, 81.858 s, 512 MB/s
>
> Writing a 40GB file to the Gluster storage:
>         dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40000
>         41943040000 bytes (42 GB) copied, 226.934 s, 185 MB/s
>
> Reading the same file back from the Gluster storage:
>         dd if=/mnt/cluster-volume/test.file of=/dev/null
>         41943040000 bytes (42 GB) copied, 661.561 s, 63.4 MB/s
>
> I have also tried Gluster 3.1, with similar results.
>
> According to the Gluster docs, I should be seeing roughly the lesser of
> the drive speed and the network speed. The network can push 0.9GB/sec
> according to iperf, so that definitely isn't the limiting factor here,
> and each array can do 400-500MB/sec per the benchmarks above. I've tried
> with and without jumbo frames as well, which makes no major difference.
>
> The glusterfs process is using 120% CPU according to top, and glusterfsd
> is sitting at about 90%.
>
> Any ideas or tips on where to start speeding this config up?
>
> Thanks,
>
> Thomas

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
