Any help is greatly appreciated:

server 1 - 1Gbps & identical hardware
server 2 - 1Gbps & identical hardware

both have identical client & server .vol files, with the performance
translators and afr enabled

the clients on each server mount the same glusterfs volume at the same path.
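For reference, the setup implies a client .vol along these lines (brick and host names here are my guesses, not the actual config):

```
# protocol/client volumes pointing at each server's brick
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

# replicate across both bricks
volume afr
  type cluster/afr
  subvolumes brick1 brick2
  option self-heal on
end-volume

# one of the "performance enhancement" translators
volume writebehind
  type performance/write-behind
  option aggregate-size 1MB
  subvolumes afr
end-volume
```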

now, scp'ing a file from the local file system on server1 to the mounted
gluster fs on server2 transfers at ~3MB/s
scp'ing a file from the local file system on server2 to the mounted gluster
fs on server1 transfers at ~23MB/s
ALSO, scp'ing to a non-glusterfs path yields ~23MB/s from server1 to
server2 and vice versa, so it is something in gluster that is bottlenecking
at 3MB/s
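To take scp/ssh entirely out of the picture, writing straight to the mount with dd gives a cleaner number for the gluster write path. A sketch; point TARGET at the gluster mount (e.g. /var/lib/data) — it defaults to a temp dir here only so the snippet runs anywhere:

```shell
# Write 100MB directly to the mounted file system and let dd report
# throughput, bypassing scp's encryption and protocol overhead.
TARGET=${TARGET:-$(mktemp -d)}   # substitute your glusterfs mount point
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=100 conv=fsync
```

If dd shows the same ~3MB/s asymmetry, the bottleneck is in the gluster stack rather than scp.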

If I change the afr section of the client files from "subvolumes brick1
brick2" to "subvolumes brick2 brick1", the transfer rates above switch - now
server 1 to server 2 is fast and server 2 to server 1 is slow.

volume afr
  type cluster/afr
  subvolumes brick1 brick2
  option self-heal on
end-volume
------
to
------
volume afr
  type cluster/afr
  subvolumes brick2 brick1
  option self-heal on
end-volume
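Side note: flipping the subvolume order only moves which direction is fast. If the asymmetry also shows up on reads, some afr releases expose a read-subvolume option so each client can prefer its local brick without reordering — a sketch (option support varies by release, so check your version's docs; writes still go to both bricks either way):

```
volume afr
  type cluster/afr
  subvolumes brick1 brick2
  option self-heal on
  # in server1's client file; use brick2 in server2's client file
  option read-subvolume brick1
end-volume
```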

/var/lib/data is the mounted file system from glusterfs-client.vol

[EMAIL PROTECTED]:/var/lib$ scp /tmp/2.mpg [EMAIL PROTECTED]:/var/lib/data/
2.mpg                                          14%   62MB   2.4MB/s   02:34 ETA
Now, the other way:

[EMAIL PROTECTED]:/tmp$ sudo scp 3.mpg [EMAIL PROTECTED]:/var/lib/data/
3.mpg                                         100%   71MB  23.6MB/s   00:03
_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel