Hello,
I tested the performance of InfiniBand RDMA versus TCP transports using GlusterFS 3.2.5. There are
10 servers and 2 clients, all of them connected with InfiniBand and with identical
hardware. I created two distribute (hash) volumes, each with 10 bricks. All bricks are 16TB
ext4 filesystems on RAID5 arrays on different servers. One volume's transport type is
rdma and the other's is tcp.
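The volumes were created roughly as follows (the server names and brick paths below are simplified placeholders, not the actual ones):

```shell
# Distribute volume over RDMA (hostnames and brick paths are placeholders)
gluster volume create vol-rdma transport rdma \
    server1:/export/brick server2:/export/brick server3:/export/brick \
    server4:/export/brick server5:/export/brick server6:/export/brick \
    server7:/export/brick server8:/export/brick server9:/export/brick \
    server10:/export/brick
gluster volume start vol-rdma

# Identical 10-brick layout over TCP (bricks elided for brevity)
gluster volume create vol-tcp transport tcp \
    server1:/export/brick-tcp ... server10:/export/brick-tcp
gluster volume start vol-tcp
```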


One client mounts the rdma volume and the other mounts the tcp volume. I used iozone to
test the performance:
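The mounts on each client look like this (server name and volume names are placeholders):

```shell
# On client-1: native glusterfs mount of the RDMA volume
mount -t glusterfs server1:/vol-rdma /mnt/rdma

# On client-2: native glusterfs mount of the TCP volume
mount -t glusterfs server1:/vol-tcp /mnt/tcp
```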
root@client-1:/mnt/rdma# iozone -i 0 -i 1 -s 10g -t 10 -R
root@client-2:/mnt/tcp# iozone -i 0 -i 1 -s 10g -t 10 -R


Performance of the rdma volume:
Read: 1.1GB/s
Write: 1.0GB/s
Performance of the tcp volume:
Read: 23.5MB/s
Write: 52.8MB/s
I have checked the network and the brick filesystems, and all of them are normal. I
wonder why the performance of TCP over InfiniBand is so poor.
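For reference, raw TCP bandwidth over the IPoIB link was checked with iperf along these lines (ib-server1 is a placeholder for a server's IPoIB address):

```shell
# On one of the servers: start an iperf server on the IPoIB interface
iperf -s

# On a client: run a 30-second TCP bandwidth test against it
iperf -c ib-server1 -t 30
```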
Thanks for any help.


Best Regards,
Luna



_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
