Hi,

I have tested three kinds of distributed-replicate volumes: 4 x 2, 3 x 2, and
2 x 2. I expected the 4 x 2 volume to achieve the best IOPS; however, their
performance is similar.


I tested with "dd if=/dev/zero of=/mnt/glusterfs/block8 bs=128M count=1"
and "dd if=/dev/zero of=/mnt/glusterfs/block8 bs=32M count=8". All bricks are
on virtual machines with the same hardware: 2 CPU cores and 8 GB of memory.
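(One thing I am not sure about: in a distributed-replicate volume, DHT hashes
each whole file to a single replica pair, so a single-file dd only exercises
two bricks no matter how many pairs the volume has. A minimal sketch of a
multi-file test that should spread load across pairs -- using /tmp/ddtest here
as a stand-in for the real mount point, e.g. /mnt/glusterfs:)

```shell
# Sketch: write several files in parallel so DHT can distribute them
# across replica pairs (each file hashes to exactly one pair).
# TESTDIR is a placeholder; on a real test it would be the glusterfs mount.
TESTDIR=/tmp/ddtest
mkdir -p "$TESTDIR"
for i in 1 2 3 4; do
  # conv=fdatasync forces the data to be flushed before dd exits, so the
  # reported throughput is not just the client page cache
  dd if=/dev/zero of="$TESTDIR/block$i" bs=32M count=1 conv=fdatasync 2>/dev/null &
done
wait
ls -lh "$TESTDIR"
```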


The following is my volume configuration:
Volume Name: rep4
Type: Distributed-Replicate
Volume ID: b2ad2871-cfad-4f2c-afdb-38c2c4d6239c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.16.145:/home/vagrant/rep4
Brick2: 192.168.16.146:/home/vagrant/rep4
Brick3: 192.168.16.82:/home/vagrant/rep4
Brick4: 192.168.16.114:/home/vagrant/rep4

Volume Name: rep6
Type: Distributed-Replicate
Volume ID: 2cbcefce-da7a-4823-aee7-432c40f3ae55
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.16.49:/home/vagrant/rep6
Brick2: 192.168.16.114:/home/vagrant/rep6
Brick3: 192.168.16.141:/home/vagrant/rep6
Brick4: 192.168.16.145:/home/vagrant/rep6
Brick5: 192.168.16.146:/home/vagrant/rep6
Brick6: 192.168.16.82:/home/vagrant/rep6

Volume Name: rep8
Type: Distributed-Replicate
Volume ID: ea77934c-bd5d-4578-8b39-c02402d00739
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 192.168.16.145:/home/vagrant/rep8
Brick2: 192.168.16.146:/home/vagrant/rep8
Brick3: 192.168.16.114:/home/vagrant/rep8
Brick4: 192.168.16.82:/home/vagrant/rep8
Brick5: 192.168.16.141:/home/vagrant/rep8
Brick6: 192.168.16.49:/home/vagrant/rep8
Brick7: 192.168.16.144:/home/vagrant/rep8
Brick8: 192.168.16.112:/home/vagrant/rep8



According to
"http://moo.nac.uci.edu/~hjm/Performance_in_a_Gluster_Systemv6F.pdf":
> To scale out performance, enterprises need simply add additional storage 
> server nodes, and will generally see linear performance improvements.


I wonder how I can achieve linear performance improvements. Have I been
testing the wrong way?
------------------
Regards,
Haoyuan Ge
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users