On 04/07/2015 03:08 PM, Susant Palai wrote:
Here is one test performed on a 300GB data set; an improvement of around 100% (roughly half the run time) was seen.

[root@gprfs031 ~]# gluster v i

Volume Name: rbperf
Type: Distribute
Volume ID: 35562662-337e-4923-b862-d0bbb0748003
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gprfs029-10ge:/bricks/gprfs029/brick1
Brick2: gprfs030-10ge:/bricks/gprfs030/brick1
Brick3: gprfs031-10ge:/bricks/gprfs031/brick1
Brick4: gprfs032-10ge:/bricks/gprfs032/brick1


Added server 32 (gprfs032-10ge) to the volume and started a rebalance with the force option.
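(For reference, the add-brick and rebalance steps would look roughly like the following; the brick path is taken from the volume info above, and this is a sketch rather than the exact command history:)

[root@gprfs031 ~]# gluster volume add-brick rbperf gprfs032-10ge:/bricks/gprfs032/brick1
[root@gprfs031 ~]# gluster volume rebalance rbperf start force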

Rebalance stats with the new changes:
[root@gprfs031 ~]# gluster v rebalance rbperf status
         Node  Rebalanced-files      size    scanned  failures  skipped     status  run time in secs
-------------  ----------------  --------  ---------  --------  -------  ---------  ----------------
    localhost             74639    36.1GB     297319         0        0  completed           1743.00
 172.17.40.30             67512    33.5GB     269187         0        0  completed           1395.00
gprfs029-10ge             79095    38.8GB     284105         0        0  completed           1559.00
gprfs032-10ge                 0    0Bytes          0         0        0  completed            402.00
volume rebalance: rbperf: success:

Rebalance stats with the old model:
[root@gprfs031 ~]# gluster v rebalance rbperf status
         Node  Rebalanced-files      size    scanned  failures  skipped     status  run time in secs
-------------  ----------------  --------  ---------  --------  -------  ---------  ----------------
    localhost             86493    42.0GB     634302         0        0  completed           3329.00
gprfs029-10ge             94115    46.2GB     687852         0        0  completed           3328.00
gprfs030-10ge             74314    35.9GB     651943         0        0  completed           3072.00
gprfs032-10ge                 0    0Bytes     594166         0        0  completed           1943.00
volume rebalance: rbperf: success:
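
(Rough sanity check on the "half the time" figure, taking the longest per-node run time in each status output above as a proxy for the total wall-clock time:

    old model:    3329.00 s  (localhost)
    new changes:  1743.00 s  (localhost)
    speedup:      3329 / 1743 = ~1.91x, i.e. roughly a 91% improvement, close to half the run time.)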


This is interesting. Thanks for sharing & well done! Maybe we should attempt a much larger data set and see how we fare there :).

Regards,
Vijay


_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
