Thanks for the replies. The rebalance is running and the brick percentages are
now adjusting as expected:
# df -hP |grep data
/dev/mapper/gluster_vg-gluster_lv1_data   60T   49T   11T  83% /gluster_bricks/data1
/dev/mapper/gluster_vg-gluster_lv2_data   60T   49T   11T  83% /gluster_bricks/data2
/dev/mapper/gluster_vg-gluster_lv3_data   60T  4.6T   55T   8% /gluster_bricks/data3
/dev/mapper/gluster_vg-gluster_lv4_data   60T  4.6T   55T   8% /gluster_bricks/data4
/dev/mapper/gluster_vg-gluster_lv5_data   60T  4.6T   55T   8% /gluster_bricks/data5
/dev/mapper/gluster_vg-gluster_lv6_data   60T  4.6T   55T   8% /gluster_bricks/data6
At the current pace it looks like this will continue to run for another 5-6
days.
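As a rough sanity check on where this should end up, assuming the data spreads
evenly across all six bricks (figures taken from the df output above):
echo "scale=1; (49 + 49 + 4*4.6) / 6" | bc   # ~19.4T used per 60T brick, roughly 32%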
I appreciate the guidance.
HB
On Mon, Sep 2, 2019 at 9:08 PM Nithya Balachandran wrote:
>
>
> On Sat, 31 Aug 2019 at 22:59, Herb Burnswell wrote:
>
>> Thank you for the reply.
>>
>> I started a rebalance with force on serverA as suggested. Now I see
>> 'activity' on that node:
>>
>> # gluster vol rebalance tank status
>>      Node  Rebalanced-files     size   scanned  failures  skipped       status   run time in h:m:s
>> ---------  ----------------  -------  --------  --------  -------  -----------   ------------------
>> localhost              6143    6.1GB      9542         0        0  in progress                0:4:5
>>   serverB                 0   0Bytes         7         0        0  in progress                0:4:5
>> volume rebalance: tank: success
>>
>> But I am not seeing any activity on serverB. Is this expected? Does the
>> rebalance need to run on each node even though it says both nodes are 'in
>> progress'?
>>
>>
> It looks like this is a replicate volume. If that is the case then yes, this
> is expected: you are running an old version of Gluster for which this was the
> default behaviour.
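> As a quick check, something like the following will confirm the volume type
> and show whether a node is actually migrating files (the log path is the
> usual default location for a volume named tank):
>
> gluster volume info tank | grep -E 'Type|Number of Bricks'
> tail -f /var/log/glusterfs/tank-rebalance.log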
>
> Regards,
> Nithya
>
>> Thanks,
>>
>> HB
>>
>> On Sat, Aug 31, 2019 at 4:18 AM Strahil wrote:
>>
>>> The rebalance status shows 0 Bytes.
>>>
>>> Maybe you should try with 'gluster volume rebalance start force'?
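>>>
>>> With the volume name used in this thread, that would presumably look
>>> something like:
>>>
>>> gluster volume rebalance tank start force
>>> gluster volume rebalance tank status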
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> Source:
>>> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes
>>> On Aug 30, 2019 20:04, Herb Burnswell wrote:
>>>
>>> All,
>>>
>>> RHEL 7.5
>>> Gluster 3.8.15
>>> 2 Nodes: serverA & serverB
>>>
>>> I am not deeply knowledgeable about Gluster and its administration, but we
>>> have a 2 node cluster that's been running for about a year and a half. All
>>> has worked fine to date. Our main volume has consisted of two 60TB bricks on
>>> each of the cluster nodes. As we reached capacity on the volume, we needed to
>>> expand. So, we've added four new 60TB bricks to each of the cluster nodes.
>>> The bricks are now seen, and the total size of the volume is as expected:
>>>
>>> # gluster vol status tank
>>> Status of volume: tank
>>> Gluster process                           TCP Port  RDMA Port  Online  Pid
>>> ---------------------------------------------------------------------------
>>> Brick serverA:/gluster_bricks/data1       49162     0          Y       20318
>>> Brick serverB:/gluster_bricks/data1       49166     0          Y       3432
>>> Brick serverA:/gluster_bricks/data2       49163     0          Y       20323
>>> Brick serverB:/gluster_bricks/data2       49167     0          Y       3435
>>> Brick serverA:/gluster_bricks/data3       49164     0          Y       4625
>>> Brick serverA:/gluster_bricks/data4       49165     0          Y       4644
>>> Brick serverA:/gluster_bricks/data5       49166     0          Y       5088
>>> Brick serverA:/gluster_bricks/data6       49167     0          Y       5128
>>> Brick serverB:/gluster_bricks/data3       49168     0          Y       22314
>>> Brick serverB:/gluster_bricks/data4       49169     0          Y       22345
>>> Brick serverB:/gluster_bricks/data5       49170     0          Y       22889
>>> Brick serverB:/gluster_bricks/data6       49171     0          Y       22932
>>> Self-heal Daemon on localhost             N/A       N/A        Y       22981
>>> Self-heal Daemon on serverA.example.com   N/A       N/A        Y       6202
>>>
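>>> The add-brick commands themselves are not shown above; on a replica 2 volume
>>> the new bricks would typically have been added in serverA/serverB pairs,
>>> along these lines for the first new pair (and likewise for data4-data6):
>>>
>>> gluster volume add-brick tank serverA:/gluster_bricks/data3 serverB:/gluster_bricks/data3
>>>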
>>> After adding the bricks we ran a rebalance from serverA as:
>>>
>>> # gluster volume rebalance tank start
>>>
>>> The rebalance completed:
>>>
>>> # gluster volume rebalance tank status
>>>      Node  Rebalanced-files     size   scanned  failures  skipped       status   run time in h:m:s
>>> ---------  ----------------  -------  --------  --------  -------  -----------   ------------------
>>> localhost                 0   0Bytes        0