Re: [Gluster-users] Rebalancing newly added bricks

2019-09-04 Thread Herb Burnswell
Thanks for the replies.  The rebalance is running and the brick percentages
are now adjusting as expected:

# df -hP |grep data
/dev/mapper/gluster_vg-gluster_lv1_data   60T   49T   11T  83%
/gluster_bricks/data1
/dev/mapper/gluster_vg-gluster_lv2_data   60T   49T   11T  83%
/gluster_bricks/data2
/dev/mapper/gluster_vg-gluster_lv3_data   60T  4.6T   55T   8%
/gluster_bricks/data3
/dev/mapper/gluster_vg-gluster_lv4_data   60T  4.6T   55T   8%
/gluster_bricks/data4
/dev/mapper/gluster_vg-gluster_lv5_data   60T  4.6T   55T   8%
/gluster_bricks/data5
/dev/mapper/gluster_vg-gluster_lv6_data   60T  4.6T   55T   8%
/gluster_bricks/data6

At the current pace it looks like this will continue to run for another 5-6
days.
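
For reference, I am keeping an eye on it with something along these lines
(volume name 'tank' as above; adjust the grep to your brick paths):

# gluster volume rebalance tank status
# watch -n 60 'df -hP | grep gluster_bricks'

The first shows the per-node file counts, the second shows the brick
utilisation slowly converging.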

I appreciate the guidance..

HB

On Mon, Sep 2, 2019 at 9:08 PM Nithya Balachandran 
wrote:

>
>
> On Sat, 31 Aug 2019 at 22:59, Herb Burnswell 
> wrote:
>
>> Thank you for the reply.
>>
>> I started a rebalance with force on serverA as suggested.  Now I see
>> 'activity' on that node:
>>
>> # gluster vol rebalance tank status
>>        Node   Rebalanced-files     size   scanned   failures   skipped        status   run time in h:m:s
>>   ---------   ----------------   ------   -------   --------   -------   -----------   -----------------
>>   localhost               6143    6.1GB      9542          0         0   in progress               0:4:5
>>     serverB                  0   0Bytes         7          0         0   in progress               0:4:5
>> volume rebalance: tank: success
>>
>> But I am not seeing any activity on serverB.  Is this expected?  Does the
>> rebalance need to run on each node even though it says both nodes are 'in
>> progress'?
>>
>>
> It looks like this is a replicate volume. If that is the case then yes,
> this is expected: you are running an old version of Gluster for which this
> was the default behaviour.
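>
> To double-check both, something along these lines should do (volume name
> taken from your output):
>
> # gluster --version
> # gluster volume info tank | grep -E 'Type|Number of Bricks'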
>
> Regards,
> Nithya
>
> Thanks,
>>
>> HB
>>
>> On Sat, Aug 31, 2019 at 4:18 AM Strahil  wrote:
>>
>>> The rebalance status shows 0 Bytes.
>>>
>>> Maybe you should try with the 'gluster volume rebalance  start
>>> force' ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> Source:
>>> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes
>>> On Aug 30, 2019 20:04, Herb Burnswell 
>>> wrote:
>>>
>>> All,
>>>
>>> RHEL 7.5
>>> Gluster 3.8.15
>>> 2 Nodes: serverA & serverB
>>>
>>> I am not deeply knowledgeable about Gluster and its administration, but
>>> we have a 2-node cluster that's been running for about a year and a half.
>>> All has worked fine to date.  Our main volume has consisted of two 60TB
>>> bricks on each of the cluster nodes.  As we reached capacity on the volume
>>> we needed to expand.  So, we've added four new 60TB bricks to each of the
>>> cluster nodes.  The bricks are now seen, and the total size of the volume
>>> is as expected:
>>>
>>> # gluster vol status tank
>>> Status of volume: tank
>>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick serverA:/gluster_bricks/data1        49162     0          Y       20318
>>> Brick serverB:/gluster_bricks/data1        49166     0          Y       3432
>>> Brick serverA:/gluster_bricks/data2        49163     0          Y       20323
>>> Brick serverB:/gluster_bricks/data2        49167     0          Y       3435
>>> Brick serverA:/gluster_bricks/data3        49164     0          Y       4625
>>> Brick serverA:/gluster_bricks/data4        49165     0          Y       4644
>>> Brick serverA:/gluster_bricks/data5        49166     0          Y       5088
>>> Brick serverA:/gluster_bricks/data6        49167     0          Y       5128
>>> Brick serverB:/gluster_bricks/data3        49168     0          Y       22314
>>> Brick serverB:/gluster_bricks/data4        49169     0          Y       22345
>>> Brick serverB:/gluster_bricks/data5        49170     0          Y       22889
>>> Brick serverB:/gluster_bricks/data6        49171     0          Y       22932
>>> Self-heal Daemon on localhost              N/A       N/A        Y       22981
>>> Self-heal Daemon on serverA.example.com    N/A       N/A        Y       6202
>>>
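>>> For completeness, the bricks were added in replica pairs with a command
>>> along these lines (paths as in the status output above; shown from memory,
>>> so treat it as illustrative rather than the exact command we ran):
>>>
>>> # gluster volume add-brick tank serverA:/gluster_bricks/data3 \
>>>     serverB:/gluster_bricks/data3
>>>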
>>> After adding the bricks we ran a rebalance from serverA as:
>>>
>>> # gluster volume rebalance tank start
>>>
>>> The rebalance completed:
>>>
>>> # gluster volume rebalance tank status
>>>        Node   Rebalanced-files     size   scanned   failures   skipped   status   run time in h:m:s
>>>   ---------   ----------------   ------   -------   --------   -------   ------   ------------------
>>>   localhost                  0   0Bytes         0

Re: [Gluster-users] Geo-Replication what is transferred

2019-09-04 Thread Strahil
As far as I know, when sharding is enabled each shard will be synced
separately, while the whole file will be transferred when sharding is not
enabled.

Is striping still supported? I think sharding should be used instead.
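
If you go with sharding, it is a per-volume option; a rough example (the
volume name is a placeholder, and as far as I know only files created after
enabling it are sharded):

# gluster volume set <volname> features.shard on
# gluster volume set <volname> features.shard-block-size 64MB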

Best Regards,
Strahil Nikolov
On Sep 3, 2019 23:47, Petric Frank  wrote:
>
> Hello, 
>
> given a geo-replicated file of 20 GBytes in size. 
>
> If one byte in this file is changed, what will be transferred ? 
> - the changed byte 
> - the block/sector containing the changed byte 
> - the complete file 
>
> Is the storage mode relevant - sharded/striped/... ? 
>
> regards 
>   Petric 
>
>
>

Re: [Gluster-users] Tiering dropped ?

2019-09-04 Thread Amar Tumballi
On Wed, Sep 4, 2019 at 1:18 AM Carl Sirotic  wrote:

> So,
>
> I am running 4.1.x and I started to use tiering.
>
> I ran into a load of problems where my email server would get a kernel
> panic, starting 12 hours after the change.
>
> I am in the process of detaching the tier.
>
> I saw that in version 6, the tier feature was completely removed.
>
> I am under the impression there were some bugs.
>
>
That is -almost- correct. It is also true that there were issues in making
it perform better, and in finding someone to take care of it full time.


>
> Is LVM cache supposed to be a viable solution for NVMe/SSD caching on a
> sharded volume?
>
>
What we saw is that dm-cache (i.e., LVM cache) actually performed better.
I recommend trying it to check the performance in your workload.
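
If you want to experiment with it, a rough sketch of attaching an SSD/NVMe
cache to a brick LV would be something like the following (device path,
sizes and VG/LV names are placeholders, so adjust to your layout and test
on a non-production volume first):

# lvcreate --type cache-pool -L 200G -n brick_cache gluster_vg /dev/nvme0n1
# lvconvert --type cache --cachemode writethrough \
    --cachepool gluster_vg/brick_cache gluster_vg/gluster_lv1_data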

-Amar


>
> Carl
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users