Re: [Gluster-users] Rebalancing newly added bricks

2019-08-31 Thread Herb Burnswell
Thank you for the reply.

I started a rebalance with force on serverA as suggested.  Now I see
'activity' on that node:

# gluster vol rebalance tank status
Node        Rebalanced-files      size   scanned   failures   skipped        status   run time in h:m:s
---------   ----------------   -------   -------   --------   -------   -----------   ------------------
localhost               6143     6.1GB      9542          0         0   in progress                0:4:5
serverB                    0    0Bytes         7          0         0   in progress                0:4:5
volume rebalance: tank: success

But I am not seeing any activity on serverB.  Is this expected?  Does the
rebalance need to run on each node even though it says both nodes are 'in
progress'?

Thanks,

HB

On Sat, Aug 31, 2019 at 4:18 AM Strahil wrote:

> The rebalance status show 0 Bytes.
>
> Maybe you should try with 'gluster volume rebalance <volname> start
> force'?
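>
> For the volume in this thread, that would be, for example:
>
> # gluster volume rebalance tank start force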
>
> Best Regards,
> Strahil Nikolov
>
> Source:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes
> On Aug 30, 2019 20:04, Herb Burnswell wrote:
>
> All,
>
> RHEL 7.5
> Gluster 3.8.15
> 2 Nodes: serverA & serverB
>
> I am not deeply knowledgeable about Gluster and its administration, but we
> have a 2-node cluster that's been running for about a year and a half.  All
> has worked fine to date.  Our main volume has consisted of two 60TB bricks
> on each of the cluster nodes.  As we reached capacity on the volume we
> needed to expand, so we added four new 60TB bricks to each of the cluster
> nodes (a sketch of the add-brick step follows the status output below).
> The bricks are now seen, and the total size of the volume is as expected:
>
> # gluster vol status tank
> Status of volume: tank
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick serverA:/gluster_bricks/data1         49162     0          Y       20318
> Brick serverB:/gluster_bricks/data1         49166     0          Y       3432
> Brick serverA:/gluster_bricks/data2         49163     0          Y       20323
> Brick serverB:/gluster_bricks/data2         49167     0          Y       3435
> Brick serverA:/gluster_bricks/data3         49164     0          Y       4625
> Brick serverA:/gluster_bricks/data4         49165     0          Y       4644
> Brick serverA:/gluster_bricks/data5         49166     0          Y       5088
> Brick serverA:/gluster_bricks/data6         49167     0          Y       5128
> Brick serverB:/gluster_bricks/data3         49168     0          Y       22314
> Brick serverB:/gluster_bricks/data4         49169     0          Y       22345
> Brick serverB:/gluster_bricks/data5         49170     0          Y       22889
> Brick serverB:/gluster_bricks/data6         49171     0          Y       22932
> Self-heal Daemon on localhost               N/A       N/A        Y       22981
> Self-heal Daemon on serverA.example.com     N/A       N/A        Y       6202
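>
> For reference, the add-brick command itself isn't quoted in this thread;
> assuming the brick paths shown above, it would have looked something like
> the following (the volume type and replica ordering aren't shown here, so
> treat this as a sketch only):
>
> # gluster volume add-brick tank \
>     serverA:/gluster_bricks/data3 serverB:/gluster_bricks/data3 \
>     serverA:/gluster_bricks/data4 serverB:/gluster_bricks/data4 \
>     serverA:/gluster_bricks/data5 serverB:/gluster_bricks/data5 \
>     serverA:/gluster_bricks/data6 serverB:/gluster_bricks/data6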
>
> After adding the bricks we ran a rebalance from serverA as:
>
> # gluster volume rebalance tank start
>
> The rebalance completed:
>
> # gluster volume rebalance tank status
> Node                  Rebalanced-files      size   scanned   failures   skipped      status   run time in h:m:s
> -------------------   ----------------   -------   -------   --------   -------   ---------   ------------------
> localhost                            0    0Bytes         0          0         0   completed               3:7:10
> serverA.example.com                  0    0Bytes         0          0         0   completed                0:0:0
> volume rebalance: tank: success
>
> However, when I run a df, the two original bricks still show all of the
> consumed space (this is the same on both nodes):
>
> # df -hP
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/mapper/vg0-root 5.0G  625M  4.4G  13% /
> devtmpfs  32G 0   32G   0% /dev
> tmpfs 32G 0   32G   0% /dev/shm
> tmpfs 32G   67M   32G   1% /run
> tmpfs 32G 0   32G   0% /sys/fs/cgroup
> /dev/mapper/vg0-usr   20G  3.6G   17G  18% /usr
> /dev/md126  1014M  228M  787M  23% /boot
> /dev/mapper/vg0-home 5.0G   37M  5.0G   1% /home
> /dev/mapper/vg0-opt  5.0G   37M  5.0G   1% /opt
> /dev/mapper/vg0-tmp  5.0G   33M  5.0G   1% /tmp
> /dev/mapper/vg0-var   20G  2.6G   18G  13% /var
> /dev/mapper/gluster_vg-gluster_lv1_data   60T   59T  1.1T  

[Gluster-users] Mount Gluster for untrusted users

2019-08-31 Thread Pankaj Kumar
Hi

Is it possible to prevent a user with root access on the network from
mounting GlusterFS with any UID other than his/her own? Is there a way to
authenticate mount requests?

Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users