Re: [Gluster-users] Rebalancing newly added bricks

2019-09-18 Thread Herb Burnswell
> Hi,
>
> Rebalance will abort itself if it cannot reach any of the nodes. Are all
> the bricks still up and reachable?
>
> Regards,
> Nithya

Yes the bricks appear to be fine. I restarted the rebalance and the process is moving along again:

# gluster vol rebalance tank status
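(For reference, a minimal sketch of the rebalance workflow being discussed, using the standard gluster CLI; the volume name "tank" is taken from the thread, and the exact status output varies by release:

# gluster volume rebalance tank start
# gluster volume rebalance tank status
# gluster volume rebalance tank stop

The status command reports scanned/rebalanced file counts per node and whether the operation completed, failed, or is still in progress.)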

Re: [Gluster-users] split-brain errors under heavy load when one brick down

2019-09-18 Thread Erik Jacobson
Thank you for replying!

> Okay so 0-cm_shared-replicate-1 means these 3 bricks:
>
> Brick4: 172.23.0.6:/data/brick_cm_shared
> Brick5: 172.23.0.7:/data/brick_cm_shared
> Brick6: 172.23.0.8:/data/brick_cm_shared

The above is correct.

> Were there any pending self-heals for this volume? Is it
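(As a hedged aside on the pending self-heal question: outstanding heals on a replica volume can be listed with the heal info command. The volume name cm_shared is an assumption inferred from the translator name 0-cm_shared-replicate-1; the summary form is only available on newer releases:

# gluster volume heal cm_shared info
# gluster volume heal cm_shared info summary

Entries listed here while one brick is down are expected; persistent entries after the brick returns point to a real self-heal backlog.)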

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-18 Thread Nithya Balachandran
On Sat, 14 Sep 2019 at 01:25, Herb Burnswell wrote:

> Hi,
>
> Well our rebalance seems to have failed. Here is the output:

Hi,

Rebalance will abort itself if it cannot reach any of the nodes. Are all the bricks still up and reachable?

Regards,
Nithya

>
> # gluster vol rebalance tank
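(A quick sketch of how brick and node reachability is usually checked before retrying a rebalance; the volume name "tank" comes from the thread:

# gluster volume status tank
# gluster peer status

"gluster volume status" should show every brick online with a PID and port, and "gluster peer status" should show all peers in the "Peer in Cluster (Connected)" state; anything else would explain the aborted rebalance.)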