Thanks for bringing in the relevant experts on this one, Nithya.
Since my last mail to the list, I've tried:
- Deleting some old disk image files from the volume, in hopes that the
  problem chunks might belong to them
- Running a "gluster volume heal full"
- Stopping the remove-brick operation
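(For the archives, the attempts above spelled out as full commands. The volume name "myvol" and the abbreviated brick list are taken from the remove-brick invocation quoted further down this thread; these need a live cluster, so the sketch only assembles and prints the command strings.)

```shell
# Sketch only: "myvol" and the brick list come from the command quoted
# later in the thread. A live gluster cluster is needed to actually run
# these, so we just print them.
vol=myvol
cmds="gluster volume heal $vol full
gluster volume remove-brick $vol B:/data C:/data stop"
printf '%s\n' "$cmds"
```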
Hi Dave,
Yes, files in split brain are not migrated, as we cannot figure out which is
the good copy. Adding Ravi to look at this and see what can be done.
Also adding Krutika, as this is a sharded volume.
The files with the "-T" permissions are internal files and can be
ignored.
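(Aside, for anyone searching the archives later: as far as I understand, those "-T" entries are zero-byte internal link files with only the sticky bit set, i.e. mode 1000, shown as "---------T" by ls. A sketch of listing them on a brick; the brick path below is a made-up stand-in, not a path from this cluster.)

```shell
# Demo of spotting mode-1000 ("---------T") zero-byte entries.
# The brick path is a throwaway temp directory, not a real brick.
brick=$(mktemp -d)
: > "$brick/shard.42"          # stand-in for an internal link file
chmod 1000 "$brick/shard.42"   # sticky bit only, no rwx
found=$(find "$brick" -type f -perm 1000 -size 0)
echo "$found"                  # entries like these can be ignored
```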
On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> There are some edge cases that may prevent a file from being migrated
> during a remove-brick. Please do the following after this:
>
> 1. Check the remove-brick status for any failures. If there are any,
>    check the ...
OK, I'm just careless. Forgot to include "start" after the list of
bricks...
On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> > My objective is to remove nodes B and C entirely.
> >
> > First up is to pull their bricks from the volume:
> >
> > # gluster volume remove-brick myvol B:/data C:/data
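(For anyone finding this thread later: the missing keyword was "start", and the sequence on a live cluster is start, then poll status, then commit once migration completes. Checking the status output for failures, as Nithya suggested, can be scripted; the status table below is an invented illustration of the format, not real output from this cluster.)

```shell
# The real sequence on a live cluster would be:
#   gluster volume remove-brick myvol B:/data C:/data start
#   gluster volume remove-brick myvol B:/data C:/data status
#   gluster volume remove-brick myvol B:/data C:/data commit
# Below, an invented sample of the status table, used only to show
# scraping the "failures" column for non-zero entries.
status='Node  Rebalanced-files  size   scanned  failures  skipped  status
B     120               1.2GB  400      0         3        completed
C     98                0.9GB  350      2         1        completed'
failures=$(echo "$status" | awk 'NR > 1 && $5 > 0 {print $1 ": " $5 " failures"}')
echo "$failures"
```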
I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
one of the three subvolumes. I think I've worked out how to do it, but
want to verify first that I've got it right, since downtime or data loss
would be Bad Things.
The current configuration has six data bricks across six ...
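(To make the layout concrete for anyone following along: in a distributed-replicate volume, consecutive bricks in the "gluster volume info" brick list form the replica sets, so with replica 2 plus arbiter each group of three bricks is one subvolume. The host names below are invented placeholders, not the actual servers in this cluster.)

```shell
# Nine bricks grouped three at a time (data, data, arbiter) into the
# three replica subvolumes. All host names here are invented.
bricks="hostA:/data hostB:/data arb1:/arb \
hostC:/data hostD:/data arb2:/arb \
hostE:/data hostF:/data arb3:/arb"
subvols=$(echo $bricks | xargs -n 3)
echo "$subvols"
```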