Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-11 Thread Lindsay Mathieson

On 9/06/2017 5:54 PM, Lindsay Mathieson wrote:
> I've started the process as above, seems to be going ok - cluster is
> going to be unusable for the next couple of days.


Just as an update - I was mistaken about this; the cluster was actually quite
usable while this was going on, except on the new server. Total sync time
for the 3.2TB was around 30 hours.



Network: 3 x 1GbE bonded with balance-alb

Bricks: all ZFS RAID 10 with a fast SSD slog (way faster than the network).
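
For anyone wondering what that layout looks like, a pool of that shape would
be created roughly like this (a sketch only - device names are hypothetical,
not the actual hardware here):

# Striped mirrors ("RAID 10") with a separate fast SSD as the log device (slog)
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd log /dev/sde1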

--
Lindsay Mathieson



Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Pranith Kumar Karampuri
On Sat, Jun 10, 2017 at 2:53 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote:

> On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
>
>>>> gluster volume remove-brick datastore4 replica 2
>>>> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>>>>
>>>> gluster volume add-brick datastore4 replica 3
>>>> vnd.proxmox.softlog:/tank/vmdata/datastore4
>>>
>>> I think that should work perfectly fine yes, either that
>>> or directly use replace-brick ?
>>
>> Yes, this should be replace-brick
>
> Was there any problem in doing the way I did?
>
You won't notice the problem, but it is inefficient.
AFR keeps track of which brick is good/bad based on the xattrs on the
bricks. If you have 3 nodes A, B, C then on each brick there can be 2-3
extra AFR xattrs of the form trusted.afr.<volname>-client-[0/1/2].
When you replace C with D, the same xattrs stay. But when you remove C
and add D, it is treated as a brand new brick and one more xattr,
trusted.afr.<volname>-client-3, gets added. So it is better to use
replace-brick for this case.
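
For example, with the names from this thread (a sketch, to be adapted to the
actual layout), the replace-brick route would look something like:

# Swap the old brick for the new one in a single step; recent GlusterFS
# releases only support the "commit force" form of replace-brick.
gluster volume replace-brick datastore4 \
    vna.proxmox.softlog:/tank/vmdata/datastore4 \
    vnd.proxmox.softlog:/tank/vmdata/datastore4 \
    commit force

# Optionally kick off a full self-heal so the new brick gets populated.
gluster volume heal datastore4 full

# To see the AFR changelog xattrs discussed above, run this as root on a
# brick host (requires the attr package):
getfattr -d -m trusted.afr -e hex /tank/vmdata/datastore4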

Add-brick/remove-brick is for users who have decided that they either want to
increase the replica count of an existing volume or decrease it.


HTH

> --
> Lindsay Mathieson
>
>


-- 
Pranith

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Lindsay Mathieson

On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:

>>> gluster volume remove-brick datastore4 replica 2
>>> vna.proxmox.softlog:/tank/vmdata/datastore4 force
>>>
>>> gluster volume add-brick datastore4 replica 3
>>> vnd.proxmox.softlog:/tank/vmdata/datastore4
>>
>> I think that should work perfectly fine yes, either that
>> or directly use replace-brick ?
>
> Yes, this should be replace-brick


Was there any problem in doing the way I did?

--
Lindsay Mathieson


Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Pranith Kumar Karampuri
On Fri, Jun 9, 2017 at 12:41 PM,  wrote:

> > I'm thinking the following:
> >
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think that should work perfectly fine yes, either that
> or directly use replace-brick ?
>

Yes, this should be replace-brick





-- 
Pranith

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> Must admit this sort of process - replacing bricks and/or node is *very*
> stressful with gluster. That sick feeling in the stomach - will I have to
> restore everything from backups?
> 
> Shouldn't be this way.

I know exactly what you mean.
Last weekend I replaced a server (it was working fine, though). I did that
in the middle of the night, very stressed. It ended up going perfectly fine
and no one saw anything, but I had that exact sick feeling in the stomach.

Well, I haven't had any problems lately, so I guess it'll go away after a while;
I just have to get used to seeing it working fine, I suppose. That's kind of why
I haven't dared to upgrade from 3.7 yet, I think.



Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Lindsay Mathieson
On 9 June 2017 at 17:12,  wrote:

> Heh, on that, did you think to take a look at the Media_Wearout indicator ?
> I recently learned that existed, and it explained A LOT.
>

Yah, that has been useful in the past for journal/cache SSDs that get a
lot of writes. However, all the stats on this boot SSD were OK; it just gave
up the ghost. Internal controller failure, maybe.

I've started the process as above, seems to be going ok - cluster is going
to be unusable for the next couple of days.

Must admit this sort of process - replacing bricks and/or node is *very*
stressful with gluster. That sick feeling in the stomach - will I have to
restore everything from backups?

Shouldn't be this way.


-- 
Lindsay

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> And a big thanks (*not*) to the smart reporting which showed no issues at
> all.

Heh, on that, did you think to take a look at the Media_Wearout indicator ?
I recently learned that existed, and it explained A LOT.
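
In case it helps anyone, something like this shows it (device name is only an
example; on Intel SSDs the attribute is usually ID 233, Media_Wearout_Indicator,
which starts at 100 and counts down):

# Needs smartmontools, run as root
smartctl -A /dev/sda | grep -i wear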



Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> I'm thinking the following:
> 
> gluster volume remove-brick datastore4 replica 2
> vna.proxmox.softlog:/tank/vmdata/datastore4 force
> 
> gluster volume add-brick datastore4 replica 3
> vnd.proxmox.softlog:/tank/vmdata/datastore4

I think that should work perfectly fine yes, either that
or directly use replace-brick ?



Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-08 Thread Lindsay Mathieson
On 9 June 2017 at 10:51, Lindsay Mathieson wrote:

> Or I should say we *had* a 3 node cluster, one node died today.


Boot SSD failed, definitely a reinstall from scratch.

And a big thanks (*not*) to the smart reporting which showed no issues at
all.


-- 
Lindsay