[Gluster-users] When can I start using a peer that was added to a large volume?

2017-08-02 Thread Tom Cannaerts - INTRACTO
I added a peer to a 50GB replica volume, and the initial replication seems to
be going rather slowly. The volume is only about 50GB, but it has a lot of
small files, and a lot of files in the same folder.

What would happen if I try to access a file on the new peer? Will it just
fail? Will gluster fetch it seamlessly from the replication partner? Or will
the file just not be there?
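
For context, this is roughly how I'm keeping an eye on the initial sync; the
volume name "myvol" is just a placeholder here:

  # list entries still pending self-heal on each brick (empty output = in sync)
  gluster volume heal myvol info

  # overall brick and self-heal daemon status
  gluster volume status myvol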

Thanks,

-- 
Kind regards,
Tom Cannaerts


*Service and Maintenance, Intracto - digital agency*

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Remove and re-add bricks/peers

2017-07-18 Thread Tom Cannaerts - INTRACTO
We'll definitely look into upgrading this, but it's an older, legacy system,
so we need to see what we can do without breaking it.

Returning to the re-adding question, what steps do I need to take to clear
the config of the failed peers? Do I just wipe the data directory of the
volume, or do I need to clear some other config files/folders as well?
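
To make the question concrete, this is roughly what I had in mind; the
/var/lib/glusterd location is the default and the brick path /data/brick is
just a placeholder, so please correct me if this is too much or too little:

  # on each failed node, with the gluster daemon stopped
  service glusterd stop

  # wipe the glusterd state (peer list, volume definitions, node UUID)
  rm -rf /var/lib/glusterd/*

  # wipe the old brick contents, including gluster's internal metadata,
  # and clear the volume-id xattrs so the path can be reused as a fresh brick
  rm -rf /data/brick/.glusterfs /data/brick/*
  setfattr -x trusted.glusterfs.volume-id /data/brick
  setfattr -x trusted.gfid /data/brick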

Tom


On Mon, Jul 17, 2017 at 16:39, Atin Mukherjee wrote:

> That's the way. However, I'd like to highlight that you're running a very
> old Gluster release. The current release is 3.11, which is an STM release,
> and the long-term support release is 3.10. You should consider upgrading
> to at least 3.10.
>
> On Mon, Jul 17, 2017 at 3:25 PM, Tom Cannaerts - INTRACTO <
> tom.cannae...@intracto.com> wrote:
>
>> We had some issues with a volume. The volume is a 3-replica volume with 3
>> gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes
>> is operational. If we restart the gluster service on one of the other
>> nodes, the entire volume becomes unresponsive.
>>
>> After a lot of trial and error, we have come to the conclusion that we do
>> not want to try to rejoin the other 2 nodes in their current form. We would
>> like to completely remove them from the config of the running node,
>> entirely reset the config on the nodes themselves, and then re-add them as
>> if they were new nodes, having them completely sync the volume from the
>> working node.
>>
>> What would be the correct procedure for this? I assume I can use "gluster
>> volume remove-brick" to force-remove the failed bricks from the volume and
>> decrease the replica count, and then use "gluster peer detach" to
>> force-remove the peers from the config, all on the currently still working
>> node. But what do I need to do to completely clear the config and data of
>> the failed peers? The gluster processes are currently not running on these
>> nodes, but config + data are still present. So basically, I need to be able
>> to clean them out before restarting them, so that they start in a clean
>> state and do not try to connect to or interfere with the currently still
>> working node.
>>
>> Thanks,
>>
>> Tom
>>
>>
>> --
>> Kind regards,
>> Tom Cannaerts
>>
>>
>> *Service and Maintenance, Intracto - digital agency*
>>
>> Zavelheide 15 - 2200 Herentals
>> Tel: +32 14 28 29 29
>> www.intracto.com
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
--
Kind regards,
Tom Cannaerts


*Service and Maintenance, Intracto - digital agency*

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Remove and re-add bricks/peers

2017-07-17 Thread Tom Cannaerts - INTRACTO
We had some issues with a volume. The volume is a 3-replica volume with 3
gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes
is operational. If we restart the gluster service on one of the other nodes,
the entire volume becomes unresponsive.

After a lot of trial and error, we have come to the conclusion that we do
not want to try to rejoin the other 2 nodes in their current form. We would
like to completely remove them from the config of the running node, entirely
reset the config on the nodes themselves, and then re-add them as if they
were new nodes, having them completely sync the volume from the working node.

What would be the correct procedure for this? I assume I can use "gluster
volume remove-brick" to force-remove the failed bricks from the volume and
decrease the replica count, and then use "gluster peer detach" to
force-remove the peers from the config, all on the currently still working
node. But what do I need to do to completely clear the config and data of
the failed peers? The gluster processes are currently not running on these
nodes, but config + data are still present. So basically, I need to be able
to clean them out before restarting them, so that they start in a clean
state and do not try to connect to or interfere with the currently still
working node.
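
In terms of commands, I was thinking of something along these lines on the
working node; the volume name "myvol" and the node/brick names are
placeholders:

  # drop the failed bricks from the volume and shrink to a single replica
  gluster volume remove-brick myvol replica 1 \
      node2:/data/brick node3:/data/brick force

  # remove the failed peers from the trusted pool
  gluster peer detach node2 force
  gluster peer detach node3 force

  # later, once the failed nodes have been wiped and restarted:
  gluster peer probe node2
  gluster volume add-brick myvol replica 2 node2:/data/brick
  gluster volume heal myvol full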

Thanks,

Tom


-- 
Kind regards,
Tom Cannaerts


*Service and Maintenance, Intracto - digital agency*

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users