Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Martin Toth
<https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command> > Let me know if you have any questions. > > -Krutika > > On Mon, May 13, 2019 at 12:34 PM Martin Toth <snowmai...@gmail.com> wrote: > Hi,
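For reference, profiling a volume as described in the linked guide is typically done with the commands below; the volume name is a placeholder, not something taken from this thread:

    gluster volume profile <VOLNAME> start    # begin collecting per-brick statistics
    gluster volume profile <VOLNAME> info     # show latency and fop counts gathered so far
    gluster volume profile <VOLNAME> stop     # stop profiling when done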

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Martin Toth
I have been looking for the problem for more than a month and have tried everything, but I can't find anything. Any more clues or leads? BR, Martin > On 13 May 2019, at 08:55, lemonni...@ulrar.net wrote: > > On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote: >> Hi all, > > Hi >
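The "blocked for more than 120 seconds" message comes from the kernel hung-task detector. A generic way to confirm it on the affected host (a diagnostic sketch, not advice given in this thread):

    dmesg | grep -i "blocked for more than"    # find the hung-task warnings and the tasks involved
    sysctl kernel.hung_task_timeout_secs       # the 120 s threshold that triggers the warning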

Re: [Gluster-users] Signing off from Gluster

2019-04-26 Thread Martin Toth
Thanks for everything. We will miss you! BR! > On 26 Apr 2019, at 17:25, Amye Scavarda wrote: > > It's been a delight to work with this community for the past few > years, and as of today, I'm stepping away for a new opportunity. Amar > Tumballi has already taken up several of the community

Re: [Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-20 Thread Martin Toth
17:44, Martin Toth wrote: > > Thanks for the clarification, one more question. > > When I recover (boot) the failed node and this peer becomes available > again to the remaining two nodes, how do I tell Gluster to mark this brick as > failed? > > I mean, I’ve boot

Re: [Gluster-users] Settings for VM hosting

2019-04-18 Thread Martin Toth
Hi, I am also curious about your setup and settings. I have exactly the same setup and use case. - Why do you use sharding on replica 3? Do you have bricks (disks) of various sizes per node? I wonder if someone will share settings for this setup. BR! > On 18 Apr 2019, at 09:27, lemonni...@ulrar.net
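As a rough sketch of the kind of VM-hosting settings being asked about here (commonly cited options, not the posters' confirmed configuration; the volume name is a placeholder):

    gluster volume set <VOLNAME> group virt                  # apply the packaged virt option group, if present on the node
    gluster volume set <VOLNAME> features.shard on           # split large VM images into shards so heals work per shard
    gluster volume set <VOLNAME> features.shard-block-size 64MB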

Re: [Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-16 Thread Martin Toth
Apr 2019, at 15:40, Karthik Subrahmanya wrote: > > > > On Thu, Apr 11, 2019 at 6:38 PM Martin Toth <snowmai...@gmail.com> wrote: > Hi Karthik, > >> On Thu, Apr 11, 2019 at 12:43 PM Martin Toth > <snowmai...@gmail.com> wrote: >

Re: [Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-11 Thread Martin Toth
Hi Karthik, > On Thu, Apr 11, 2019 at 12:43 PM Martin Toth <snowmai...@gmail.com> wrote: > Hi Karthik, > > moreover, I would like to ask if there are any recommended > settings/parameters for SHD in order to achieve good or fair I/O while the volume > will be
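A few self-heal related options that are often mentioned for keeping heal traffic from starving VM I/O; treat this as an illustrative sketch rather than a recommendation made in this thread, with a placeholder volume name:

    gluster volume set <VOLNAME> cluster.data-self-heal-algorithm full    # skip checksumming, often suggested for large VM images
    gluster volume set <VOLNAME> cluster.background-self-heal-count 8     # cap concurrent background heals (8 is the usual default)
    gluster volume get <VOLNAME> cluster.self-heal-window-size            # check how much data each heal iteration moves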

Re: [Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-11 Thread Martin Toth
disks became unresponsive because healing took most of the I/O. My volume contains only big files with VM disks. Thanks for the suggestions. BR, Martin > On 10 Apr 2019, at 12:38, Martin Toth wrote: > > Thanks, this looks OK to me, I will reset the brick because I don't have any data > anymore

Re: [Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-10 Thread Martin Toth
atically, you can manually start that by running gluster volume heal > <volname>. > > HTH, > Karthik > > On Wed, Apr 10, 2019 at 3:13 PM Martin Toth <snowmai...@gmail.com> wrote: > Hi all, > > I am running a replica 3 gluster volume with 3 bricks. One of my servers failed - all
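The heal commands referenced above, written out with a placeholder volume name:

    gluster volume heal <VOLNAME>         # manually trigger healing of files that need it
    gluster volume heal <VOLNAME> info    # list entries still pending heal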

Re: [Gluster-users] [External] Replica 3 - how to replace failed node (peer)

2019-04-10 Thread Martin Toth
<https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick> > > On Wed, Apr 10, 2019 at 11:42 AM Martin Toth <snowmai...@gmail.com> wrote: > Hi all, > >
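For completeness, the replace-brick procedure in the linked guide boils down to something like the following; host and path names are placeholders, not values from this thread:

    gluster volume replace-brick <VOLNAME> \
        <old-host>:/path/to/brick <new-host>:/path/to/brick \
        commit force
    gluster volume heal <VOLNAME> info    # watch data being healed onto the new brick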

[Gluster-users] Replica 3 - how to replace failed node (peer)

2019-04-10 Thread Martin Toth
Hi all, I am running a replica 3 gluster volume with 3 bricks. One of my servers failed - all disks are showing errors and the RAID is in a fault state.
    Type: Replicate
    Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a
    Status: Started
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1:
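To see how the rest of the cluster views the failed node, the usual status checks look like this (generic commands, not output from this thread):

    gluster peer status              # is the failed peer shown as disconnected?
    gluster volume status <VOLNAME>  # which brick processes are offline?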

Re: [Gluster-users] Gluster and bonding

2019-02-25 Thread Martin Toth
s the bond, the receive traffic > is redistributed among all active slaves in the bond by initiating ARP > Replies with the selected MAC address to each of the clients. The updelay > parameter (detailed below) must be set to a value equal to or greater than the > switch's forwarding delay

Re: [Gluster-users] Gluster and bonding

2019-02-25 Thread Martin Toth
Hi Alex, you have to use bond mode 4 (LACP - 802.3ad) in order to achieve redundancy of cables/ports/switches. I suppose this is what you want. BR, Martin > On 25 Feb 2019, at 11:43, Alex K wrote: > > Hi All, > > I was asking if it is possible to have the two separate cables connected to
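A minimal ifupdown sketch of such a mode 4 (802.3ad/LACP) bond on Debian/Ubuntu, assuming the ifenslave package is installed; interface names and the address are purely illustrative, and the switch ports must be configured for LACP as well:

    # /etc/network/interfaces (fragment)
    auto bond0
    iface bond0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad              # LACP aggregation across both links
        bond-miimon 100                # link monitoring interval in ms
        bond-lacp-rate 1               # fast LACPDU rate
        bond-xmit-hash-policy layer3+4 # spread flows across the slaves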

[Gluster-users] Self/Healing process after node maintenance

2019-01-22 Thread Martin Toth
Hi all, I just want to make sure I understand exactly how the self-healing process works, because I need to take one of my nodes down for maintenance. I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per node (ZFS pool). All nodes run Qemu VMs and the disks of the VMs are on Gluster
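Before and after taking a node down for maintenance, the usual checks are the following (generic commands with a placeholder volume name, not steps quoted from this thread):

    gluster volume heal <VOLNAME> info                    # should be empty before you stop the node
    gluster volume heal <VOLNAME> statistics heal-count   # watch pending heals drain after the node returns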

Re: [Gluster-users] Convert replica 2 to replica 2+1 arbiter

2018-02-25 Thread Martin Toth
Hi, It should be there, see https://review.gluster.org/#/c/14502/ BR, Martin > On 25 Feb 2018, at 15:52, Mitja Mihelič wrote: > > I must ask again, just to be sure. Is what you are proposing definitely > supported in v3.8? > >
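The conversion being discussed is done with add-brick; a hedged sketch with placeholder host and path names:

    gluster volume add-brick <VOLNAME> replica 3 arbiter 1 <arbiter-host>:/path/to/arbiter-brick
    gluster volume heal <VOLNAME> info    # the arbiter brick is populated with metadata via self-heal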

Re: [Gluster-users] How large the Arbiter node?

2017-12-11 Thread Martin Toth
Hi, there is a good suggestion here: http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing Since the arbiter brick does not store file data
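The linked page sizes the arbiter by file count rather than data size, on the order of a few KB per file. An illustrative calculation with assumed numbers (not figures from this thread):

    # roughly 4 KB of arbiter space per file, e.g. for 1 million files:
    1,000,000 files x 4 KB/file ≈ 4 GB minimum arbiter brick size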

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Martin Toth
@Amye +1 for this great idea, I am 100% for it. @Vijay for archiving purposes, maybe it would be possible to use a free service such as https://slackarchive.io/ BR, Martin > On 9 Nov 2017, at 00:09, Vijay Bellur wrote: > > > > On Wed, Nov 8, 2017 at

Re: [Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-10-01 Thread Martin Toth
Hi Diego, I’ve tried to upgrade and then extend gluster with a 3rd node in a VirtualBox test environment and everything went without problems. Sharding will not help me at this time, so I will consider upgrading from 1G to 10G networking before this procedure in production. That should lower downtime - the healing time of VM

Re: [Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-22 Thread Martin Toth
> be stopped for too long while a large image is healed. If you were already > using sharding you should be able to add the 3rd replica when VMs are running > without much issue. > > Once healing is completed and if you are satisfied with 3.12, then remember > to bump

[Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-21 Thread Martin Toth
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients
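The replica 2 to replica 3 step being asked about would, once all nodes are upgraded, look roughly like this (placeholder host and path names, a sketch rather than the poster's confirmed procedure):

    gluster volume add-brick <VOLNAME> replica 3 <node3>:/path/to/brick
    gluster volume heal <VOLNAME> info    # wait until the new brick is fully healed before relying on it for quorum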

[Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-20 Thread Martin Toth
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients

[Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-19 Thread Martin Toth
Hello all fellow GlusterFriends, I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has. Infrastructure setup: - all clients

[Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Martin Toth
Hello all, we are using your qemu packages to deploy qemu VMs on our gluster via gfapi. A recent upgrade broke our qemu and we are not able to deploy / start VMs anymore. Gluster is running OK, mounted with FUSE, everything looks OK; there is probably some problem with qemu while accessing
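A generic way to check whether qemu can still reach an image over libgfapi (assuming qemu is built with gluster support; host, volume and image names are placeholders, not taken from this report):

    qemu-img info gluster://<gluster-host>/<volume>/<image>.qcow2

If this fails while a FUSE mount of the same volume works, the problem is more likely in the qemu gfapi build than in Gluster itself.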