Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-25 Thread Hoggins!
Thank you. I just upgraded my nodes to 5.0.1 and everything seems to be running smoothly. Plus, the reconfigured cluster.data-self-heal=off option went away during the update, so I guess I'm back to nominal.     Hoggins! On 24/10/2018 at 13:57, Ravishankar N wrote: > > > > On 1
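For reference, the reset can be verified from any node with the gluster CLI; VOLNAME below is a placeholder for the actual volume name, so this is only a sketch of the check, not a command from the thread:
    # Show the current (and default) value of the option
    gluster volume get VOLNAME cluster.data-self-heal
    # If it still shows up as reconfigured, clear it back to the default
    gluster volume reset VOLNAME cluster.data-self-heal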

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Hoggins!
Thank you, it's working as expected. I guess it's only safe to put cluster.data-self-heal back on when I get an updated version of GlusterFS?     Hoggins! On 24/10/2018 at 11:53, Ravishankar N wrote: > > On 10/24/2018 02:38 PM, Hoggins! wrote: >> Thanks, that's helping a lo
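Once a fixed release is installed, the option can be turned back on with the usual volume-set command; VOLNAME is a placeholder, so treat this as a sketch rather than the exact command used here:
    # Re-enable data self-heal on the volume
    gluster volume set VOLNAME cluster.data-self-heal on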

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Hoggins!
Thanks, that's helping a lot, I will do that. One more question: should the glustershd restart be performed on the arbiter only, or on each node of the cluster? Thanks!     Hoggins! On 24/10/2018 at 02:55, Ravishankar N wrote: > > On 10/23/2018 10:01 PM, Hoggins! wrote: >
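A commonly suggested way to respawn the self-heal daemon everywhere is a forced volume start, which restarts any missing glustershd processes cluster-wide; restarting glusterd on a node has the same effect for that node. VOLNAME is a placeholder and this is a sketch, not the instruction given in the thread:
    # Run on any one node: respawns self-heal daemons across the cluster
    gluster volume start VOLNAME force
    # Or, per node: restarting glusterd also restarts that node's glustershd
    systemctl restart glusterd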

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-23 Thread Hoggins!
he arbiter into the cluster. And now it's intermittently hanging on writes *to existing files*. There is *no problem writing new files* on the volumes. I'm lost here, thanks for your input!     Hoggins! On 14/09/2018 at 04:16, Amar Tumballi wrote: > > > On Mon, Sep 3, 2018 at 3:41 PM, Sam McLeod

Re: [Gluster-users] "Solving" a recurrent "performing entry selfheal on [...]" on my bricks

2018-10-12 Thread Hoggins!
Well, it's been doing this for weeks, at least. I would hope that by now the healing of a simple file like this one would be over. Besides, the contents of the "cur" directory must also be under healing, but it takes so long that it's strange.     Hoggins! On 10/10/2018 at 07:05, Vl
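To see whether the backlog is actually shrinking, the pending-heal counters can be polled from any node; VOLNAME is a placeholder (a sketch, not part of the original exchange):
    # Entries still pending heal, listed per brick
    gluster volume heal VOLNAME info
    # Just the counts, handy for watching the trend over time
    gluster volume heal VOLNAME statistics heal-count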

[Gluster-users] "Solving" a recurrent "performing entry selfheal on [...]" on my bricks

2018-10-07 Thread Hoggins!
and needs to be sorted out, so what can I do? Thanks for your help!     Hoggins!

Re: [Gluster-users] Healing : No space left on device

2018-05-03 Thread Hoggins!
There are also free inodes on the disks of all the machines... I don't know where to look to solve this. Any idea? On 02/05/2018 at 12:39, Hoggins! wrote: > Oh, and *there is* space on the device where the brick's data is located. > >     /dev/mapper/fedora-home   942G    868G   74G  93
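Since "No space left on device" can come from exhausted inodes as well as exhausted blocks, both are worth checking on every brick filesystem; /export is taken from the df output quoted further down in this thread, adjust as needed:
    # Block usage on the brick filesystem
    df -h /export
    # Inode usage on the same filesystem; 100% IUse% also produces ENOSPC
    df -i /export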

Re: [Gluster-users] Healing : No space left on device

2018-05-02 Thread Hoggins!
Oh, and *there is* space on the device where the brick's data is located.     /dev/mapper/fedora-home   942G    868G   74G  93% /export On 02/05/2018 at 11:49, Hoggins! wrote: > Hello list, > > I have an issue on my Gluster cluster. It is composed of two data nodes > and an arbite

[Gluster-users] Healing : No space left on device

2018-05-02 Thread Hoggins!
ere, where should I start ?     Thanks for your help !         Hoggins!

Re: [Gluster-users] Replacing a third data node with an arbiter one

2018-01-29 Thread Hoggins!
Hello Ravi, On 29/01/2018 at 17:17, Ravishankar N wrote: > You need to find out why this is so. What does the arbiter brick log say? > Does gluster volume status show the brick as up and running? > -Ravi Yes it is: gluster volume status thedude Status of volume: thedude Gluster process   

Re: [Gluster-users] Replacing a third data node with an arbiter one

2018-01-29 Thread Hoggins!
nfs.disable: on performance.readdir-ahead: on client.event-threads: 8 server.event-threads: 15 ... I can see that the arbiter has been taken into account. So is it, or is it not ? How to ensure that ? Thanks !     Hoggins!

[Gluster-users] Replacing a third data node with an arbiter one

2018-01-24 Thread Hoggins!
ily: inet nfs.disable: on performance.readdir-ahead: on client.event-threads: 8 server.event-threads: 15 ... and I would like to replace, say ngluster-2 with an arbiter-only node, without any data. Is that possible ? How ? Thanks !     Hoggins!
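Assuming the volume is a plain replica 3, the usual way to swap a data brick for an arbiter is to reduce to replica 2 and then add the arbiter brick; hostnames and brick paths below are placeholders, so this is only a sketch of the general procedure:
    # Drop the data brick, leaving a plain replica 2 volume
    gluster volume remove-brick VOLNAME replica 2 ngluster-2:/path/to/brick force
    # Add an arbiter-only brick (an empty directory on the arbiter node)
    gluster volume add-brick VOLNAME replica 3 arbiter 1 ngluster-2:/path/to/arbiter-brick
    # Let self-heal populate the arbiter, then check the backlog
    gluster volume heal VOLNAME info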

[Gluster-users] How to make sure self-heal backlog is empty ?

2017-12-19 Thread Hoggins!
Status: Connected Number of entries: 11 Should I be worried about this never ending?     Thank you,         Hoggins!
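The backlog can be checked from any node; it is only really empty when every brick reports zero entries and nothing is in split-brain. VOLNAME is a placeholder (sketch only):
    # Entries still pending heal, per brick
    gluster volume heal VOLNAME info
    # Entries in split-brain, which never drain on their own
    gluster volume heal VOLNAME info split-brain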

[Gluster-users] (WAS : Re: Fedora upgrade to f24 installed 3.8.0 client and broke mounting)

2016-07-04 Thread Hoggins!
3.7.11 to 3.8.0) one by one. Am I going to break anything ? Thanks ! Hoggins!
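A hedged outline of the node-by-node rolling upgrade usually recommended for replicated volumes; package and service handling varies by distribution, so this is a sketch rather than the advice given in the thread:
    # On the node being upgraded: stop the management daemon and any remaining gluster processes
    systemctl stop glusterd
    pkill glusterfs
    # Upgrade the packages, then bring the node back
    systemctl start glusterd
    # Wait until the heal backlog is empty before moving on to the next node
    gluster volume heal VOLNAME info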

Re: [Gluster-users] High load / hang

2015-05-08 Thread Hoggins!
Well, that's funny, because the exact same thing happened to me this morning, except that I could hard reboot the machine, and it came back up and running normally again. But the symptoms you describe are oddly similar, and strangely simultaneous. On 08/05/2015 10:36, Alun James wrote: Hi folks,

Re: [Gluster-users] Unable to make HA work; mounts hang on remote node reboot

2015-04-07 Thread Hoggins!
Hello, On 07/04/2015 06:22, Joe Julian wrote: That's probably wrong. If you're doing a proper reboot, the services should be stopped before shutting down, which will do all the proper handshaking for shutting down a tcp connection. This allows the client to avoid the ping-timeout.
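The timeout being avoided here is the volume's network.ping-timeout (42 seconds by default). On recent releases it can be inspected and tuned per volume; VOLNAME is a placeholder and lowering the value is rarely a good idea:
    # Show the configured ping timeout
    gluster volume get VOLNAME network.ping-timeout
    # Change it only with good reason; very low values cause spurious disconnects
    gluster volume set VOLNAME network.ping-timeout 42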

[Gluster-users] remote operation failed: Operation not permitted

2014-09-11 Thread Hoggins!
Hello list, What would that kind of message mean ? [2014-09-10 21:49:15.360499] W [client-rpc-fops.c:1480:client3_3_fstat_cbk] 0-mailer-client-1: remote operation failed: Operation not permitted [2014-09-10 21:49:15.360780] W [client-rpc-fops.c:1480:client3_3_fstat_cbk] 0-mailer-client-0: remote

Re: [Gluster-users] Link latency ?

2014-07-15 Thread Hoggins!
The latency is quite high for synchronous replication. For geo-replication, this latency value is sustainable. -Vijay Hi Vijay, Do you think 15-20ms is OK for synchronous? Thanks. Hello, I would love to have that answer too... I'm now on a link with a latency below 9ms.

[Gluster-users] Link latency ?

2014-06-20 Thread Hoggins!
Hello folks, I have two servers that I would like to use as bricks for GlusterFS (replication). They are connected to each other via 1Gbps links, but with 40ms latency (continental links). Do you think such a configuration is sustainable ? Thanks ! Hoggins

Re: [Gluster-users] Link latency ?

2014-06-20 Thread Hoggins!
Okay, thanks. I just had the opportunity to use two distant datacenters for geographic failover. The expected delay between them (transatlantic) is 40ms. I will stick to the 2ms links. Cheers. On 20/06/2014 13:44, Vijay Bellur wrote: On 06/20/2014 02:34 PM, Hoggins! wrote: Hello folks, I

[Gluster-users] Preparing my future split-brain condition

2014-01-09 Thread Hoggins!
criteria. Is that possible ? That would solve my problem, because I know what happened on my bricks for a few days, but I fear that some of my files will be inaccessible due to the upcoming split-brain status. Do you have an idea to help me ? Thanks in advance ! Hoggins
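Later GlusterFS releases (3.7 and newer) added CLI policies for exactly this kind of per-file choice, so a winning copy can be picked without hand-editing xattrs; this assumes an upgrade beyond the version in use at the time, and VOLNAME and the file path are placeholders:
    # List files currently in split-brain
    gluster volume heal VOLNAME info split-brain
    # Keep the copy with the newest modification time
    gluster volume heal VOLNAME split-brain latest-mtime /path/inside/volume/file
    # Or keep whichever copy a chosen brick holds
    gluster volume heal VOLNAME split-brain source-brick server1:/brick/path /path/inside/volume/file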

Re: [Gluster-users] Shutting down a server with no service disruption and Xen instance errors

2013-12-03 Thread Hoggins!
Hello, Simply stop the GlusterFS services first on the brick node you intend to shut down, and everything will be fine. I also experienced issues when I rebooted a brick without stopping the services first. Hoggins! On 03/12/2013 14:53, Torbjørn Thorsen wrote: Hello. We've got a 2x2 volume
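On a systemd-based node the sequence would look roughly like this; unit names differ between packagings (some ship a separate glusterfsd unit that tears down the brick processes), so treat it as a sketch:
    # Stop the management daemon first
    systemctl stop glusterd
    # Stop the brick processes so clients get a clean TCP shutdown instead of waiting out the ping-timeout
    systemctl stop glusterfsd 2>/dev/null || pkill glusterfsd
    # The node can now be rebooted without clients hanging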

[Gluster-users] Solving healing and split-brain problems

2013-11-04 Thread Hoggins!
:04:54 gfid:bbc2bb09-db85-4069-b225-e70f33a8c649 Question is : how do I manage this issue ? I know how to handle it with regular files, but not with raw gfids. A quick howto would be the best. Thanks in advance. Hoggins!
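A gfid-only heal entry can usually be mapped back to a real path through the hard link kept under the brick's .glusterfs directory; the brick path below is a placeholder, the gfid is the one quoted above:
    # The gfid file lives under .glusterfs/<first two hex chars>/<next two>/<full gfid> on each brick
    ls -l /path/to/brick/.glusterfs/bb/c2/bbc2bb09-db85-4069-b225-e70f33a8c649
    # For regular files it is a hard link, so the real path shares the same inode
    find /path/to/brick -samefile /path/to/brick/.glusterfs/bb/c2/bbc2bb09-db85-4069-b225-e70f33a8c649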

Re: [Gluster-users] New to GlusterFS

2013-10-23 Thread Hoggins!
Hey guys, By the way, when I issue service glusterd stop and service glusterfsd stop, there are still gluster* processes running on the machine, even after a minute. Is that normal behavior, should I kill them directly, should I just wait for them to die peacefully, or is it safe to shutdown
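The leftovers are typically brick (glusterfsd), self-heal, and client (glusterfs) processes that stopping glusterd does not touch; they can be listed and, if the node really is going down, stopped explicitly (a sketch, not an official procedure):
    # See which gluster processes are still alive
    pgrep -af gluster
    # Brick processes first, then any remaining self-heal/client processes
    pkill glusterfsd
    pkill glusterfs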

Re: [Gluster-users] Shutting down a GlusterFS server.

2013-10-02 Thread Hoggins!
practice is simply to end the Gluster services before shutting down the server. Everything runs as smoothly as it should, and the clients don't notice any downtime. Hope this helps. Hoggins! On 02/10/2013 11:36, Bobby Jacob wrote: Hi, I have a 2-node replica volume running