Thank you.
I just upgraded my nodes to 5.0.1 and everything seems to be running
smoothly. Plus the cluster.data-self-heal=off reconfigured option has
gone away during the update, so I guess I'm back to nominal.
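For anyone checking the same thing after an upgrade, something along these
lines should confirm that the option is back to its default (<volname> is
just a placeholder):

    gluster volume info <volname>     # "Options Reconfigured" should no longer list cluster.data-self-heal
    gluster volume get <volname> cluster.data-self-heal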
Hoggins!
On 24/10/2018 at 13:57, Ravishankar N wrote:
Thank you, it's working as expected.
I guess it's only safe to put cluster.data-self-heal back on when I get
an updated version of GlusterFS?
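If it does become safe again, re-enabling it should just be a matter of
(untested sketch, <volname> is a placeholder):

    gluster volume set <volname> cluster.data-self-heal on
    # or drop the override entirely and fall back to the default:
    gluster volume reset <volname> cluster.data-self-heal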
Hoggins!
On 24/10/2018 at 11:53, Ravishankar N wrote:
Thanks, that's helping a lot, I will do that.
One more question: should the glustershd restart be performed on the
arbiter only, or on each node of the cluster?
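For the archives: one commonly documented way to respawn the self-heal
daemons is a forced volume start, which can be issued from any one node but
takes effect cluster-wide (a sketch, <volname> is a placeholder):

    gluster volume start <volname> force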
Thanks!
Hoggins!
On 24/10/2018 at 02:55, Ravishankar N wrote:
he arbiter into the cluster
And now it's intermittently hanging when writing *to existing files*.
There is *no problem writing new files* on the volumes.
I'm lost here, thanks for your inputs!
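In case it helps to narrow things down, the usual first checks would be
whether all bricks are online and whether heals are piling up (a sketch,
<volname> is a placeholder):

    gluster volume status <volname>
    gluster volume heal <volname> info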
Hoggins!
On 14/09/2018 at 04:16, Amar Tumballi wrote:
Well,
It's been doing this for weeks, at least. I would have hoped that by now the
healing of a simple file like this one would be over.
Besides, the contents of the "cur" directory must also be under healing,
but it takes so long that it's strange.
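One way to see whether the heal is actually making progress is to watch the
pending-heal counters over time (a sketch, <volname> is a placeholder):

    gluster volume heal <volname> statistics heal-count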
Hoggins!
On 10/10/2018 at 07:05, Vl
and needs to be sorted
out, so what can I do?
Thanks for your help!
Hoggins!
There are also free inodes on the disks of all the machines... I don't
know where to look to solve this. Any idea?
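For reference, comparing block and inode usage on each brick filesystem looks
like this (using the /export mount point mentioned below):

    df -h /export     # block usage
    df -i /export     # inode usage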
On 02/05/2018 at 12:39, Hoggins! wrote:
Oh, and *there is* space on the device where the brick's data is located.
/dev/mapper/fedora-home 942G 868G 74G 93% /export
On 02/05/2018 at 11:49, Hoggins! wrote:
> Hello list,
>
> I have an issue on my Gluster cluster. It is composed of two data nodes
> and an arbiter
ere, where should I start?
Thanks for your help!
Hoggins!
Hello Ravi,
On 29/01/2018 at 17:17, Ravishankar N wrote:
> You need to find why is this so. What does the arbiter brick log say?
> Does gluster volume status show the brick as up and running?
> -Ravi
Yes it is:
gluster volume status thedude
Status of volume: thedude
Gluster process
nfs.disable: on
performance.readdir-ahead: on
client.event-threads: 8
server.event-threads: 15
... I can see that the arbiter has been taken into account.
So is it, or is it not? How can I ensure that?
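One way to double-check (a sketch, not verified here): both the brick count
and the Online column should reflect the arbiter.

    gluster volume info thedude       # a replica 2 + arbiter setup shows "Number of Bricks: 1 x (2 + 1) = 3"
    gluster volume status thedude     # the arbiter brick should be listed with Online = Y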
Thanks!
Hoggins!
transport.address-family: inet
nfs.disable: on
performance.readdir-ahead: on
client.event-threads: 8
server.event-threads: 15
... and I would like to replace, say, ngluster-2 with an arbiter-only
node, without any data. Is that possible? How?
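For what it's worth, the sequence usually described for this kind of change
(a rough sketch only: it assumes a plain replica 3 being converted to
replica 2 + arbiter, the brick paths are placeholders, and the arbiter needs
a fresh empty brick) is to drop the data brick and add the node back as an
arbiter:

    gluster volume remove-brick <volname> replica 2 ngluster-2:/path/to/old-brick force
    gluster volume add-brick <volname> replica 3 arbiter 1 ngluster-2:/path/to/new-arbiter-brick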
Thanks!
Hoggins!
Status: Connected
Number of entries: 11
Should I be worried about this never ending?
Thank you,
Hoggins!
3.7.11 to 3.8.0) one by one.
Am I going to break anything?
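For reference, the rolling upgrade procedure usually described for replicated
volumes is, one node at a time (a sketch; service names vary by distribution,
and <volname> is a placeholder):

    service glusterd stop
    service glusterfsd stop
    killall glusterfs glusterfsd glusterd     # in case anything is still lingering
    # upgrade the packages, then:
    service glusterd start
    gluster volume heal <volname> info        # wait until this drops to 0 entries before the next node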
Thanks!
Hoggins!
Well, that's funny, because the exact same thing happened to me this
morning, except that I could hard reboot the machine, and it got up and
running normally again.
But the symptoms you describe are oddly similar, and strangely simultaneous.
On 08/05/2015 at 10:36, Alun James wrote:
Hi folks,
Hello,
On 07/04/2015 at 06:22, Joe Julian wrote:
That's probably wrong. If you're doing a proper reboot, the services
should be stopped before shutting down, which will do all the proper
handshaking for shutting down a tcp connection. This allows the client
to avoid the ping-timeout.
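For reference, the timeout being avoided here is the volume's
network.ping-timeout option, 42 seconds by default; it can be inspected or
tuned per volume (a sketch, <volname> is a placeholder):

    gluster volume set <volname> network.ping-timeout 42
    gluster volume info <volname>     # reconfigured options show up here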
Hello list,
What would that kind of message mean?
[2014-09-10 21:49:15.360499] W
[client-rpc-fops.c:1480:client3_3_fstat_cbk] 0-mailer-client-1: remote
operation failed: Operation not permitted
[2014-09-10 21:49:15.360780] W
[client-rpc-fops.c:1480:client3_3_fstat_cbk] 0-mailer-client-0: remote
operation failed: Operation not permitted
The latency is quite high for synchronous replication. For
geo-replication, this latency value is sustainable.
-Vijay
Hi Vijay,
Do you think 15-20ms is OK for synchronous?
Thanks.
Hello,
I would love to have that answer too... I'm now on a link with a latency
below 9ms.
Hello folks,
I have two servers that I would like to use as bricks for GlusterFS
(replication).
They are connected to each other via 1Gbps links, but with 40ms latency
(continental links).
Do you think such a configuration is sustainable?
Thanks!
Hoggins
Okay, thanks.
I just had the opportunity to use two distant datacenters for geographic
failover. The expected delay between them (transatlantic) is 40ms.
I will stick to the 2ms links.
Cheers.
On 20/06/2014 at 13:44, Vijay Bellur wrote:
criteria. Is
that possible? That would solve my problem, because I know what
happened on my bricks for a few days, but I fear that some of my files
will be inaccessible due to the upcoming split-brain status.
Do you have an idea to help me?
Thanks in advance!
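In case someone finds this in the archives later: newer GlusterFS releases
can resolve split-brain from the CLI by picking one copy according to a
criterion such as the latest mtime, roughly like this (a sketch; <volname>
and the file path are placeholders, and it requires a release that ships
these commands):

    gluster volume heal <volname> split-brain latest-mtime /path/within/volume
    # or set a policy so future split-brains are resolved automatically:
    gluster volume set <volname> cluster.favorite-child-policy mtime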
Hoggins
Hello,
Simply stop the GlusterFS services first on the brick you intend to
shut down, and everything will be fine.
I also experienced issues when I rebooted a brick without stopping the
services first.
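On sysvinit-style setups from that era, the sequence boils down to roughly
this (a sketch, not the exact commands used here):

    service glusterd stop
    service glusterfsd stop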
Hoggins!
On 03/12/2013 at 14:53, Torbjørn Thorsen wrote:
Hello.
We've got a 2x2 volume
:04:54 gfid:bbc2bb09-db85-4069-b225-e70f33a8c649
Question is: how do I manage this issue? I know how to handle it with
regular files, but not with raw gfids.
A quick howto would be best.
Thanks in advance.
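In case it helps, the trick usually suggested for mapping a raw gfid back to
a real path uses the hard link kept under .glusterfs on each brick (a sketch;
the brick path is a placeholder, the gfid is the one from above):

    ls -l /path/to/brick/.glusterfs/bb/c2/bbc2bb09-db85-4069-b225-e70f33a8c649
    # for a regular file, the other hard link reveals the real name:
    find /path/to/brick -samefile \
        /path/to/brick/.glusterfs/bb/c2/bbc2bb09-db85-4069-b225-e70f33a8c649 \
        -not -path '*/.glusterfs/*'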
Hoggins!
Hey guys,
By the way, when I issue service glusterd stop and service glusterfsd
stop, there are still gluster* processes running on the machine, even
after a minute.
Is that normal behavior? Should I kill them directly, should I just
wait for them to die peacefully, or is it safe to shut down
practice is simply to end the Gluster services before
shutting down the server. Everything runs as smoothly as it should, and
the clients don't notice any downtime.
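If there are still gluster* processes hanging around after stopping the
services (as mentioned above), a rough way to see what is left and clean it
up before powering off would be (a sketch, not specific advice for this
setup):

    pgrep -af gluster                         # or: ps aux | grep gluster
    killall glusterfs glusterfsd glusterd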
Hope this helps.
Hoggins!
On 02/10/2013 at 11:36, Bobby Jacob wrote:
Hi,
I have a 2-node replica volume running