Hi Sander,
Sorry for not getting back to you.
I guess that if you are not using quota, you do not need to run the scripts.
I do not have any experience with changing the op-version on a running glusterfs
cluster, but looking at some threads, it should be possible to change it on a
running cluster.
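For what it's worth, on recent releases the op-version can be inspected and bumped cluster-wide with the commands below. This is only a sketch: the numeric value 30712 is a placeholder, and you should use the op-version matching your installed Gluster release (cluster.max-op-version is only available on newer versions).

```shell
# Show the current cluster-wide op-version
gluster volume get all cluster.op-version

# On newer releases, show the highest op-version this cluster can support
gluster volume get all cluster.max-op-version

# Raise the op-version cluster-wide (30712 is a placeholder value;
# pick the one matching your installed release)
gluster volume set all cluster.op-version 30712
```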
Hi Pavel,
killing the brick process is the way to go.
This way, all other bricks on that server will keep working.
After you replace/fix the disk,
a restart of the glusterd process should be enough to get the brick
back online. (The self-healing scan can take some IO.)
Do you have
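A rough sketch of that workflow, with "myvol" as a placeholder volume name and the brick PID taken from the status output:

```shell
# Find the PID of the brick process backing the failed disk
gluster volume status myvol

# Kill only that brick process; the other bricks on the node keep serving
kill <brick-pid>

# ... replace/fix the disk and remount it at the brick path ...

# Restart glusterd to bring the brick back online
systemctl restart glusterd    # or: service glusterd restart

# Watch self-heal progress (this scan can generate some IO)
gluster volume heal myvol info
```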
Hi Jiri,
yes, the glusterd restart started the brick .. there is only a few seconds'
delay, so I was confused the first time.
btw the server was running for some months without a problem .. and now
the xfs partition (only this one) had a problem .. no corruption, but
after using it for a few minutes, it
Jiri,
I updated the op-version online yesterday without any problems, so I hope to
migrate my old bricks to the new ones tomorrow night without hassle, using
the remove-brick command once all the new bricks are added.
My new bricks are smaller than the current ones but higher in number, so
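The add-then-drain migration described above might look roughly like this; all server and brick paths are placeholders, and remove-brick must be allowed to finish before committing:

```shell
# Add the new (smaller but more numerous) bricks first
gluster volume add-brick myvol server1:/new/brick1 server2:/new/brick2

# Start draining data off an old brick
gluster volume remove-brick myvol server1:/old/brick1 start

# Poll until the drain reports "completed"
gluster volume remove-brick myvol server1:/old/brick1 status

# Only then commit, which actually removes the brick from the volume
gluster volume remove-brick myvol server1:/old/brick1 commit
```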
Hello folks,
Our hangout session has concluded [1], and I expect we will do another one next
month from the US, which will hopefully have better interactivity.
In the meantime, below [2] is the gluster volume info display we are
considering for tiered volumes. Let us know any feedback on how it
On 16/04/15 19:46, Nikolai Grigoriev wrote:
Hi,
I am new to gluster and would appreciate if someone could help me to
understand what may be wrong.
We have a small filesystem (currently - just one brick) and on the
same client node I have two processes. One is writing files to a
specific
Hi,
I am new to gluster and would appreciate if someone could help me to
understand what may be wrong.
We have a small filesystem (currently - just one brick) and on the same
client node I have two processes. One is writing files to a specific
glusterfs share and another one is periodically
I appreciate the info. I have tried adjusting the ping-timeout setting, and it
seems to have no effect. The whole system hangs for 45+ seconds, which is about
how long it takes the second node to reboot, no matter what the value of
ping-timeout is. The output of the mnt-log is below. It shows
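For reference, the default network.ping-timeout is 42 seconds, which lines up with the ~45 second hang described above. The option is inspected and changed per volume; "myvol" is a placeholder name:

```shell
# Show the current value (the default of 42 s matches the observed hang)
gluster volume get myvol network.ping-timeout

# Lower it, e.g. to 10 seconds
gluster volume set myvol network.ping-timeout 10
```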
On Apr 17, 2015 01:17, Alex Crow ac...@integrafin.co.uk wrote:
On 16/04/15 19:46, Nikolai Grigoriev wrote: