This is excellent. Thank you!
I will test it ASAP.
Kind regards,
Mitja
On 25/02/2018 15:59, Martin Toth wrote:
Hi,
It should be there, see https://review.gluster.org/#/c/14502/
BR,
Martin
On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihe...@arnes.si> wrote:
(?) or not until v4, when a change in the command will happen so that it
won't count the arbiter as a replica.
On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič"
<mitja.mihe...@arnes.si> wrote:
Hi!
I am using GlusterFS on CentOS7 with glusterfs-3.8.15 RPM version.
I currently have a replica 2 running and I would like to get rid of the
split-brain problem before it occurs. This is one of the possible solutions.
Is it possible to add an arbiter to this volume?
I have read in a thread
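If the feature is present in your release, converting a replica 2 volume to an arbiter setup is typically a single add-brick call. A minimal sketch, assuming a third server and a placeholder brick path (VOLUME, server3, and the path are illustrative, not from the thread):

```shell
# Add a third brick as arbiter, turning replica 2 into replica 3 arbiter 1
gluster volume add-brick VOLUME replica 3 arbiter 1 \
    server3:/gluster/VOLUME/arbiter/brick

# Watch self-heal populate the arbiter brick's metadata
gluster volume heal VOLUME info
```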
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME replica 2 \
    server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added
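The add-brick step that presumably followed (truncated above) would look something like this; the paths mirror the thread's remove-brick example, and the exact flags are an assumption:

```shell
# Re-add the brick on the freshly mounted disk, restoring replica 3
gluster volume add-brick VOLUME replica 3 \
    server2:/gluster/VOLUME/brick0/brick force

# Trigger a full self-heal so the empty brick gets repopulated
gluster volume heal VOLUME full
```

As an aside, `gluster volume replace-brick VOLUME <old-brick> <new-brick> commit force` performs the swap in one step and avoids temporarily lowering the replica count.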
50 constantly.
I will try to get the results from the production setup.
Regards, Mitja
The 4th command shows which operations are being issued most often.
Pranith
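For reference, the profiling commands in question were presumably along these lines (a sketch; the original commands were not quoted in full):

```shell
gluster volume profile VOLUME start   # begin collecting per-brick statistics
# ... reproduce the workload ...
gluster volume profile VOLUME info    # per-FOP call counts and latencies
gluster volume profile VOLUME stop
```

The call counts in the `info` output identify the operations issued most frequently.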
On 06/01/2015 04:41 PM, Mitja Mihelič wrote:
Hi!
I am trying to set up a WordPress cluster using GlusterFS for
storage. Web
: on
cluster.eager-lock: on
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.cache-refresh-timeout: 4
performance.io-thread-count: 32
nfs.disable: on
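The options above can be applied to a volume one by one with `gluster volume set`; a sketch, with VOLUME as a placeholder name:

```shell
# Apply the tuning options quoted above (VOLUME is a placeholder)
for opt in \
    "cluster.eager-lock on" \
    "performance.stat-prefetch off" \
    "performance.io-cache off" \
    "performance.read-ahead off" \
    "performance.quick-read off" \
    "performance.cache-refresh-timeout 4" \
    "performance.io-thread-count 32" \
    "nfs.disable on"
do
    gluster volume set VOLUME $opt
done
```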
Regards, Mitja
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana
Hi!
I have a TSP set up with 3 servers/peers.
Can other Gluster servers become members of the existing TSP by
executing peer probe on their end?
I would not wish to see someone setting up their own server and adding
themselves to the existing TSP.
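For what it's worth, peer probes only succeed in one direction: a server already in the pool can probe a new one, while a probe from an outside host against a pool member is rejected. A sketch (hostnames are placeholders):

```shell
# Run on an existing pool member: adds new-server to the TSP
gluster peer probe new-server.example.com

# Run on an outside host against a pool member: rejected, because the
# target already belongs to a cluster that does not know the caller
gluster peer probe s-node-1
```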
Kind regards, Mitja
--
Mitja Mihelič
On 10. 03. 2015 14:52, Jeff Darcy wrote:
I would like to set up server-side quorum using the following setup:
- 2x storage nodes (s-node-1, s-node-2)
- 1x arbiter node (s-node-3)
So the trusted storage pool has three peers.
This is my volume info:
Volume Name: wp-vol-0
Type: Replicate
Volume
On 11. 03. 2015 16:05, Jeff Darcy wrote:
I have a follow-up question.
When a node is disconnected from the rest, the client gets an error
message Transport endpoint is not connected and all access is
prevented. Write access must not be allowed to such a node. I understand
that.
In my case it
# gluster volume set wp-vol-0 cluster.server-quorum-ratio 60
But the cluster.server-quorum-ratio option produces an error:
volume set: failed: Not a valid option for single volume
How would I achieve the desired setup?
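cluster.server-quorum-ratio is a cluster-wide option, which is why setting it on a single volume fails; it has to be set for `all`. A sketch using the volume name from above:

```shell
# Global: applies to every volume in the trusted storage pool
gluster volume set all cluster.server-quorum-ratio 60

# Per volume: turn server-side quorum enforcement on
gluster volume set wp-vol-0 cluster.server-quorum-type server
```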
Kind regards,
Mitja
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001
-volume
What am I missing?
Regards, Mitja
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78
On 10. 12. 2014 17:55, Mitja Mihelič wrote:
Per your suggestion I tried this:
env -i LC_NUMERIC=en_US.UTF-8 mount -t glusterfs -o
off.
I looked into it with strace, and here are the results.
For CentOS6: http://pastebin.com/vcqTh2Hi
For CentOS7: http://pastebin.com/s7MuTbXb
What could be the problem?
Kind regards,
Mitja
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877
Per your suggestion I tried this:
env -i LC_NUMERIC=en_US.UTF-8 mount -t glusterfs -o transport=tcp
GLUSTER-1.NAME.SI://wp-vol-1 /mnt/volume-1
And it works.
Mounting volumes via fstab also works.
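An fstab entry for such a mount typically looks like the following (a sketch reusing the thread's server and volume names; the option choices are an assumption):

```
# /etc/fstab -- _netdev delays the mount until networking is up
GLUSTER-1.NAME.SI:/wp-vol-1  /mnt/volume-1  glusterfs  defaults,_netdev,transport=tcp  0 0
```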
Regards, Mitja
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana