Hi

With only two nodes, it's recommended to set
cluster.server-quorum-type=server and cluster.server-quorum-ratio=51% (i.e.
anything more than 50%).
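
For example, something along these lines should apply it (a sketch only,
assuming the volume name gv0 from your output; cluster.server-quorum-ratio is
a cluster-wide option, so it is set on "all"):

    # Set the cluster-wide quorum ratio for the trusted storage pool
    gluster volume set all cluster.server-quorum-ratio 51%
    # Enable server-side quorum enforcement for this volume
    gluster volume set gv0 cluster.server-quorum-type server

Keep in mind that with only two nodes and a ratio above 50%, losing either
node will also take the surviving node's bricks offline until quorum is
restored, which trades some availability for protection against split-brain.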

On Mon, Jul 31, 2017 at 4:12 AM, Seva Gluschenko <g...@webkontrol.ru> wrote:

> Hi folks,
>
>
> I'm running a simple gluster setup with a single volume replicated at two
> servers, as follows:
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: sst0:/var/glusterfs
> Brick2: sst2:/var/glusterfs
> Options Reconfigured:
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: on
> nfs.disable: on
> transport.address-family: inet
>
> This volume is used to store data in high-load production, and recently I
> faced two major problems that made the whole idea of using gluster quite
> questionable, so I would like to ask the gluster developers and/or call on
> community wisdom in the hope that I might be missing something. The problem
> is that when one of the replica servers hung, it caused the whole glusterfs
> volume to hang. Could you please drop me a hint: is this expected behaviour,
> or are there any server or volume settings that could be tweaked to change
> it? Any help would be much appreciated.
>
>
> --
> Best Regards,
>
> Seva Gluschenko
> CTO @ http://webkontrol.ru
>
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
