So I am experimenting with shards using a couple of VMs, and decided to test his scenario (i.e. only one node available on a simple 2-node + 1-arbiter replicated/sharded volume, using Gluster 3.10.1 on CentOS 7.3).

I set up a VM testbed, verified that everything (including the sharding) works, and then shut down nodes 2 and 3 (the arbiter).

As expected I got a quorum error on the mount.

So I tried:

gluster volume set VOL cluster.quorum-type none

from the remaining 'working' node1, and it simply responds with:

"volume set: failed: Quorum not met. Volume operation not allowed"

How do you FORCE gluster to ignore the quorum in such a situation?

I tried stopping the volume and even rebooting node1, and I still get the error (and of course the volume won't start again, for the same reason).
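
For reference, my (possibly wrong) understanding is that the "Quorum not met. Volume operation not allowed" message comes from glusterd's server-side quorum, which is a separate knob from the client-side cluster.quorum-type option I was setting. Assuming the volume is named VOL, something like the following is what I would expect to check and change; whether glusterd will even accept a "volume set" while its own quorum is lost is exactly what I can't figure out:

    # check whether server-side quorum is enforced on this volume
    gluster volume get VOL cluster.server-quorum-type

    # if it reports "server", this is the option that gates volume operations
    # (cluster.server-quorum-ratio, set on "all", controls the required percentage)
    gluster volume set VOL cluster.server-quorum-type none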

-WK


On 5/18/2017 7:41 AM, Ravishankar N wrote:
On 05/18/2017 07:18 PM, lemonni...@ulrar.net wrote:
Hi,


We are having huge hardware issues (oh joy ..) with RAID cards.
On a replica 3 volume, we have 2 nodes down. Can we somehow tell
gluster that its quorum is 1, to get some amount of service back
while we try to fix the other nodes or install new ones?
If you know what you are getting into, then `gluster v set <volname> cluster.quorum-type none` should give you the desired result, i.e. allow write access to the volume.
Thanks


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




