On 05/22/2017 11:02 AM, WK wrote:
On 5/21/2017 7:00 PM, Ravishankar N wrote:
On 05/22/2017 03:11 AM, W Kern wrote:
gluster volume set VOL cluster.quorum-type none
from the remaining 'working' node1 and it simply responds with
"volume set: failed: Quorum not met. Volume operation not allowed"
>
> Great, that worked. ie gluster volume set VOL
> cluster.server-quorum-type none
>
> Although I did get an error of "Volume set: failed: Commit failed on
> localhost, please check the log files for more details"
>
> but then I noticed that volume immediately came back up and I was able
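For anyone landing on this thread from a search: putting the pieces of the exchange together, the recovery sequence appears to have been the following (VOL stands in for the actual volume name; only do this if you accept the split-brain risk of writing with a single replica up):

```shell
# Server-side quorum: let glusterd on the lone surviving node accept
# volume operations and (re)start its bricks.
gluster volume set VOL cluster.server-quorum-type none

# Client-side quorum: allow writes with only one replica reachable.
gluster volume set VOL cluster.quorum-type none

# Verify the bricks came back online.
gluster volume status VOL
```

The "Commit failed on localhost" message above seems to have been harmless in this case, since the volume came back up immediately afterwards.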
On 5/21/2017 7:00 PM, Ravishankar N wrote:
On 05/22/2017 03:11 AM, W Kern wrote:
gluster volume set VOL cluster.quorum-type none
from the remaining 'working' node1 and it simply responds with
"volume set: failed: Quorum not met. Volume operation not allowed"
how do you FORCE gluster to ign
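Not from the thread, but worth adding for completeness: once the failed peers are repaired and back in the pool, the defaults should be restored so quorum protection is active again. A sketch (VOL is a placeholder):

```shell
# Re-enable client quorum (a majority of replicas must be up for writes).
gluster volume set VOL cluster.quorum-type auto

# Re-enable server-side quorum enforcement among the glusterd daemons.
gluster volume set VOL cluster.server-quorum-type server

# Kick off and watch self-heal so the returning bricks catch up.
gluster volume heal VOL
gluster volume heal VOL info
```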
On 05/22/2017 03:11 AM, W Kern wrote:
So I am experimenting with shards using a couple VMs and decided to test
this scenario (i.e. only one node available on a simple 2 node + 1
arbiter replicated/sharded volume using 3.10.1 on Cent7.3)
I setup a VM testbed. Then verified everything including the sharding
works and then shutdown node
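For anyone wanting to reproduce this testbed, a sketch of the volume setup (hostnames and brick paths here are made up; the arbiter syntax is available in 3.10):

```shell
# Two data bricks plus one arbiter brick, with sharding enabled.
gluster volume create testvol replica 3 arbiter 1 \
    node1:/bricks/testvol node2:/bricks/testvol arb1:/bricks/testvol
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 64MB
gluster volume start testvol
```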
> If you know what you are getting into, then `gluster v set
> cluster.quorum-type none` should give you the desired result, i.e. allow
> write access to the volume.
Thanks a lot! We won't be needing it now, but I'll write that in the wiki
just in case.
We realised that the problem was the Ca
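A side note (my addition, not from the thread): before flipping quorum options it is worth checking what the volume currently has set, e.g.:

```shell
# Show the current quorum-related settings for the volume.
gluster volume get VOL cluster.quorum-type
gluster volume get VOL cluster.server-quorum-type

# Full option and brick overview.
gluster volume info VOL
```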
On 05/18/2017 07:18 PM, lemonni...@ulrar.net wrote:
Hi,
We are having huge hardware issues (oh joy ..) with RAID cards.
On a replica 3 volume, we have 2 nodes down. Can we somehow tell
gluster that its quorum is 1, to get some amount of service back
while we try to fix the other nodes or install new ones?
Thanks