2013/3/21 Fabio M. Di Nitto <fdini...@redhat.com>
> On 3/21/2013 12:18 PM, eXeC001er wrote:
> > 2013/3/20 Fabio M. Di Nitto <fdini...@redhat.com>
> > > On 3/20/2013 6:26 PM, eXeC001er wrote:
> > > > 2013/3/20 Fabio M. Di Nitto <fdini...@redhat.com>
> > > > > On 3/20/2013 5:27 PM, eXeC001er wrote:
> > > > > > The first Q:
> > > > > >
> > > > > > According to the tests that are part of the corosync sources, I
> > > > > > think that I can:
> > > > > >
> > > > > > 1. create a daemon that registers a QDEVICE and notifies corosync
> > > > > > about the device (votequorum_qdevice_poll()).
> > > > > >
> > > > > > 2. implement master/slave logic; if the qdevice on a node wins,
> > > > > > I call votequorum_qdevice_master_wins() on that node, and corosync
> > > > > > notifies the other nodes about it, so I can say that the node is
> > > > > > MASTER.
> > > > >
> > > > > There is no votequorum_qdevice_master_wins() call... where did you
> > > > > find it?
> > > >
> > > > I am researching the corosync-2.3.0 sources.
> > >
> > > whoops.. I wrote it and forgot about it..... getting old is bad :)
> > >
> > > No, it's a bit more complicated than that:
> > >
> > > - corosync starts and loads the config
> > > - later on qdeviced starts and reads the config
> > > - qdeviced detects that it has to run in master_wins config:
> > >   call votequorum_qdevice_master_wins(..., 1);
> > > - that call sets a flag for the node and makes sure that the feature
> > >   is enabled internally.
> >
> > I thought about a different scenario:
> >
> > master_wins and cast_vote are different flags and they are used in
> > different cases.
> >
> > 1. Only "cast_vote" is used; the flag can be used to decide that on a
> > node everything is fine and the node is a member of the cluster (the
> > cluster does not have master/slave).
> >
> > For example, in a cluster (3 nodes) I have several qdevices on each
> > node: a storage qdevice and a client-network qdevice.
>
> The API does not support multiple qdevices. This kind of implementation,
> where you need to poll multiple targets, can be multiplexed/proxied by
> the votequorum consumer.
>
> A qdevice does not need to know how much to vote. That's votequorum's
> problem. How the qdevice implementation handles internal voting/scoring
> is the qdevice implementation's problem.
>
> > config:
> > - each qdevice has 2 votes
> > - each node has 1 vote
> > - expected votes = 7 votes (1 own vote + 1 vote from each of the other
> >   nodes + 2 votes from each qdevice)
>
> ^^^^ that won't work. votequorum accepts only one value for qdevice
> votes.
>
> > 2. Both "master_wins" and "cast_vote" are used.
> >
> > In this case "cast_vote" works as in case 1 and "master_wins" controls
> > master/slave.
>
> The value of master_wins and its status need to be propagated to allow
> the following:
>
> node1 node2 node3 node4
>
> The qdevice is master on node3, for example.
>
> In case of a 50%/50% split you have:
>
> node1 node2 <- not quorate
>
> > Please correct me if I am wrong.
> >
> > At the start the qdevice on each node sets "master_wins=1" in corosync;
> > each node is a member and node3 is master according to the decision of
> > the qdevice.
> >
> > so:
> > - node1, node2 and node4 have 4 votes
> > - node3 has 5 votes
> >
> > At some point we have a 50%/50% split:
> >
> > node1 + node2 AND node3 + node4
> >
> > so:
> > - node1 and node2 have 2 votes
> > - node4 has 2 votes
> > - node3 has 3 votes
> >
> > The quorum is 3 votes.
> > I see the following condition:
> >
> >     if ((qdevice_master_wins) &&
> >         (!quorate) &&
> >         (check_qdevice_master() == 1)) {
> >         log_printf(LOGSYS_LEVEL_DEBUG,
> >                    "node is quorate as part of master_wins partition");
> >         quorate = 1;
> >     }
> >
> > and
> >
> >     static int check_qdevice_master(void)
> >     {
> >         struct cluster_node *node = NULL;
> >         struct list_head *tmp;
> >         int found = 0;
> >
> >         ENTER();
> >
> >         list_iterate(tmp, &cluster_members_list) {
> >             node = list_entry(tmp, struct cluster_node, list);
> >             if ((node->state == NODESTATE_MEMBER) &&
> >                 (node->flags & NODE_FLAGS_QDEVICE_MASTER_WINS) &&
> >                 (node->flags & NODE_FLAGS_QDEVICE_CAST_VOTE)) {
> >                 found = 1;
> >             }
> >         }
> >
> >         LEAVE();
> >         return found;
> >     }
> >
> > and I cannot understand how node4 is part of the master_wins partition.
>
> node3 and node4 are in the same partition.
>
> node3 is master and has the votes from the qdevice. That information is
> known to all cluster members.
>
> node4 will iterate over the list of active nodes and will find node3 ->
> master_wins is set -> and the qdevice is casting a vote. That determines
> that node4 is part of the "master_wins partition".
>
> From node4's perspective, node1 and node2 don't have NODESTATE_MEMBER
> (first hit), and the qdevice is not casting votes (slaves).
Now everything is understood. Big thanks.

> Fabio
> _______________________________________________
> Openais mailing list
> Openais@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/openais