>> Agree. So far I haven't created any ordering constraints because the
>> order of starting services isn't important to me, YET. However I have
>> a question... if I don't have any ordering constraints at all, am I
>> still able to activate resources no matter the order?
>
> Sort of, but not exactly.
>
> With a colocation constraint "A with B", the cluster must assign B to
> a node before it can place A. B does not have to be started, but it
> does have to be *able* to be started, in order to be assigned to a
> node. So if something prevents B from being started (disabled in
> config, not allowed on any online node, etc.), it will not be assigned
> to a node, and A will not run.
>
> That doesn't mean that B will be started first, though. If the cluster
> needs to start both A and B, it can start them in any order. With an
> ordering constraint "B then A", B must be started first, and the start
> must complete successfully, before A can be started.

OK, cool, it's a little clearer now.
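Just to check my understanding, with pcs the two cases would look
something like this (a sketch with placeholder resource names A and B,
not my real config):

  # Colocation: A is placed on the same node as B; B only has to be
  # *assignable* to a node, not necessarily started first
  pcs constraint colocation add A with B

  # Ordering: B's start must complete successfully before A may start
  pcs constraint order B then A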
>>> The above constraint as currently worded will have no effect. It
>>> says that clusterdataClone must be located on either nodo1 or nodo2.
>>> Since those are your only nodes, it doesn't really constrain
>>> anything.
>>
>> Ok, the last command (location with rule) was created to allow
>> clusterdataClone to start on both nodes, because without this rule
>> the resource was always in "stopped" status on both nodes. Once I
>> added this rule my clusterdataClone resource started automatically,
>> but I don't understand why it chose one node to run as Master and the
>> other one as Slave. Is it random?
>
> I don't know why the resource would be stopped without this
> constraint. Maybe you have an opt-in cluster? But in that case you can
> use a normal location constraint, you don't need a rule.

Yes, it's an asymmetric (opt-in) cluster. I don't remember if I tried
before with a normal location constraint without rules, but I'll try
that again.

> It will choose one as master and one as slave because you have
> master-max=1. The choice, as with everything else in Pacemaker, is
> based on scores, but these are not visible to the user, so the choice
> appears "random".

I see; that's where I thought a rule would be needed for the location
constraint, as you suggested in a previous email (see below).

>>> If you want to prefer one node for the master role, you want to add
>>> role=master, take out the node you don't want to prefer, and set
>>> score to something less than INFINITY.
>>
>> Well, I could add a rule to prefer nodo1 over nodo2 for the Master
>> role (in fact, I think I already did) but what I want is something
>> different: I would like the Master role to follow IPService, I mean,
>> clusterdataClone becomes Master where IPService was previously
>> activated.
>>
>> Is this possible? Or is the only way to configure constraints to have
>> my resources (IPService, Web, MTA) follow the Master role of
>> clusterdataClone?
>
> I think the latter approach makes more sense and is common. Storage is
> more complicated than an IP and thus more likely to break, so it would
> seem to be more reliable to follow where storage can successfully
> start. The exception would be if the IP is much more important to you
> than the storage and is useful without it.

Well, it's just that I come from a long previous experience with IBM
PowerHA for AIX. I was used to seeing the service IP activated on each
node first, then the shared storage + filesystems, and the application
scripts (aka services) at the end, so I tried to emulate the same
behavior in Pacemaker. But what you say makes sense, so I might try to
configure my master/slave resource to be activated as Master on a
preferred node and force all the other resources (IPService, Web and
MTA) to follow it.

> You might want to look at resource sets. The syntax is a bit difficult
> to follow but it's very flexible. See pcs constraint colocation/order
> set.

I was reading a little bit about it, but I prefer to skip this for now
at least.

Thanks a lot, Ken :) Now I can resume my tests with a better
understanding of how things work. I'll have some fun today!
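P.S. For my own notes, the setup described above (Master on a preferred
node, everything else following it) might look roughly like this. This
is an untested sketch using the resource and node names from this
thread; the score of 100 is just an example, and the exact syntax may
vary with the pcs version:

  # Prefer nodo1 for the Master role, with a score below INFINITY
  # (the backslash keeps the shell from treating # as a comment)
  pcs constraint location clusterdataClone rule role=master score=100 \#uname eq nodo1

  # Run IPService on whichever node clusterdataClone is Master
  pcs constraint colocation add IPService with master clusterdataClone INFINITY

  # Start IPService only after the promotion to Master has completed
  pcs constraint order promote clusterdataClone then start IPService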