Re: [Pacemaker] Trace Pacemaker calls to resource agents
On Sun, Apr 24, 2011 at 4:47 AM, Rahul Dhesi <dhesi-send-reply-to-mailing-l...@rahul.net> wrote:
> Hello all, I have uploaded a script that might help you trace Pacemaker
> calls to resource agents.

This sort of detail wasn't available in the logs? Normally people complain they're too detailed.

> You can find it at: http://code.google.com/p/linux-ha-utils/
>
> -- Rahul Dhesi <dhesi-send-reply-to-mailing-l...@rahul.net>

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
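For readers wondering what such tracing looks like in practice, here is a minimal sketch of the wrapper approach (this is an illustration, not the actual linux-ha-utils script; the dummy agent and log paths are invented for the example). Pacemaker invokes a resource agent with an action argument and passes parameters via `OCF_RESKEY_*` environment variables, so a wrapper only needs to log those before delegating:

```shell
#!/bin/sh
# Hypothetical tracing wrapper: log the action, the OCF_* parameter
# environment, and the exit code of every call, then delegate to the
# real agent. A dummy agent under /tmp stands in for a real RA here.

cat > /tmp/dummy-agent <<'EOF'
#!/bin/sh
echo "dummy agent: action=$1"
exit 0
EOF
chmod +x /tmp/dummy-agent

trace_ra_call() {
    agent=$1; shift
    log=/tmp/ra-trace.log
    echo "CALL $(basename "$agent") action=$1" >> "$log"
    env | grep '^OCF_' >> "$log"   # OCF_RESKEY_* variables carry the parameters
    "$agent" "$@"
    rc=$?
    echo "DONE action=$1 rc=$rc" >> "$log"
    return $rc
}

rm -f /tmp/ra-trace.log
export OCF_RESKEY_ip=192.168.1.100
trace_ra_call /tmp/dummy-agent monitor
cat /tmp/ra-trace.log
```

In a real deployment the wrapper would sit in place of the agent under /usr/lib/ocf/resource.d/ and exec the renamed original.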
Re: [Pacemaker] Removing a 'Failed Action' in crm_mon display
On Thu, Apr 21, 2011 at 7:46 PM, Phil Hunt <phil.h...@orionhealth.com> wrote:
> Had trouble setting up a resource, so it showed a failed action. I was doing
> 'crm resource cleanup xxx', and they would go away and come right back.

Because the underlying cause remained.

> Anyway, I no longer needed the resource, so I deleted it, and now I cannot
> do a cleanup to remove the failed action. Any way to remove the failed
> actions?

It should work, unless the shell is being too clever. Try using the crm_resource tool directly.

> Last updated: Thu Apr 21 14:51:58 2011
> Stack: openais
> Current DC: CentClus1 - partition with quorum
> Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
> 2 Nodes configured, 2 expected votes
> 3 Resources configured.
>
> Online: [ CentClus1 CentClus2 ]
>
> Resource Group: CL_group
>     ISCSI_disk (ocf::heartbeat:iscsi): Started CentClus1
>     VG_disk (ocf::heartbeat:LVM): Started CentClus1
>     FS_disk (ocf::heartbeat:Filesystem): Started CentClus1
>     ClusterIP (ocf::heartbeat:IPaddr2): Started CentClus1
> Clone Set: CPM_ping
>     Started: [ CentClus2 CentClus1 ]
> Clone Set: CTM_ping
>     Started: [ CentClus1 CentClus2 ]
>
> Failed actions:
>     dlm:0_monitor_0 (node=CentClus1, call=272, rc=5, status=complete): not installed
>     dlm:0_monitor_0 (node=CentClus2, call=16, rc=5, status=complete): not installed
>
> PHIL HUNT
> AMS Consultant
> phil.h...@orionhealth.com
> P: +1 857 488 4749 M: +1 508 654 7371 S: philhu0724
> www.orionhealth.com
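For reference, the direct crm_resource invocation suggested above would look something like this (resource and node names taken from the status output in the thread; these commands need a running cluster, so they are shown as a sketch only):

```shell
# Clear the failed-action history for the deleted dlm clone instance,
# once per node that still shows an entry. These are the same flags the
# crm shell's "resource cleanup" uses under the hood.
crm_resource -C -r dlm:0 -H CentClus1
crm_resource -C -r dlm:0 -H CentClus2

# Omitting -H cleans the resource up on all nodes:
crm_resource -C -r dlm:0
```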
Re: [Pacemaker] Mysql master-master replication and moving vip
On Thu, Apr 21, 2011 at 6:26 PM, Viacheslav Biriukov <v.v.biriu...@gmail.com> wrote:
> Hello. I have the following pacemaker cluster layout:
>
>   Fo1 - MySQL Master  |
>                       | <- VIP for read and write
>   Fo2 - MySQL Master  |
>
> MySQL is started as a cloned resource. So my questions are:
> 1. When I want to move the VIP from one node to the other, can I get data
>    corruption or consistency problems?
> 2. How will persistent connections behave?

The answers to these questions are specific to your application (i.e. MySQL). Pacemaker just starts and stops things - it does not get involved in data replication, nor does it place itself between the application and its clients.
Re: [Pacemaker] [pacemaker] need some help regarding network failure setup in pacemaker.
On Wed, Apr 20, 2011 at 1:32 PM, Rakesh K <rakirocker4...@gmail.com> wrote:
> Jelle de Jong <jelledejong@...> writes:
>> On 20-04-11 11:44, rakesh k wrote:
>>> How can we detect network failure in pacemaker configuration?
>>
>> http://www.clusterlabs.org/wiki/Pingd_with_resources_on_different_networks
>> http://www.woodwose.net/thatremindsme/2011/04/the-pacemaker-ping-resource-agent/
>> http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre
>>
>> crm configure help location
>> crm ra info ocf:ping
>>
>> That should give you a jump start. You may need to increase the corosync
>> token timeout.
>>
>> Kind regards, Jelle de Jong
>
> Thanks for the help. My question is: I had gone through the scripts, and in
> the ping_update method there is a variable called ACTIVE - the number of
> nodes in host_list that are active. Based on this value, for our scenario,
> can we stop the heartbeat/pacemaker process when the host node cannot ping
> any other nodes in the cluster framework?

No. host_list should never contain the addresses of cluster nodes. The ping RA is intended to test _external_ connectivity.

> Please provide your suggestions so that they will help us in our context.
>
> Regards, Rakesh
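The usual external-connectivity setup referenced above looks roughly like this (a sketch: the resource names, gateway address, and score values are illustrative, not from the thread):

```shell
# Clone a ping resource that monitors an external address (e.g. the default
# gateway). The RA publishes a node attribute (by default "pingd") holding
# multiplier * number-of-reachable-hosts.
crm configure primitive p_ping ocf:pacemaker:ping \
    params host_list="192.168.1.1" multiplier="100" \
    op monitor interval="15s"
crm configure clone cl_ping p_ping

# Keep a resource off any node that has lost external connectivity,
# rather than stopping heartbeat/pacemaker on that node.
crm configure location loc_needs_connectivity my_resource \
    rule -inf: not_defined pingd or pingd lte 0
```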
Re: [Pacemaker] [pacemaker] Notification alerts when fail-over takes place from one node to another node in the cluster.
Vadym Chepkov <vchepkov@...> writes:
> You have to create a MailTo resource for each resource or group you would
> like to be notified about, unfortunately. You can also run
> `crm_mon -1f | grep -qi fail` from either cron or from snmp. It's not
> perfect, but better than nothing. I also found the check_crm script on the
> Nagios exchange; it's not ideal, but again, since this functionality doesn't
> come with Pacemaker yet, you would have to invent your own wheel ;)
>
> Cheers, Vadym
>
> On Apr 25, 2011 1:15 AM, Rakesh K <rakirocker4...@gmail.com> wrote:
>> Vadym Chepkov <vchepkov at ...> writes:
>>> You can colocate your resource with a MailTo pseudo resource:
>>>
>>> # crm ra meta MailTo
>>> Notifies recipients by email in the event of resource takeover
>>> (ocf:heartbeat:MailTo)
>>>
>>> Vadym
>>
>> Hi Vadym, thanks for providing the reply. You said to colocate the resource
>> with the MailTo resource, which will notify the recipients by email as
>> provided in the configuration. But I have configured 4 resources in a
>> two-node cluster. In this case, what would be the best approach?
>>
>> Regards, Rakesh

Hi Vadym Chepkov, thanks for giving the reply.

As mentioned, I am trying to configure the MailTo RA with Heartbeat. From the command line I used the following configuration:

    primitive mail ocf:heartbeat:MailTo \
        params email=emailid \
        params subject=ClusterFailover

and tried to restart the HA process using /etc/init.d/heartbeat restart. When I run crm_mon it is unable to start the MailTo resource. I dug into the ha-debug file and found the related information. Can you give me some pointers on this issue so that I can proceed further?

    bash-3.2# cat ha-debug | grep MailTo
    May 26 10:34:39 hatest-msf3 pengine: [18575]: notice: native_print: mail
    MailTo[18614]: 2011/05/26_10:34:39 ERROR: Setup problem: Couldn't find utility
    May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
    May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
    May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
    MailTo[18635]: 2011/05/26_10:34:40 ERROR: Setup problem: Couldn't find utility
    May 26 10:34:44 hatest-msf3 pengine: [18575]: notice: native_print: mail

Regards, rakesh
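The crm_mon-from-cron idea mentioned above can be sketched as follows. The canned sample text stands in for real `crm_mon -1f` output so the snippet runs outside a cluster; in a cron job you would capture the actual crm_mon output instead:

```shell
#!/bin/sh
# Cron-style failure probe: grep one-shot cluster status for failed actions
# and emit an alert line (which cron would then mail to the admin).
sample_status='Online: [ CentClus1 CentClus2 ]
Failed actions:
    dlm:0_monitor_0 (node=CentClus1, call=272, rc=5, status=complete): not installed'

# In production: status=$(crm_mon -1f)
if printf '%s\n' "$sample_status" | grep -qi '^Failed actions:'; then
    echo "ALERT: cluster reports failed actions"
fi
```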
Re: [Pacemaker] [pacemaker] Notification alerts when fail-over takes place from one node to another node in the cluster.
On Apr 26, 2011, at 7:00 AM, Rakesh K wrote:
> Hi Vadym Chepkov, thanks for giving the reply. As mentioned, I am trying to
> configure the MailTo RA with Heartbeat [...] when I run crm_mon it is unable
> to start the MailTo resource. I dug into the ha-debug file and found:
>
> MailTo[18614]: 2011/05/26_10:34:39 ERROR: Setup problem: Couldn't find utility

Actually, the log is very self-explanatory: you don't have the mail utility installed.

    $ rpm -qf `which mail`
    mailx-8.1.1-44.2.2

Vadym
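A quick pre-flight check for this class of MailTo failure might look like the following (the package names in the message are examples; `mailx` is the usual provider of mail(1), but distros differ):

```shell
#!/bin/sh
# Verify that a binary a resource agent depends on is present before
# configuring the RA, instead of discovering it via a start failure.
check_utility() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING - install it (e.g. 'yum install mailx' or 'apt-get install mailx')"
        return 1
    fi
}

check_utility sh             # always present; shows the success path
check_utility mail || true   # this is the utility MailTo's setup check needs
```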
Re: [Pacemaker] Removing a 'Failed Action' in crm_mon display
Hi,

On Tue, Apr 26, 2011 at 08:49:59AM +0200, Andrew Beekhof wrote:
> On Thu, Apr 21, 2011 at 7:46 PM, Phil Hunt <phil.h...@orionhealth.com> wrote:
>> Had trouble setting up a resource, so it showed a failed action. I was
>> doing 'crm resource cleanup xxx', and they would go away and come right
>> back.
>
> Because the underlying cause remained.
>
>> Anyway, I no longer needed the resource, so I deleted it, and now I cannot
>> do a cleanup to remove the failed action. Any way to remove the failed
>> actions?
>
> It should work, unless the shell is being too clever.

No, it's not trying to be clever, at least not here. It's not checking if the resource exists.

Thanks, Dejan

> Try using the crm_resource tool directly.
> [crm_mon status output trimmed]
Re: [Pacemaker] [PATCH]Bug 2567 - crm resource migrate should support an optional role parameter
Hi Holger, On Sun, Apr 24, 2011 at 04:31:33PM +0200, Holger Teutsch wrote: On Mon, 2011-04-11 at 20:50 +0200, Andrew Beekhof wrote: why? CMD_ERR(Resource %s not moved: specifying --master is not supported for --move-from\n, rsc_id); it did not look sensible to me but I can't recall the exact reasons 8-) It's now implemented. also the legacy handling is a little off - do a make install and run tools/regression.sh and you'll see what i mean. Remaining diffs seem to be not related to my changes. other than that the crm_resource part looks pretty good. can you add some regression testcases in tools/ too please? Will add them once the code is in the repo. Latest diffs are attached. The diffs seem to be against the 1.1 code, but this should go into the devel repository. Can you please rebase the patches against the devel code. Cheers, Dejan -holger diff -r b4f456380f60 shell/modules/ui.py.in --- a/shell/modules/ui.py.in Thu Mar 17 09:41:25 2011 +0100 +++ b/shell/modules/ui.py.in Sun Apr 24 16:18:59 2011 +0200 @@ -738,8 +738,9 @@ rsc_status = crm_resource -W -r '%s' rsc_showxml = crm_resource -q -r '%s' rsc_setrole = crm_resource --meta -r '%s' -p target-role -v '%s' -rsc_migrate = crm_resource -M -r '%s' %s -rsc_unmigrate = crm_resource -U -r '%s' +rsc_move_to = crm_resource --move-to -r '%s' %s +rsc_move_from = crm_resource --move-from -r '%s' %s +rsc_move_cleanup = crm_resource --move-cleanup -r '%s' rsc_cleanup = crm_resource -C -r '%s' -H '%s' rsc_cleanup_all = crm_resource -C -r '%s' rsc_param = { @@ -776,8 +777,12 @@ self.cmd_table[demote] = (self.demote,(1,1),0) self.cmd_table[manage] = (self.manage,(1,1),0) self.cmd_table[unmanage] = (self.unmanage,(1,1),0) +# the next two commands are deprecated self.cmd_table[migrate] = (self.migrate,(1,4),0) self.cmd_table[unmigrate] = (self.unmigrate,(1,1),0) +self.cmd_table[move-to] = (self.move_to,(2,4),0) +self.cmd_table[move-from] = (self.move_from,(1,4),0) +self.cmd_table[move-cleanup] = (self.move_cleanup,(1,1),0) 
self.cmd_table[param] = (self.param,(3,4),1) self.cmd_table[meta] = (self.meta,(3,4),1) self.cmd_table[utilization] = (self.utilization,(3,4),1) @@ -846,9 +851,67 @@ if not is_name_sane(rsc): return False return set_deep_meta_attr(is-managed,false,rsc) +def move_to(self,cmd,*args): +usage: move-to rsc[:master] node [lifetime] [force] +elem = args[0].split(':') +rsc = elem[0] +master = False +if len(elem) 1: +master = elem[1] +if master != master: +common_error(%s is invalid, specify 'master' % master) +return False +master = True +if not is_name_sane(rsc): +return False +node = args[1] +lifetime = None +force = False +if len(args) == 3: +if args[2] == force: +force = True +else: +lifetime = args[2] +elif len(args) == 4: +if args[2] == force: +force = True +lifetime = args[3] +elif args[3] == force: +force = True +lifetime = args[2] +else: +syntax_err((cmd,force)) +return False + +opts = '' +if node: +opts = --node='%s' % node +if lifetime: +opts = %s --lifetime='%s' % (opts,lifetime) +if force or user_prefs.get_force(): +opts = %s --force % opts +if master: +opts = %s --master % opts +return ext_cmd(self.rsc_move_to % (rsc,opts)) == 0 + def migrate(self,cmd,*args): -usage: migrate rsc [node] [lifetime] [force] -rsc = args[0] +Deprecated: migrate rsc [node] [lifetime] [force] +common_warning(migrate is deprecated, use move-to or move-from) +if len(args) = 2 and args[1] in listnodes(): +return self.move_to(cmd, *args) +return self.move_from(cmd, *args) + +def move_from(self,cmd,*args): +usage: move-from rsc[:master] [node] [lifetime] [force] +elem = args[0].split(':') +rsc = elem[0] +master = False +if len(elem) 1: +master = elem[1] +if master != master: +common_error(%s is invalid, specify 'master' % master) +return False +master = True if not is_name_sane(rsc): return False node = None @@ -888,12 +951,18 @@ opts = %s --lifetime='%s' % (opts,lifetime) if force or user_prefs.get_force(): opts = %s --force % opts
Re: [Pacemaker] Resource Agents 1.0.4: HA LVM Patch
Hi,

On Tue, Apr 19, 2011 at 03:56:16PM +0200, Ulf wrote:
> Hi, I attached a patch to enhance the LVM agent with the capability to set a
> tag on the VG (set_hosttag = true). In conjunction with a volume_list filter
> this can prevent activation of a VG on multiple hosts. Unfortunately, active
> VGs will stay active in case of an unclean operation.

Can you please elaborate on the benefits this patch would bring? Is it supposed to prevent a VG from being mounted on more than one node? Looking at the code, it seems that on the start operation the existing tag would be overwritten regardless.

Thanks, Dejan

P.S. Moving the discussion to the proper mailing list.

> The tag is always the hostname. Some configuration hints can be found here:
> http://sources.redhat.com/cluster/wiki/LVMFailover
>
> Cheers, Ulf
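For context, the host-tag scheme the patch implements works roughly like this (a sketch of the technique described on the LVMFailover page, not the patch itself; the VG name is illustrative). lvm.conf restricts activation with a tag filter such as `volume_list = [ "rootvg", "@my-hostname" ]`, so a shared VG can only be activated on the host whose tag it currently carries:

```shell
# Claim a shared VG for this host: drop whatever owner tag is set,
# add our own hostname tag, then activate.
VG=sharedvg
TAG=$(uname -n)

for t in $(vgs --noheadings -o vg_tags "$VG" | tr ',' ' '); do
    vgchange --deltag "$t" "$VG"    # remove the stale owner's tag
done
vgchange --addtag "$TAG" "$VG"
vgchange -a y "$VG"                 # succeeds only if volume_list permits "@$TAG"
```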
Re: [Pacemaker] subnetmask for virtual cluster ip / ipaddr2
Hi,

On Wed, Apr 13, 2011 at 07:41:23PM +0200, Felix Reinel wrote:
> Dear list, the default subnet mask for a virtual IP (we use the IPaddr2
> resource) seems to be /32.

The subnet mask should be deduced from the network interface on which the virtual IP address is created. Otherwise, it can be specified using a parameter (cidr_netmask). See:

    crm ra info IPaddr2

Thanks, Dejan

> In our setups we use those virtual IP addresses in active/passive two-node
> clusters. I have been asked whether it should not be /24 and why, which I
> wasn't really sure about. I assume /32 is fine because it only needs to be
> masked locally for this single IP. So I'd like to double-check: do you think
> this is correct, and why (not)?
>
> Thanks in advance, Felix
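An explicit configuration along those lines might look like this (the IP address, netmask, and NIC are examples, not values from the thread):

```shell
# Pin the netmask explicitly instead of relying on autodetection:
# cidr_netmask=24 puts the VIP in the interface's /24 rather than a /32.
crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.100" cidr_netmask="24" nic="eth0" \
    op monitor interval="10s"
```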
Re: [Pacemaker] A question and demand to a resource placement strategy function
Hi Yuusuke,

On 04/19/11 19:55, Yan Gao wrote:
> Actually I've been optimizing the placement-strategy lately. It will sort
> the resource processing order according to the priorities and scores of
> resources. That should result in ideal placement. Stay tuned.

The improvement of the placement strategy has been committed into the devel branch. Please give it a test. Thanks!

Regards, Yan

-- Yan Gao <y...@novell.com> Software Engineer, China Server Team, OPS Engineering, Novell, Inc.
Re: [Pacemaker] A question and demand to a resource placement strategy function
Hi Yan,

27.04.2011 07:32, Yan Gao wrote:
> The improvement of the placement strategy has been committed into the devel
> branch. Please give it a test. Thanks!

Do priorities work for the utilization strategy?

Best, Vladislav
Re: [Pacemaker] A question and demand to a resource placement strategy function
Hi Vladislav,

On 04/27/11 12:49, Vladislav Bogdanov wrote:
> Do priorities work for the utilization strategy?

Yes, the improvement works for the utilization, minimal and balanced strategies:
- The nodes that are more healthy and have more capacity get consumed first (globally preferred nodes).
- The resources that have higher priorities and higher scores on the globally preferred nodes get assigned first.

Regards, Yan

-- Yan Gao <y...@novell.com> Software Engineer, China Server Team, OPS Engineering, Novell, Inc.
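For anyone wanting to test this, a utilization-strategy configuration is along these lines (a sketch: the node names, capacity attributes like `memory`/`cpu`, and the values are illustrative):

```shell
# Enable utilization-based placement, declare node capacities, and give
# a resource both a requirement and a priority so the new ordering applies.
crm configure property placement-strategy=utilization
crm configure node node1 utilization memory=4096 cpu=4
crm configure node node2 utilization memory=2048 cpu=2
crm configure primitive big_db ocf:heartbeat:mysql \
    meta priority=10 \
    utilization memory=2048 cpu=2
```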