Re: [Pacemaker] [pacemaker] Notification alerts when fail-over takes place from one node to another node in the cluster.

2011-04-26 Thread Rakesh K
Vadym Chepkov  writes:

> 
> 
> You have to create a MailTo resource for each resource or group you would like
> to be notified about, unfortunately. You can also run crm_mon -1f | grep -qi fail
> from either cron or from snmp. It's not perfect, but better than nothing. I also
> found the check_crm script on Nagios Exchange; it's not ideal, but again, since
> this functionality doesn't come with pacemaker yet, you would have to invent your
> own wheel ;)
> 
> Cheers,
> Vadym
> On Apr 25, 2011 1:15 AM, "Rakesh K" wrote:
> > Vadym Chepkov writes:
> >
> >> You can colocate your resource with a MailTo pseudo resource:
> >>
> >> # crm ra meta MailTo
> >> Notifies recipients by email in the event of resource takeover
> >> (ocf:heartbeat:MailTo)
> >>
> >> Vadym
> >
> > Hi Vadym,
> >
> > Thanks for providing the reply. You said to co-locate the resource with the
> > MailTo resource, which will notify the recipients by email provided in the
> > configuration. But I have configured 4 resources in a two-node cluster. For
> > this case, what would be the best approach?
> >
> > Regards,
> > Rakesh
> 
> ___
> Pacemaker mailing list: Pacemaker@...
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: 
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
> 
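
A minimal sketch of the cron-based check Vadym describes above; the schedule,
recipient address and message text are only placeholders:

# /etc/cron.d/pacemaker-failure-check (sketch)
*/5 * * * * root crm_mon -1f | grep -qi fail && echo "crm_mon reports failed actions" | mail -s "Pacemaker failure" admin@example.com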

Hi Vadym Chepkov,

Thanks for the reply. As mentioned, I am trying to configure the MailTo RA
with Heartbeat from the command line.

I used the following configuration:

primitive mail ocf:heartbeat:MailTo \
params email="" \
params subject="ClusterFailover"

and tried to restart Heartbeat using /etc/init.d/heartbeat restart.

When I run crm_mon, the MailTo resource is unable to start. I dug into the
ha-debug file and found the related information below.

Can you give me some pointers on this issue so that I can proceed further?

bash-3.2# cat ha-debug | grep MailTo
May 26 10:34:39 hatest-msf3 pengine: [18575]: notice: native_print: mail
MailTo[18614]:  2011/05/26_10:34:39 ERROR: Setup problem: Couldn't find utility
May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
MailTo[18635]:  2011/05/26_10:34:40 ERROR: Setup problem: Couldn't find utility
May 26 10:34:44 hatest-msf3 pengine: [18575]: notice: native_print: mail


Regards
rakesh
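
Following Vadym's earlier advice (one MailTo primitive per resource or group,
colocated with it), a configuration might look roughly like the sketch below;
the group name and e-mail address are hypothetical:

primitive mail-notify ocf:heartbeat:MailTo \
    params email="admin@example.com" subject="Cluster failover"
colocation mail-with-group inf: mail-notify my-group
order group-before-mail inf: my-group mail-notify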





___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] [pacemaker] Notification alerts when fail-over takes place from one node to another node in the cluster.

2011-04-26 Thread Vadym Chepkov

On Apr 26, 2011, at 7:00 AM, Rakesh K wrote:
> 
> Hi Vadym Chepkov,
> 
> Thanks for the reply. As mentioned, I am trying to configure the MailTo RA
> with Heartbeat from the command line.
> 
> I used the following configuration:
> 
> primitive mail ocf:heartbeat:MailTo \
>params email="" \
>params subject="ClusterFailover"
> 
> and tried to restart Heartbeat using /etc/init.d/heartbeat restart.
> 
> When I run crm_mon, the MailTo resource is unable to start. I dug into the
> ha-debug file and found the related information below.
> 
> Can you give me some pointers on this issue so that I can proceed further?
> 
> bash-3.2# cat ha-debug | grep MailTo
> May 26 10:34:39 hatest-msf3 pengine: [18575]: notice: native_print: mail
> MailTo[18614]:  2011/05/26_10:34:39 ERROR: Setup problem: Couldn't find 
> utility
> May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
> May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
> May 26 10:34:40 hatest-msf3 pengine: [18575]: notice: native_print: mail
> MailTo[18635]:  2011/05/26_10:34:40 ERROR: Setup problem: Couldn't find 
> utility
> May 26 10:34:44 hatest-msf3 pengine: [18575]: notice: native_print: mail
> 



Actually, the log is very self-explanatory: you don't have the mail utility
installed.

$ rpm -qf `which mail`
mailx-8.1.1-44.2.2

Vadym
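
For example, on a RHEL/CentOS-style system (an assumption; substitute your
distribution's package manager as needed), installing the missing utility
would be along these lines:

# yum install mailx
# which mail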


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] Removing a 'Failed Action' in the crm_mon display

2011-04-26 Thread Dejan Muhamedagic
Hi,

On Tue, Apr 26, 2011 at 08:49:59AM +0200, Andrew Beekhof wrote:
> On Thu, Apr 21, 2011 at 7:46 PM, Phil Hunt  wrote:
> > Had trouble setting up a resource, so it showed a failed action.
> >
> > I was doing 'crm resource cleanup xxx', and they would go away and come 
> > right back.
> 
> Because the underlying cause remained.
> 
> >
> > Anyway, I no longer needed the resource, so I deleted it and now I cannot 
> > do a cleanup to remove the failed action.
> >
> > Any way to remove the failed actions?
> 
> It should work, unless the shell is being too clever.

No, it's not trying to be clever, at least not here. It's not
checking if the resource exists.

Thanks,

Dejan

> Try using the crm_resource tool directly.
> >
> >
> > 
> > Last updated: Thu Apr 21 14:51:58 2011
> > Stack: openais
> > Current DC: CentClus1 - partition with quorum
> > Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
> > 2 Nodes configured, 2 expected votes
> > 3 Resources configured.
> > 
> >
> > Online: [ CentClus1 CentClus2 ]
> >
> >  Resource Group: CL_group
> >     ISCSI_disk (ocf::heartbeat:iscsi): Started CentClus1
> >     VG_disk    (ocf::heartbeat:LVM):   Started CentClus1
> >     FS_disk    (ocf::heartbeat:Filesystem):    Started CentClus1
> >     ClusterIP  (ocf::heartbeat:IPaddr2):       Started CentClus1
> >  Clone Set: CPM_ping
> >     Started: [ CentClus2 CentClus1 ]
> >  Clone Set: CTM_ping
> >     Started: [ CentClus1 CentClus2 ]
> >
> > Failed actions:
> >    dlm:0_monitor_0 (node=CentClus1, call=272, rc=5, status=complete): not installed
> >    dlm:0_monitor_0 (node=CentClus2, call=16, rc=5, status=complete): not installed
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > PHIL HUNT AMS Consultant
> > phil.h...@orionhealth.com
> > P: +1 857 488 4749
> > M: +1 508 654 7371
> > S: philhu0724
> > www.orionhealth.com
> >
> > ___
> > Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: 
> > http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
> >
> 
> ___
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: 
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
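
Based on Andrew's suggestion to use crm_resource directly, cleaning up the
failed probes for the removed resource might look like this (resource and
node names taken from the "Failed actions" output above):

# crm_resource -C -r dlm:0 -H CentClus1
# crm_resource -C -r dlm:0 -H CentClus2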

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] Multi-site support in pacemaker (tokens, deadman, CTR)

2011-04-26 Thread Yan Gao

Hi,

On 01/13/11 17:14, Lars Marowsky-Bree wrote:

> Hi all,
> 
> sorry for the delay in posting this.

And sorry for the delay in replying to this :-) I have some questions about
this below.




> Introduction: At LPC 2010, we discussed (once more) that a key feature
> for pacemaker in 2011 would be improved support for multi-site clusters;
> by multi-site, we mean two (or more) sites with a local cluster each,

Would the topology of such a multi-site deployment be indicated in the CIB
configuration? Or is it just something corosync would need to care about?

And would the CIBs between different sites still be synchronized? In other
words, normally there would be only one DC among the sites, right?



> and some higher level entity coordinating fail-over across these (as
> opposed to "stretched" clusters, where a single cluster might span the
> whole campus in the city).
> 
> Typically, such multi-site environments are also too far apart to
> support synchronous communication/replication.
> 
> There are several aspects to this that we discussed; Andrew and I first
> described and wrote this out a few years ago, so I hope he can remember
> the rest ;-)
> 
> "Tokens" are, essentially, cluster-wide attributes (similar to node
> attributes, just for the whole partition).

Specifically, a dedicated section with an attribute set under
"/cib/configuration"?

Should an admin grant a token to the cluster initially? Or grant it to
several nodes which are supposed to be from the same site? Or grant it to
a partition after a split-brain happens? A split-brain can happen between
the sites or inside a site. How could these cases be distinguished, and
what policies would handle each scenario? What if a partition splits
further?

Additionally, when a split-brain happens, what about the existing stonith
mechanism? Should the partition without quorum be stonithed? If it
shouldn't be, or couldn't be, should that partition elect a DC? What about
the no-quorum-policy?




> Via dependencies (similar to
> rsc_location), one can specify that certain resources require a specific
> token to be set before being started

Which way do you prefer? I found you discussed this in another thread last
year. The choices mentioned there were:

- A "<rsc_order>" with "Deadman" order-type specified:
  <rsc_order ... kind="Deadman"/>

- A "<rsc_colocation>":
  <rsc_colocation ... kind="Deadman"/>



Other choices I can imagine:

- There could be a "requires" field in an "op", which could be set to
"quorum" or "fencing". Similarly, we could also introduce a
"requires-token" field:

  <op ... requires-token="..."/>

The shortcoming is that a resource cannot depend on multiple tokens.


- A "<rsc_location>" with expressions:

  <rsc_location ...>
    <rule ...>
      <expression ... value="true"/>
    </rule>
  </rsc_location>

Via boolean-op, a resource can depend on multiple tokens, or on any one of
the specified tokens.


- A completely new type of constraint:
  <... kind="Deadman"/>




> (and, vice versa, need to be
> stopped if the token is cleared). You could also think of our current
> "quorum" as a special, cluster-wide token that is granted in case of
> node majority.
> 
> The token thus would be similar to a "site quorum"; i.e., the permission
> to manage/own resources associated with that site, which would be
> recorded in a rsc dependency. (It'd probably make a lot of sense if this
> would support resource sets,

If so, the "op" and the current "rsc_location" approaches are not preferred.


> so one can easily list all the resources;
> also, some resources like m/s may tie their role to token ownership.)
> 
> These tokens can be granted/revoked either manually (which I actually
> expect will be the default for the classic enterprise clusters), or via
> an automated mechanism described further below.
> 
> Another aspect to site fail-over is recovery speed. A site can only
> activate the resources safely if it can be sure that the other site has
> deactivated them. Waiting for them to shut down "cleanly" could incur
> very high latency (think "cascaded stop delays"). So, it would be
> desirable if this could be short-circuited. The idea between Andrew and
> myself was to introduce the concept of a "dead man" dependency; if the
> origin goes away, nodes which host dependent resources are fenced,
> immensely speeding up recovery.

Does the "origin" mean the "token"? If so, isn't it supposed to be revoked
manually by default? So would the short-circuited fail-over need an admin
to participate?


BTW, Xinwei once suggested treating "the token is not set" and "the token
is set to no" differently. For the former, the behavior would be as if the
token dependencies didn't exist; if the token is explicitly set, the
appropriate policies would be invoked. Does that help to distinguish the
scenarios?




> It seems to make most sense to make this an attribute of some sort for
> the various dependencies that we already have, possibly, to make this
> generally available. (It may also be something admins want to
> temporarily disable - i.e., for a graceful switch-over, they may not
> want to trigger the dead man process always.)

Does that mean an option for users to choose between immediate fencing and
stopping the resources normally? Is 

Re: [Pacemaker] [PATCH] Bug 2567 - crm resource migrate should support an optional "role" parameter

2011-04-26 Thread Dejan Muhamedagic
Hi Holger,

On Sun, Apr 24, 2011 at 04:31:33PM +0200, Holger Teutsch wrote:
> On Mon, 2011-04-11 at 20:50 +0200, Andrew Beekhof wrote:
> > why?
> > CMD_ERR("Resource %s not moved:"
> > " specifying --master is not supported for
> > --move-from\n", rsc_id);
> > 
> it did not look sensible to me but I can't recall the exact reasons 8-)
> It's now implemented.
> > also the legacy handling is a little off - do a make install and run
> > tools/regression.sh and you'll see what i mean.
> 
> Remaining diffs seem to be not related to my changes.
> 
> > other than that the crm_resource part looks pretty good.
> > can you add some regression testcases in tools/ too please?
> > 
> Will add them once the code is in the repo.
> 
> Latest diffs are attached.

The diffs seem to be against the 1.1 code, but this should go
into the devel repository. Can you please rebase the patches
against the devel code.

Cheers,

Dejan

> -holger
> 

> diff -r b4f456380f60 shell/modules/ui.py.in
> --- a/shell/modules/ui.py.in  Thu Mar 17 09:41:25 2011 +0100
> +++ b/shell/modules/ui.py.in  Sun Apr 24 16:18:59 2011 +0200
> @@ -738,8 +738,9 @@
>  rsc_status = "crm_resource -W -r '%s'"
>  rsc_showxml = "crm_resource -q -r '%s'"
>  rsc_setrole = "crm_resource --meta -r '%s' -p target-role -v '%s'"
> -rsc_migrate = "crm_resource -M -r '%s' %s"
> -rsc_unmigrate = "crm_resource -U -r '%s'"
> +rsc_move_to = "crm_resource --move-to -r '%s' %s"
> +rsc_move_from = "crm_resource --move-from -r '%s' %s"
> +rsc_move_cleanup = "crm_resource --move-cleanup -r '%s'"
>  rsc_cleanup = "crm_resource -C -r '%s' -H '%s'"
>  rsc_cleanup_all = "crm_resource -C -r '%s'"
>  rsc_param =  {
> @@ -776,8 +777,12 @@
>  self.cmd_table["demote"] = (self.demote,(1,1),0)
>  self.cmd_table["manage"] = (self.manage,(1,1),0)
>  self.cmd_table["unmanage"] = (self.unmanage,(1,1),0)
> +# the next two commands are deprecated
>  self.cmd_table["migrate"] = (self.migrate,(1,4),0)
>  self.cmd_table["unmigrate"] = (self.unmigrate,(1,1),0)
> +self.cmd_table["move-to"] = (self.move_to,(2,4),0)
> +self.cmd_table["move-from"] = (self.move_from,(1,4),0)
> +self.cmd_table["move-cleanup"] = (self.move_cleanup,(1,1),0)
>  self.cmd_table["param"] = (self.param,(3,4),1)
>  self.cmd_table["meta"] = (self.meta,(3,4),1)
>  self.cmd_table["utilization"] = (self.utilization,(3,4),1)
> @@ -846,9 +851,67 @@
>  if not is_name_sane(rsc):
>  return False
>  return set_deep_meta_attr("is-managed","false",rsc)
> +def move_to(self,cmd,*args):
> +"""usage: move-to <rsc>[:master] <node> [<lifetime>] [force]"""
> +elem = args[0].split(':')
> +rsc = elem[0]
> +master = False
> +if len(elem) > 1:
> +master = elem[1]
> +if master != "master":
> +common_error("%s is invalid, specify 'master'" % master)
> +return False
> +master = True
> +if not is_name_sane(rsc):
> +return False
> +node = args[1]
> +lifetime = None
> +force = False
> +if len(args) == 3:
> +if args[2] == "force":
> +force = True
> +else:
> +lifetime = args[2]
> +elif len(args) == 4:
> +if args[2] == "force":
> +force = True
> +lifetime = args[3]
> +elif args[3] == "force":
> +force = True
> +lifetime = args[2]
> +else:
> +syntax_err((cmd,force))
> +return False
> +
> +opts = ''
> +if node:
> +opts = "--node='%s'" % node
> +if lifetime:
> +opts = "%s --lifetime='%s'" % (opts,lifetime)
> +if force or user_prefs.get_force():
> +opts = "%s --force" % opts
> +if master:
> +opts = "%s --master" % opts
> +return ext_cmd(self.rsc_move_to % (rsc,opts)) == 0
> +
>  def migrate(self,cmd,*args):
> -"""usage: migrate <rsc> [<node>] [<lifetime>] [force]"""
> -rsc = args[0]
> +"""Deprecated: migrate <rsc> [<node>] [<lifetime>] [force]"""
> +common_warning("migrate is deprecated, use move-to or move-from")
> +if len(args) >= 2 and args[1] in listnodes():
> +return self.move_to(cmd, *args)
> +return self.move_from(cmd, *args)
> +
> +def move_from(self,cmd,*args):
> +"""usage: move-from <rsc>[:master] [<node>] [<lifetime>] [force]"""
> +elem = args[0].split(':')
> +rsc = elem[0]
> +master = False
> +if len(elem) > 1:
> +master = elem[1]
> +if master != "master":
> +common_error("%s is invalid, specify 'master'" % master)
> +return False
> +master = True
>  if not is_name_sane(rsc):
>  return False
>  node = Non
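
Going by the usage strings in the patch above, the new shell commands would
be invoked roughly as follows (resource, node and lifetime values here are
hypothetical):

crm resource move-to myrsc:master node2 PT5M
crm resource move-from myrsc node1 force
crm resource move-cleanup myrsc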

Re: [Pacemaker] Resource Agents 1.0.4: HA LVM Patch

2011-04-26 Thread Dejan Muhamedagic
Hi,

On Tue, Apr 19, 2011 at 03:56:16PM +0200, Ulf wrote:
> Hi,
> 
> I attached a patch to enhance the LVM agent with the capability to set a tag 
> on the VG (set_hosttag = true) in conjunction with a volume_list filter this 
> can prevent to activate a VG on multiple host. Unfortunately active VGs will 
> stay active in case of unclean operation.

Can you please elaborate on the benefits this patch would bring?
Is it supposed to prevent a VG from being activated on more than
one node?

Looking at the code, it seems that on the start operation the
existing tag would be overwritten regardless.

Thanks,

Dejan

P.S. Moving the discussion to the proper mailing list.

> The tag is always the hostname.
> Some configuration hints can be found here: 
> http://sources.redhat.com/cluster/wiki/LVMFailover
> 
> Cheers,
> Ulf
> -- 
> GMX DSL Doppel-Flat ab 19,99 Euro/mtl.! Jetzt mit 
> gratis Handy-Flat! http://portal.gmx.net/de/go/dsl


> ___
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: 
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
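
As a rough illustration of the tag-based protection being discussed (hostname
and volume group names are hypothetical; see the LVMFailover page linked above
for the full setup):

# In /etc/lvm/lvm.conf, restrict activation to the root VG and to VGs
# carrying this host's tag:
#   volume_list = [ "vg_root", "@node1" ]

# Tag the shared VG with the local hostname, then activate it:
vgchange --addtag node1 vg_shared
vgchange -ay vg_shared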


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] subnetmask for virtual cluster ip / ipaddr2

2011-04-26 Thread Dejan Muhamedagic
Hi,

On Wed, Apr 13, 2011 at 07:41:23PM +0200, Felix Reinel wrote:
> Dear list,
> 
> the default subnet mask for a virtual IP (we use the IPaddr2 resource)
> seems to be /32.

The subnet mask should be deduced from the network interface on
which the virtual IP address is created. Otherwise, it can be
specified using a parameter (cidr_netmask). See

crm ra info IPaddr2

Thanks,

Dejan
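
For example, a minimal IPaddr2 primitive with an explicit prefix might look
like this (address, prefix length and interface below are hypothetical):

primitive vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.10" cidr_netmask="24" nic="eth0"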


> In our setups we use those virtual IP addresses in active/passive two-node
> cluster setups. I have been asked if that should not be /24 and why, which
> I wasn't really sure about.
> 
> I just assume that's right and /32 is fine because it only needs to get
> masked locally for this single IP. So I'd like to double-check: do you
> think this is correct, and why (not)?
> 
> Thanks in advance,
> Felix
> 



> ___
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: 
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] A question and demand to a resource placement strategy function

2011-04-26 Thread Yan Gao
Hi Yuusuke,

On 04/19/11 19:55, Yan Gao wrote:
> Actually I've been optimizing the placement-strategy lately. It will
> sort the resource processing order according to the priorities and
> scores of resources. That should result in ideal placement. Stay tuned.
The improvement of the placement strategy has been committed into the
devel branch. Please give it a test. Thanks!

Regards,
  Yan
-- 
Yan Gao 
Software Engineer
China Server Team, OPS Engineering, Novell, Inc.

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] A question and demand to a resource placement strategy function

2011-04-26 Thread Vladislav Bogdanov
Hi Yan,

27.04.2011 07:32, Yan Gao wrote:
> Hi Yuusuke,
> 
> On 04/19/11 19:55, Yan Gao wrote:
>> Actually I've been optimizing the placement-strategy lately. It will
>> sort the resource processing order according to the priorities and
>> scores of resources. That should result in ideal placement. Stay tuned.
> The improvement of the placement strategy has been committed into the
> devel branch. Please give it a test. Thanks!

Do priorities work for "utilization" strategy?

Best,
Vladislav

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] A question and demand to a resource placement strategy function

2011-04-26 Thread Yan Gao
Hi Vladislav,

On 04/27/11 12:49, Vladislav Bogdanov wrote:
> Hi Yan,
> 
> 27.04.2011 07:32, Yan Gao wrote:
>> Hi Yuusuke,
>>
>> On 04/19/11 19:55, Yan Gao wrote:
>>> Actually I've been optimizing the placement-strategy lately. It will
>>> sort the resource processing order according to the priorities and
>>> scores of resources. That should result in ideal placement. Stay tuned.
>> The improvement of the placement strategy has been committed into the
>> devel branch. Please give it a test. Thanks!
> 
> Do priorities work for "utilization" strategy?
Yes, the improvement works for the "utilization", "minimal" and "balanced"
strategies:

- The nodes that are more healthy and have more capacity get consumed
first (globally preferred nodes).

- The resources that have higher priorities and higher scores on
globally preferred nodes get assigned first.

Regards,
  Yan
-- 
Yan Gao 
Software Engineer
China Server Team, OPS Engineering, Novell, Inc.
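
A minimal crm shell sketch of how priorities and utilization interact under
the improved strategy (all names and numbers below are hypothetical):

property placement-strategy=utilization
node node1 utilization cpu=8 memory=16384
node node2 utilization cpu=4 memory=8192
primitive db ocf:heartbeat:mysql \
    meta priority=10 \
    utilization cpu=4 memory=4096
primitive web ocf:heartbeat:apache \
    meta priority=5 \
    utilization cpu=2 memory=2048

Here db, having the higher priority, would be assigned a node before web.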

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker