Re: [Pacemaker] Cannot create more than 27 multistate resources

2014-07-21 Thread Chris Feist

On 07/21/2014 05:22 AM, Andrew Beekhof wrote:

Chris,

Does the error below mean anything to you?
This seems to be happening once the CIB reaches a certain size, but is on the 
client side and possibly before the pacemaker tools are invoked.


I grabbed your debug file and ran some tests, and it looks like the issue is 
caused by earlier versions of pcs (0.9.90 is affected), which try to pass the 
entire CIB on the command line to cibadmin.  This has been fixed upstream (and 
should be built into the next release of RHEL/CentOS).
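
For background, the failure mode is the kernel's limit on exec() argument sizes: on Linux a single argument (such as a whole CIB passed via --xml-text) is capped well below the size of a large CIB. A quick sketch of the limit, assuming Linux; /bin/echo merely stands in for cibadmin here:

```python
import os
import subprocess

# A ~200 KB string standing in for a large CIB passed as one argv entry.
# On Linux a single argument is capped (MAX_ARG_STRLEN, commonly 128 KiB),
# so the exec() itself fails with E2BIG before the program ever runs.
big = "x" * 200_000
try:
    subprocess.run(["/bin/echo", big], stdout=subprocess.DEVNULL)
    print("exec succeeded")
except OSError as e:
    print("exec failed:", e)  # E2BIG: "Argument list too long"

# The overall argv + environment budget, for reference:
print("ARG_MAX:", os.sysconf("SC_ARG_MAX"))
```

Passing the CIB through a file or a pipe avoids the limit entirely, which is why not putting the whole CIB in argv fixes the problem.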


As a workaround, you can use the upstream sources here: 
https://github.com/feist/pcs (just run pcs from the cloned directory).


Thanks!
Chris



On 9 Jul 2014, at 6:49 pm, K Mehta  wrote:


[root@vsanqa11 ~]# pcs resource create vha-3de5ab16-9917-4b90-93d2-7b04fc71879c 
ocf:heartbeat:vgc-cm-agent.ocf cluster_uuid=3de5ab16-9917-4b90-93d2-7b04fc71879c op monitor 
role="Master" interval=30s timeout=100s op monitor role="Master" interval=30s 
timeout=100s


pcs status output includes
  vha-3de5ab16-9917-4b90-93d2-7b04fc71879c (ocf::heartbeat:vgc-cm-agent.ocf):  Started vsanqa11


[root@vsanqa11 ~]# pcs resource master ms-3de5ab16-9917-4b90-93d2-7b04fc71879c 
vha-3de5ab16-9917-4b90-93d2-7b04fc71879c meta clone-max=2 globally-unique=false 
target-role=started
Error: unable to locate command: /usr/sbin/cibadmin




Looking in the logs, I see:

Jul 12 11:18:24 vsanqa11 cibadmin[7966]:   notice: crm_log_args: Invoked: /usr/sbin/cibadmin -c -R --xml-text <entire CIB XML, stripped from the archive>


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] unexpected demote request on master

2014-05-27 Thread Chris Feist

On 05/27/14 05:38, K Mehta wrote:

One more question.
With crmsh, it was easy to add a constraint restricting a resource to a
subset of nodes (say vsanqa11 and vsanqa12) using the following command:

crm configure location ms-${uuid}-nodes ms-$uuid rule -inf: \#uname ne vsanqa11 and \#uname ne vsanqa12
[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
 Constraint: ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes
   Rule: score=-INFINITY
  (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-rule)
 Expression: #uname ne vsanqa11
  (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-expression)
 Expression: #uname ne vsanqa12
  (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-expression-0)
Ordering Constraints:
Colocation Constraints:

So both expressions are part of the same rule, as expected.
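
The structure crmsh creates here (and that newer pcs versions produce, per the reply below in this thread's sense) is a single rule with boolean-op="and" holding both expressions. A sketch of that CIB fragment built with Python's xml.etree; the resource name "ms-demo" and the ids are illustrative, not taken from the cluster:

```python
import xml.etree.ElementTree as ET

# One <rule> with boolean-op="and" holds both <expression> children; this
# is what keeps "ne vsanqa11 AND ne vsanqa12" in a single rule.
loc = ET.Element("rsc_location", {"id": "ms-demo-nodes", "rsc": "ms-demo"})
rule = ET.SubElement(loc, "rule", {
    "id": "ms-demo-nodes-rule",
    "score": "-INFINITY",
    "boolean-op": "and",  # both expressions must match for -INFINITY to apply
})
for i, node in enumerate(("vsanqa11", "vsanqa12")):
    ET.SubElement(rule, "expression", {
        "id": "ms-demo-nodes-expression-%d" % i,
        "attribute": "#uname",
        "operation": "ne",
        "value": node,
    })
print(ET.tostring(loc, encoding="unicode"))
```

Two separate rules (as in the failed attempts below) instead OR together, which is why they cannot express "neither vsanqa11 nor vsanqa12".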



With pcs, I am not sure how to write an avoid constraint when I need a resource
to run on vsanqa11 and vsanqa12 and not on any other node.
So I tried adding a location constraint as follows:
pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY \#uname ne vsanqa11 and \#uname ne vsanqa12
Even though no error is thrown, the condition after "and" is silently dropped, as
shown below:

[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
 Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6
   Rule: score=-INFINITY
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule)
 Expression: #uname ne vsanqa11
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule-expr)
Ordering Constraints:
Colocation Constraints:


Then I tried the following
pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY
\#uname ne vsanqa11
pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY
\#uname ne vsanqa12

but running these two commands did not help either: the expressions were added to
two separate rules.

[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
 Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1
   Rule: score=-INFINITY
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1-rule)
 Expression: #uname ne vsanqa12
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1-rule-expr)
 Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6
   Rule: score=-INFINITY
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule)
 Expression: #uname ne vsanqa11
  (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule-expr)
Ordering Constraints:
Colocation Constraints:


I also tried using the multistate resource name:
[root@vsanqa11 ~]# pcs constraint location
ms-c6933988-9e5c-419e-8fdf-744100d76ad6 rule score=-INFINITY \#uname ne vsanqa11
Error: 'ms-c6933988-9e5c-419e-8fdf-744100d76ad6' is not a resource


Can anyone tell me the correct command for this?


Which version of pcs are you using (and what distribution)?  This has been fixed 
upstream.  (Below is a test from my system using the upstream pcs).


[root@rh7-1 pcs]# pcs constraint location D1 rule score=-INFINITY \#uname ne 
vsanqa11 and \#uname ne vsanqa12

[root@rh7-1 pcs]# pcs constraint
Location Constraints:
  Resource: D1
Constraint: location-D1
  Rule: score=-INFINITY boolean-op=and
Expression: #uname ne vsanqa11
Expression: #uname ne vsanqa12

Thanks,
Chris







On Tue, May 27, 2014 at 11:01 AM, Andrew Beekhof <and...@beekhof.net> wrote:


On 27 May 2014, at 2:37 pm, K Mehta <kiranmehta1...@gmail.com> wrote:

 > So is globally-unique=false correct in my case ?

yes

 >
 >
 > On Tue, May 27, 2014 at 5:30 AM, Andrew Beekhof <and...@beekhof.net> wrote:
 >
 > On 26 May 2014, at 9:56 pm, K Mehta <kiranmehta1...@gmail.com> wrote:
 >
 > > What I understand from "globally-unique=false" is as follows:
 > > the agent handling the resource does exactly the same processing on all
 > > nodes. For this resource, the agent on every node uses exactly the same
 > > resources (files, processes, the same parameters to agent entry points, etc).
 > >
 > > In the case of my resource, the agent on all nodes executes the same
 > > "command" to find the score. The driver present on all nodes makes sure
 > > that the node to be promoted is the one that reports the highest score as
 > > output of the "command". The score is reported to the cluster manager using
 > > (/usr/sbin/crm_master -Q -l reboot -v $score) in the monitor entry point.
 > > Until this score is reported, the agent on the other node will just delete
 > > the score using /usr/sbin/crm_master -Q -l reboot -D in the monitor entry point.
 > >
 > > I want to make sure that the resource does not run on nodes other than
 > > $node1 and $node2. To achieve this I use the following commands.

Re: [Pacemaker] pcs command does not work as expected.

2014-04-08 Thread Chris Feist

On 03/24/2014 08:55 PM, Naoya Anzai wrote:

Hi all,

I'm using pcs 0.9.115 on fedora 20.


Which version of the pcs rpm are you using?  (rpm -q pcs)

This issue has recently been fixed, but there may not yet be a Fedora 
build.

Thanks,
Chris



---
[root@saturn ~]# pcs --version
0.9.115
[root@saturn ~]# cat /etc/redhat-release
Fedora release 20 (Heisenbug)
---

I want to add the location rule property "boolean-op" using pcs,
but it seems this is not implemented in this version of pcs...

---
[root@saturn ~]# pcs -f pgsql_cfg constraint location vip-backup rule score="-INFINITY" 
pgsql-status ne "HS:sync" and pgsql-status ne "PRI"
[root@saturn ~]# pcs -f pgsql_cfg constraint location --full
Location Constraints:
   Resource: vip-backup
 Constraint: location-vip-backup
   Rule: score=-INFINITY  (id:location-vip-backup-rule)
 Expression: pgsql-status ne HS:sync  (id:location-vip-backup-rule-expr)
[root@saturn ~]# cat pgsql_cfg|grep -e location-vip-backup-rule -A2 -B1
<rsc_location id="location-vip-backup" rsc="vip-backup">
  <rule id="location-vip-backup-rule" score="-INFINITY">
    <expression attribute="pgsql-status" id="location-vip-backup-rule-expr" operation="ne" value="HS:sync"/>
  </rule>
</rsc_location>
---

If this feature is implemented, can anybody show me how to use it?

Incidentally, by editing the XML file directly, pcs can read and display it
correctly.
---
#After edit pgsql_cfg
[root@saturn ~]# pcs -f pgsql_cfg constraint location --full
Location Constraints:
   Resource: vip-backup
 Constraint: location-vip-backup
   Rule: score=-INFINITY boolean-op=and  (id:location-vip-backup-rule)
 Expression: pgsql-status ne HS:sync  (id:location-vip-backup-rule-expr)
 Expression: pgsql-status ne PRI  (id:location-vip-backup-rule-expr-1)
---

Regards,

Naoya

---
Naoya Anzai
Engineering Department
NEC Soft, Ltd.
E-Mail: anzai-na...@mxu.nes.nec.co.jp
---




Re: [Pacemaker] pcs available on debian wheezy?

2014-03-19 Thread Chris Feist

On 03/19/2014 09:17 AM, Vladimir wrote:

Hey everyone,

does anybody know if there is pcs already available on debian wheezy?

I first tried to ask on debian-ha-maintainers (subject: crmsh and pcs on
wheezy) but maybe that's not the right list to address this question.

I'm also wondering whether it makes sense to switch to pcs on wheezy already (if
that is possible at all).


There currently isn't a debian pcs package although you should be able to 
download the source from here: https://github.com/feist/pcs/archive/master.tar.gz


Let me know if you run into any issues and we can get them fixed.

I'm planning on getting Debian and Ubuntu packages built for pcs in the coming 
months, but I haven't had a chance yet.


Thanks,
Chris



Is it possible to start with crmsh and later switch to pcs? Is the
underlying CIB the same, with crmsh and pcs just being different frontends (like
cibadmin)?

Thanks in advance.

Kind regards
Vladimir



Re: [Pacemaker] Colocation set options (pcs syntax)

2014-03-04 Thread Chris Feist

On 02/28/2014 02:32 AM, Asgaroth wrote:



pcs constraint colocation set fs_ldap-clone sftp01-vip ldap1 sequential=true

Let me know if this does or doesn't work for you.


I have been testing this for a couple of days and I think I must be doing
something wrong. First, though, the command itself completes successfully:

# pcs constraint show --full

   Resource Sets:
 set fs_ldap-clone sftp01-vip ldap1 sequential=true (id:pcs_rsc_set)
(id:pcs_rsc_colocation)

However, if I try to test it by moving, for example, the "sftp01-vip" resource
group to another node, it does not move the ldap1 service with it; example
below:


I think what you want is a resource group, which will keep all the resources 
together.  A resource set just simplifies creating an A -> B -> C ordering.


If you put fs_ldap-clone, sftp01-vip & ldap1 all in a group, they will stay 
together.  (You can then attach location constraints to the group to set a 
preferred node.)


Thanks,
Chris



Cluster state before resource move:
http://pastebin.com/a13ZhyRq

Then I do "pcs resource move sftp01-vip bfievsftp02", which moves resources to
the node (except the associated ldap1 service)

Cluster state after the move:
http://pastebin.com/BSyTBEhX

Full constraint list:
http://pastebin.com/ng6m4C1Z

Here is what I am trying to achieve:
[1] The sftp0[1-3]-vip groups each have a preferred node (sftp01-vip=node1,
sftp02-vip=node2, sftp03-vip=node3)
[2] The sftp0[1-3] lsb resources are colocated with sftp0[1-3]-vip groups
[3] The ldap[1-3] lsb resources are colocated with sftp0[1-3]-vip groups

I managed to achieve the above using the constraints listed in the
constraint output; however, the sftp0[1-3] and ldap[1-3] lsb resources also
depend on fs_cdr-clone and fs_ldap-clone, respectively, being available.

I thought I would be able to achieve that file-system dependency using the
colocation set, but this does not seem to work the way I expect it to; quite
possibly my logic is slightly (largely) off :)

How would I ensure that, in the case of a node failure, the vip group moves to a
node which has the fs_cdr and fs_ldap file system resources available? If I can
do that, I can keep the colocation rule for the sftp/ldap service with the
vip group. Or am I thinking about this the wrong way around?

Any tips/suggestions would be appreciated.

Thanks






Re: [Pacemaker] Colocation set options (pcs syntax)

2014-02-24 Thread Chris Feist

On 02/24/2014 08:28 AM, Asgaroth wrote:

Hi All,

I have several resources that depend on a cloned shared file system and a VIP
that need to be up and running before the resource can start. Reading the
Pacemaker documentation, it looks like colocation sets are what I am after. The
documentation says you can define a colocation set and set the sequential
option to "true" if you need the resources to start sequentially; I guess this
then becomes an ordered colocation set, which is what I am after. The
documentation I was reading is here:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-collocation.html

According to the pcs man page I can set up a colocation set as follows:

 colocation set <resource1> <resource2> [resourceN]... [setoptions] ...
[set <resourceX> <resourceY> ...] [setoptions
<name>=<value>...]

However when I run the following command to create the set:

pcs constraint colocation set fs_ldap-clone sftp01-vip ldap1 setoptions
sequential=true


I think there's an error in the man page (which I'll work on getting fixed). 
Can you try the following (removing 'setoptions' from your command):



pcs constraint colocation set fs_ldap-clone sftp01-vip ldap1 sequential=true


Let me know if this does or doesn't work for you.

Thanks,
Chris


I get an error stating:

Error: Unable to update cib
Call cib_replace failed (-203): Update does not conform to the configured
schema

And then a dump of the current running info base.

Am I reading the man page incorrectly, or is this a bug I need to report?

Thanks




Re: [Pacemaker] "pcs cluster status" options seems to not work

2014-02-20 Thread Chris Feist

> [root@mici-admin ~]# crm resource show libvirtd-clone
> resource libvirtd-clone is running on: mici-admin-ptp
> resource libvirtd-clone is running on: mici-admin2-ptp
> 
> ... but does not use crmsh.
> 
> 
> [root@mici-admin ~]# pcs cluster status

Can you try 'pcs status'?  Does that give you better output?

> Cluster Status:
>  Last updated: Thu Feb 20 21:43:53 2014
>  Last change: Thu Feb 20 18:56:35 2014 via crm_resource on mici-admin-ptp
>  Stack: cman
>  Current DC: mici-admin2-ptp - partition with quorum
>  Version: 1.1.10-14.el6_5.2-368c726
>  2 Nodes configured
>  8 Resources configured
> 
> PCSD Status:
> Error: no nodes found in corosync.conf
> [root@mici-admin ~]# pcs cluster status cluster
> Cluster Status:
>  Last updated: Thu Feb 20 21:44:03 2014
>  Last change: Thu Feb 20 18:56:35 2014 via crm_resource on mici-admin-ptp
>  Stack: cman
>  Current DC: mici-admin2-ptp - partition with quorum
>  Version: 1.1.10-14.el6_5.2-368c726
>  2 Nodes configured
>  8 Resources configured
> 
> PCSD Status:
> Error: no nodes found in corosync.conf
> [root@mici-admin ~]# pcs cluster status rewources
> Cluster Status:
>  Last updated: Thu Feb 20 21:44:18 2014
>  Last change: Thu Feb 20 18:56:35 2014 via crm_resource on mici-admin-ptp
>  Stack: cman
>  Current DC: mici-admin2-ptp - partition with quorum
>  Version: 1.1.10-14.el6_5.2-368c726
>  2 Nodes configured
>  8 Resources configured
> 
> PCSD Status:
> Error: no nodes found in corosync.conf
> 
> Bob Haxo
> 
> 
> 


Re: [Pacemaker] possible regex error in "pcs resource enable/disable"

2014-02-19 Thread Chris Feist

On 02/19/2014 11:16 AM, Bob Haxo wrote:

Encountered this error with pcs but not with crm.  Looks like a regex
error, with the existing regex grabbing all strings starting with the
string "libvirtd-clone" ...


[root@mici-admin2 ~]# pcs resource disable libvirtd-clone
Error: Error performing operation: Invalid argument
Multiple attributes match name=target-role
   Value: Started(id=libvirtd-clone-meta-target-role)
   Value: Started(id=libvirtd-clone-meta_attributes-target-role)

[root@mici-admin ~]# crm resource stop libvirtd-clone

<>

[root@mici-admin ~]# pcs resource enable libvirtd-clone
Error: Error performing operation: Invalid argument
Multiple attributes match name=target-role
   Value: Stopped(id=libvirtd-clone-meta-target-role)
   Value: Stopped(id=libvirtd-clone-meta_attributes-target-role)

xml:
<meta_attributes id="libvirtd-clone-meta">
  <nvpair id="libvirtd-clone-meta-target-role" name="target-role" value="Stopped"/>
</meta_attributes>
<meta_attributes id="libvirtd-clone-meta_attributes">
  <nvpair id="libvirtd-clone-meta_attributes-target-role" name="target-role" value="Stopped"/>
</meta_attributes>

Regards,
Bob Haxo


Can you send the output of the following command:
pcs --debug resource disable libvirtd-clone

Thanks,
Chris





Re: [Pacemaker] command to dump cluster configuration in "pcs" format?

2014-01-15 Thread Chris Feist

On 01/15/2014 05:02 PM, Bob Haxo wrote:

Greetings,

The command  "crm configure show" dumps the cluster configuration in a format
that is suitable for use in configuring a cluster.

The command "pcs config" generates nice human readable information, but this is
not directly suitable for use in configuring a cluster.

Is there a "pcs" command analogous to the "crm" command that dumps the cluster
configuration in "pcs" format?


Currently there is not.  We may at some point look into this, but it isn't on my 
short term list of things to do.


Thanks,
Chris



Regards,
Bob Haxo




Re: [Pacemaker] Unable to communicate with

2014-01-08 Thread Chris Feist

On 12/13/2013 12:17 AM, Praveen wrote:

Dear all,

I'm facing a problem authenticating the other nodes, using Fedora 20 beta, while
setting up the cluster with Pacemaker.

Details are attached to this mail.


[root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2

pcmk-1: Already authorized

Unable to communicate with pcmk-2


It looks like there's an issue with pcsd not running on pcmk-2

Which version of pcs are you running with the beta? (rpm -q pcs)

Can you try the following commands and send me the output (run these on both 
pcmk-1 and pcmk-2):

telnet pcmk-2 2224
telnet pcmk-1 2224
ping pcmk-2
service pcsd status
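
If telnet isn't installed, the same reachability check can be done with a few lines of Python (2224 is the pcsd port used above; the host names are the ones from this thread and will only resolve on the cluster itself):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Host names from the thread; these only resolve on the cluster nodes.
for host in ("pcmk-1", "pcmk-2"):
    print(host, "pcsd reachable:", port_open(host, 2224))
```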


Thanks,
Chris




-
Regards,
Praveen




Re: [Pacemaker] Error: node does not appear to exist in configuration

2014-01-08 Thread Chris Feist

On 01/08/2014 04:11 PM, Andrew Beekhof wrote:


On 6 Jan 2014, at 8:09 pm, Jerald B. Darow  wrote:


Where am I going wrong here?


Good question... Chris?


Which version of pcs are you using (pcs --version), and what operating system and 
version (e.g. RHEL 6.5)?


There was a bug in pcs on RHEL/CentOS which could cause this issue; it has been 
fixed in a z-stream release (pcs-0.9.90-2.el6_5.2 for RHEL 6.5).


Thanks,
Chris





[root@zero mysql]# pcs cluster standby zero.acenet.us
Error: node 'zero.acenet.us' does not appear to exist in configuration
[root@zero mysql]# pcs cluster cib | grep "node id"
  
  

---

standby <node> | --all
    Put specified node into standby mode (the node specified will no longer
    be able to host resources), if --all is specified all nodes will be put
    into standby mode.

---




Re: [Pacemaker] constraint colocation or resource group

2014-01-02 Thread Chris Feist

On 01/02/2014 01:54 PM, Luc Paulin wrote:

Thanks for the syntax; however, it does not seem to work.
When I try it, it immediately returns the help/usage text, as if the command is
not supported.

[root@fwcorp-01 (NEW FW HOST) ~]$pcs constraint colocation set vip_v253_178
vip_v253_179 vip_v253


Which version of pcs are you using?  (colocation sets were included in 0.9.62)



Usage: pcs constraint [constraints]...
Manage resource constraints

Commands:
[...cut...]


The only possible syntax looks to be:
pcs constraint colocation [show [all]]
pcs constraint colocation add <source resource> <target resource> [score]
[options]
pcs constraint colocation rm <source resource> <target resource>

I found the following link that refers to constraint colocation sets
(http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-collocation.html),
but there is no reference to the pcs command.





--
  !
( o o )
  --oOO(_)OOo--
Luc Paulin
email: paulinster(at)gmail.com
Skype: paulinster



2014/1/2 Chris Feist <cfe...@redhat.com>

On 01/02/2014 10:37 AM, Lars Marowsky-Bree wrote:

On 2014-01-02T11:22:01, Luc Paulin <paulins...@gmail.com> wrote:

That makes sense to use colocation. So I guess that I should define a
"master" resource and tell each other resource that it should be
colocated on the same node as the "master" resource.


You could also use a resource set to achieve this, but I don't know the
pcs syntax for it.


You can create a resource set with pcs like this:
pcs constraint colocation set resourceA resourceB ...

Thanks!
Chris



Regards,
  Lars









Re: [Pacemaker] constraint colocation or resource group

2014-01-02 Thread Chris Feist

On 01/02/2014 10:37 AM, Lars Marowsky-Bree wrote:

On 2014-01-02T11:22:01, Luc Paulin  wrote:


That makes sense to use colocation. So I guess that I should define a
"master" resource and tell each other resource that it should be colocated
on the same node as the "master" resource.


You could also use a resource set to achieve this, but I don't know the
pcs syntax for it.


You can create a resource set with pcs like this:
pcs constraint colocation set resourceA resourceB ...

Thanks!
Chris




Regards,
 Lars






Re: [Pacemaker] Beginner Question: not able to shutdown 2nd node

2013-12-03 Thread Chris Feist

On 11/25/2013 09:31 PM, T.J. Yang wrote:




On Mon, Nov 25, 2013 at 8:44 PM, Digimer <li...@alteeve.ca> wrote:

On 25/11/13 21:18, T.J. Yang wrote:
 > Hi
 >
 > I need help here, Looks like I missed a step to startup two nodes to
 > listen on port 2224 ?
 >
 > [root@ilclpm01 ~]# pcs --version
 > 0.9.90
 > [root@ilclpm01 ~]# pcs --debug cluster stop ilclpm02
 > Sending HTTP Request to: https://ilclpm02:2224/remote/cluster_stop
 > Data: None
 > Response Reason: [Errno 111] Connection refused
 > Error: unable to stop all nodes
 > Unable to connect to ilclpm02 ([Errno 111] Connection refused)

Is pcsd running?

If this is RHEL / CentOS 6, then I do not believe pcsd works.


Digimer is correct, we don't support pcsd in RHEL 6.5.  This only means that you 
can't do a cluster setup from just one node.  We're looking into adding pcsd 
(and web GUI) support in future versions of RHEL 6 and 7.


Thanks,
Chris




Hi digimer

Thanks for responding to  my question.
I can't find the pcsd binary in the three packages I installed.

[root@ilclpm01 ~]# rpm -qil pcs cman pacemaker |grep pcsd
[root@ilclpm01 ~]#

following are more details about my test cluster.


3617 ?SLsl   0:07 corosync -f
  3674 ?Ssl0:00 fenced
  3690 ?Ssl0:00 dlm_controld
  3749 ?Ssl0:00 gfs_controld
  3832 pts/0S  0:01 pacemakerd
  3838 ?Ss 0:01  \_ /usr/libexec/pacemaker/cib
  3839 ?Ss 0:01  \_ /usr/libexec/pacemaker/stonithd
  3840 ?Ss 0:02  \_ /usr/libexec/pacemaker/lrmd
  3841 ?Ss 0:01  \_ /usr/libexec/pacemaker/attrd
  3842 ?Ss 0:00  \_ /usr/libexec/pacemaker/pengine
  3843 ?Ss 0:01  \_ /usr/libexec/pacemaker/crmd


[root@ilclpm01 ~]# rpm -q cman
cman-3.0.12.1-59.el6.x86_64
[root@ilclpm01 ~]# rpm -q pacemaker
pacemaker-1.1.10-14.el6.x86_64
[root@ilclpm01 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@ilclpm01 ~]#


--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?





--
T.J. Yang




Re: [Pacemaker] pcs ping connectivity rule

2013-12-02 Thread Chris Feist

On 11/20/2013 03:30 PM, Martin Ševčík wrote:

Hello,
I'm having trouble setting up a 'best connectivity' rule using pcs on RHEL 6.4. I
have a two-node setup with a ping resource defined as:

pcs resource create ping ocf:pacemaker:ping host_list="10.242.40.251
10.242.40.252" multiplier="1000"

and location rule defined as:

pcs constraint location MyResource rule defined pingd

but this setup doesn't work. When I make one of the target hosts inaccessible on
the active node using iptables, the score drops from 2000 to 1000 but the
resource doesn't move. I also tried the old crm syntax:

pcs constraint location MyResource rule pingd: defined pingd


Can you try the following:
pcs constraint location MyResource rule score=pingd defined pingd
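
For context, a non-numeric score like "score=pingd" should correspond to score-attribute in the CIB: the rule's score is taken from each node's pingd attribute rather than a constant, so better-connected nodes score higher. A sketch of the resulting fragment (constraint and resource ids are illustrative), built with Python's xml.etree:

```python
import xml.etree.ElementTree as ET

# score-attribute="pingd" makes the node's pingd attribute the rule score,
# which is what lets the resource follow the best-connected node.
loc = ET.Element("rsc_location", {"id": "location-MyResource", "rsc": "MyResource"})
rule = ET.SubElement(loc, "rule", {
    "id": "location-MyResource-rule",
    "score-attribute": "pingd",
})
ET.SubElement(rule, "expression", {
    "id": "location-MyResource-rule-expr",
    "attribute": "pingd",
    "operation": "defined",  # only nodes where pingd is defined qualify
})
print(ET.tostring(loc, encoding="unicode"))
```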



but this results in a syntax error. I have the latest pcs 0.9.100 from git.

I appreciate any help.

Thanks,
m.



Re: [Pacemaker] Does pcs 0.9.61 and higher support cman+pacemaker?

2013-11-13 Thread Chris Feist

On 11/13/13 09:57, Vladimir Broz wrote:



I'm using a setup on CentOS 6.4 with:
pcs - 0.9.90
pacemaker - 1.1.10
cman - 3.0.12.1

Has this problem already been solved?


Yes, this is a known bug (https://bugzilla.redhat.com/show_bug.cgi?id=1029129).

There is a simple one-line patch for the issue until new packages are available:

https://github.com/feist/pcs/commit/8b888080c37ddea88b92dfd95aadd78b9db68b55

Thanks,
Chris



When I try:
[root@sbct1 ~]# pcs cluster standby sbct2
Error: node 'sbct2' does not appear to exist in configuration

[root@sbct1 ~]# crm_node --list
sbct1 sbct2

Thanks in advance,
-Vladimir





Re: [Pacemaker] Master/slave colocation

2013-10-28 Thread Chris Feist

On 10/17/13 11:15, Sam Gardner wrote:

I have a two-node, six resource cluster configured.

Two VIP addresses w/link monitoring, and a DRBD master/slave set configured
exactly as in the Clusters from Scratch documentation.

I want to make the DRBD master always be on the same node as the ExternalVIP in
my configuration.

To do this, I run:
# pcs constraint colocation add WebDataClone with ExternalVIP

This causes the secondary DRBD node to stop.

Is this expected behavior for the pcs constraint command that I showed? If so,
what is the proper procedure for making the Master node always be running on the
same node as my ExternalVIP?


Can you try 'pcs constraint colocation add master WebDataClone with ExternalVIP'

Thanks,
Chris




Thanks for any help - status follows.
Sam Gardner

*PCS status and cat /proc/drbd before*

Cluster name:
Last updated: Thu Oct 17 16:04:40 2013
Last change: Thu Oct 17 16:03:02 2013 via cibadmin on pacemaker-master
Stack: classic openais (with plugin)
Current DC: pacemaker-master - partition with quorum
Version: 1.1.8-1
2 Nodes configured, 2 expected votes
6 Resources configured.


Online: [ pacemaker-master pacemaker-slave ]

Full list of resources:

  ExternalVIP(**):   Started pacemaker-master
  Eth1Monitor(**):Started pacemaker-master
  InternalVIP(**):   Started pacemaker-master
  Eth2Monitor(**):Started pacemaker-master
  Master/Slave Set: WebDataClone [WebData]
  Masters: [ pacemaker-master ]
  Slaves: [ pacemaker-slave ]


[root@pacemaker-master ~]# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: E750A52708C7363DA649D31

  1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
 ns:0 nr:0 dw:0 dr:664 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

*PCS status and cat /proc/drbd after*

Cluster name:
Last updated: Thu Oct 17 16:13:22 2013
Last change: Thu Oct 17 16:13:05 2013 via cibadmin on pacemaker-master
Stack: classic openais (with plugin)
Current DC: pacemaker-master - partition with quorum
Version: 1.1.8-1.tos2-394e906
2 Nodes configured, 2 expected votes
6 Resources configured.


Online: [ pacemaker-master pacemaker-slave ]

Full list of resources:

  ExternalVIP(**):   Started pacemaker-master
  Eth1Monitor(**):Started pacemaker-master
  InternalVIP(**):   Started pacemaker-master
  Eth2Monitor(**):Started pacemaker-master
  Master/Slave Set: WebDataClone [WebData]
  Masters: [ pacemaker-master ]
  Stopped: [ WebData:1 ]


version: 8.3.11 (api:88/proto:86-96)
srcversion: E750A52708C7363DA649D31

  1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-
 ns:0 nr:0 dw:0 dr:664 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0




Re: [Pacemaker] PCS vs CRM

2013-08-13 Thread Chris Feist

On 08/13/2013 08:40 AM, Martin Arrieta wrote:

Hi all,

I'm trying to reproduce the following command with pcs without luck

crm primitive p_mysql ocf:percona:mysql \
   params config="/etc/my.cnf" pid="/var/lib/mysql/mysqld.pid"
socket="/var/run/mysqld/mysqld.sock" replication_user="repl_user" \
  replication_passwd="repluser" max_slave_lag="60"
evict_outdated_slaves="false" binary="/usr/libexec/mysqld" \
  test_user="test_user" test_passwd="testuser" \
   op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \
   op monitor interval="2s" role="Slave" OCF_CHECK_LEVEL="1" \
   op start interval="0" timeout="60s" \
   op stop interval="0" timeout="60s"

You can find the resource here:

https://github.com/percona/percona-pacemaker-agents/blob/master/agents/mysql_prm

And the xml of the resource  here:

http://pastebin.com/yRMV2SVj

Any help will be greatly appreciated!


Previous versions of pcs didn't support the OCF_CHECK_LEVEL option for op 
monitors.

If you'd be willing to try out the upstream version here:
https://github.com/feist/pcs

you shouldn't have a problem adding that resource:

The command should look something like this:
pcs resource create p_mysql ocf:percona:mysql \
 config="/etc/my.cnf" pid="/var/lib/mysql/mysqld.pid" \
 socket="/var/run/mysqld/mysqld.sock"  \
 replication_user="repl_user" replication_passwd="repluser" \
 max_slave_lag="60" evict_outdated_slaves="false" \
 binary="/usr/libexec/mysqld" test_user="test_user" test_passwd="testuser" \
 op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \
 op monitor interval="2s" role="Slave" OCF_CHECK_LEVEL="1" \
 op start interval="0" timeout="60s" \
 op stop interval="0" timeout="60s"

Let me know if you have any issues.

Thanks!
Chris



Martin.





Re: [Pacemaker] Does pcs 0.9.61 and higher support cman+pacemaker?

2013-08-09 Thread Chris Feist

On 08/09/2013 06:12 AM, Nikita Staroverov wrote:

Hello all!

Recently, I've tried to use pcs 0.9.61 in CentOS 6.4 with cman-based pacemaker
cluster, but pcs tries to get some cluster information from corosync.conf and
many functions don't work.
Is it normal?


I'm working on getting pcs working in that kind of environment; which commands aren't
working for you?




With best regards, Nikita Staroverov.



Re: [Pacemaker] Moving cloned resources

2013-08-08 Thread Chris Feist

On 08/08/2013 01:25 PM, Matias R. Cuenca del Rey wrote:

Hi,

This is my first mail. I'm playing with an active/active cluster with
cman+pacemaker.
I have 3 nodes working great. When I reboot one node, my IP resource moves to
another node, but when the rebooted node comes back, my IP resource doesn't move
back. I tried to move it manually with pcs but I get the following error:

[root@www-proxylb01 ~]# pcs config
Corosync Nodes:

Pacemaker Nodes:
  www-proxylb01 www-proxylb02 www-proxylb03

Resources:
  Clone: ip-xxx.xxx.xxx.xxx-clone
   Resource: ip-xxx.xxx.xxx.xxx (provider=heartbeat type=IPaddr2 class=ocf)
Attributes: ip=xxx.xxx.xxx.xxx cidr_netmask=32
clusterip_hash=sourceip-sourceport
Operations: monitor interval=30s
  Clone: fs-usr.share.haproxy-clone
   Resource: fs-usr.share.haproxy (provider=heartbeat type=Filesystem class=ocf)
Attributes: device=/dev/xvdc directory=/usr/share/haproxy/ fstype=gfs2
  Clone: haproxy-xxx.xxx.xxx.xxx-clone
   Resource: haproxy-xxx.xxx.xxx.xxx (provider=heartbeat type=haproxy class=ocf)
Attributes: conffile=/etc/haproxy/haproxy.cfg
Operations: monitor interval=30s

Location Constraints:
Ordering Constraints:
   ip-xxx.xxx.xxx.xxx-clone then haproxy-xxx.xxx.xxx.xxx-clone
   fs-usr.share.haproxy-clone then haproxy-xxx.xxx.xxx.xxx-clone
Colocation Constraints:
   haproxy-xxx.xxx.xxx.xxx-clone with ip-xxx.xxx.xxx.xxx-clone
   haproxy-xxx.xxx.xxx.xxx-clone with fs-usr.share.haproxy-clone
   fs-usr.share.haproxy-clone with ip-xxx.xxx.xxx.xxx-clone

Cluster Properties:
  dc-version: 1.1.8-7.el6-394e906
  cluster-infrastructure: cman
  expected-quorum-votes: 2
  stonith-enabled: false
  resource-stickiness: 100


[root@www-proxylb01 ~]# pcs status
Last updated: Thu Aug  8 15:17:09 2013
Last change: Wed Aug  7 16:32:10 2013 via crm_attribute on www-proxylb01
Stack: cman
Current DC: www-proxylb03 - partition with quorum
Version: 1.1.8-7.el6-394e906
3 Nodes configured, 2 expected votes
9 Resources configured.


Online: [ www-proxylb01 www-proxylb02 www-proxylb03 ]

Full list of resources:

  Clone Set: ip-xxx.xxx.xxx.xxx-clone [ip-xxx.xxx.xxx.xxx] (unique)
  ip-xxx.xxx.xxx.xxx:0(ocf::heartbeat:IPaddr2):Started www-proxylb01
  ip-xxx.xxx.xxx.xxx:1(ocf::heartbeat:IPaddr2):Started www-proxylb01
  ip-xxx.xxx.xxx.xxx:2(ocf::heartbeat:IPaddr2):Started www-proxylb03
  Clone Set: fs-usr.share.haproxy-clone [fs-usr.share.haproxy]
  Started: [ www-proxylb01 www-proxylb03 ]
  Stopped: [ fs-usr.share.haproxy:2 ]
  Clone Set: haproxy-xxx.xxx.xxx.xxx-clone [haproxy-xxx.xxx.xxx.xxx]
  Started: [ www-proxylb01 www-proxylb03 ]
  Stopped: [ haproxy-xxx.xxx.xxx.xxx:2 ]

[root@www-proxylb01 ~]# pcs resource move ip-xxx.xxx.xxx.xxx:1 www-proxylb02


Which version of pcs, pacemaker and corosync are you running?


Error moving/unmoving resource
Error performing operation: Update does not conform to the configured schema

Thanks a lot in advance


Matías R. Cuenca del Rey




Re: [Pacemaker] Help required translating crm commands into pcs ones

2013-07-16 Thread Chris Feist

On 07/15/13 09:02, Alex Hemsley wrote:

Hi,

We have some telephony failover equipment (Digium R850 appliances) that came
with a sample Pacemaker configuration using the “crm configure load update”
method. Now we are using Fedora 18 (kernel 3.9.4-200, pacemaker 1.1.9-0.1), so pcs
is the preferred management interface.

For most of the commands provided I have been able to convert the crm commands
into pcs syntax; however, there are three still causing me difficulties. Can
anyone help? Below are the crm commands; I know they should become “pcs
constraint” commands, but I am uncertain how to reformat the arguments for pcs:

location Asterisk-with-ping Asterisk \

 rule $id="Asterisk-with-ping-rule" -inf: not_defined pingd or pingd 
lte 0

colocation Everything-with-Asterisk inf: ( rseries0_ms:Master rseries1_ms:Master
Asterisk_ms:Master ) ( ClusterIP Asterisk_fs ) Asterisk

order Asterisk-after-Everything inf: ( rseries0_ms:promote rseries1_ms:promote
Asterisk_ms:promote ) ( ClusterIP Asterisk_fs ) Asterisk:start



Alex,

Support for these rules was just recently added to pcs and is currently 
present in F18.  If you'd like, I can build you a test package for F18 (that will 
be similar to what will be released for F19 soon).


The syntax would look something like this:
pcs constraint location Asterisk rule -INFINITY: not_defined pingd or pingd lte 0

I'm still adding support for resource_sets (should be in by the end of this 
week).  I can let you know as soon as that is ready to go.  But that syntax will 
look something similar to this:


pcs constraint colocation add ( Master rseries0 with Master rseries1_ms with 
Master Asterisk_ms ) with (ClusterIP with Asterisk_fs) with (Asterisk) INFINITY


pcs constraint order add ( promote rseries0_ms then promote rseries1_ms then 
promote Asterisk_ms ) then (ClusterIP then Asterisk_fs) then (start Asterisk)


Thanks!
Chris



Kind Regards,

Alex



Cobalt Telephone Technologies Ltd is registered in England under number 3151938.
Registered Office: Intec 2, Wade Road, Basingstoke, Hampshire, RG24 8NE.
"RingGo" is a trading name of Cobalt Telephone Technologies Ltd.

Disclaimer: Please be aware that messages sent over the Internet may not be
secure and should not be seen as forming a legally binding contract unless
otherwise stated. The contents of this e-mail may be privileged and are
confidential. It may not be disclosed to or used by anyone other than the
addressee(s), nor copied in any way. If received in error, please advise the
sender and then delete it from your system.


This email has been scanned for all viruses by the MessageLabs Email
Security System.




Re: [Pacemaker] pcs and ping location rule

2013-06-26 Thread Chris Feist

On 06/24/13 16:33, Mailing List SVR wrote:

Hi,

I defined this clone resource for connectivity check:

pcs resource create ping ocf:pacemaker:ping host_list="10.0.2.2"
multiplier="1000" dampen=10s op monitor interval=60s

pcs resource clone ping ping_clone globally-unique=false

these works, but now I need to add a location rules to make the service switch
on the node that reach the gw, with crm I used something like this

location database_on_connected_node database_resource \
 rule $id="database_on_connected_node-rule" pingd: defined pingd


how to do the same using pcs?


I'm still working on fully implementing rules in pcs, but you can run the 
following command with the latest pcs (this was just fixed today).


pcs constraint location database_resource rule pingd: defined pingd
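For reference, both the crm rule quoted above and this pcs command expand to the same rsc_location fragment in the CIB. A minimal Python sketch of the expected XML (the id values are illustrative, not what pcs would actually generate):

```python
import xml.etree.ElementTree as ET

# Approximate CIB fragment for the constraint: score-attribute="pingd" tells
# pacemaker to use each node's pingd attribute value as the location score,
# and the expression limits the rule to nodes where pingd is defined.
loc = ET.Element("rsc_location", id="database_on_connected_node",
                 rsc="database_resource")
rule = ET.SubElement(loc, "rule", {"score-attribute": "pingd"},
                     id="database_on_connected_node-rule")
ET.SubElement(rule, "expression", id="database_on_connected_node-expr",
              attribute="pingd", operation="defined")

print(ET.tostring(loc, encoding="unicode"))
```

Comparing this sketch against the output of 'pcs cluster cib' is a quick way to confirm the rule was created as intended.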

Let me know if you have any issues implementing this setting.

Thanks!
Chris



thanks
Nicola






Re: [Pacemaker] CRM location rules to PCS

2013-06-13 Thread Chris Feist

On 06/13/13 03:32, Gregg Jaskiewicz wrote:

How does one convert a crm rule that is a bit more complicated to pcs? Like
so:

location rsc_location-2 msPostgreSQL \
 rule $id="rsc_location-2-rule" $role="master" 200: #uname eq dev02 \
 rule $id="rsc_location-2-rule-0" $role="master" 100: #uname eq dev01 \
 rule $id="rsc_location-2-rule-1" $role="master" -inf: defined
fail-count-devMasterVIP \
 rule $id="rsc_location-2-rule-2" -inf: not_defined pingNodes or
pingNodes lt 100

Short of rewriting it in XML.


I'm still working on developing a comprehensive pcs syntax for dealing with 
rules; it should be available shortly, and I will let you know as soon as it is.


Thanks!
Chris




--
GJ




Re: [Pacemaker] What is the pcs equivalent of crm configure show?

2013-06-03 Thread Chris Feist

On 06/03/13 10:18, Teo En Ming (Zhang Enming) wrote:

Dear list,

May I know what is the pcs equivalent of crm configure show?

Thank you very much.


This command will give you the full configuration and status of pacemaker in xml 
format:


pcs cluster cib

If you want a more user-friendly format, you can use these commands to get information 
about various components:


Print all resources and their configured options:
pcs resource --all

Print information about currently configured cluster:
pcs cluster status

Print all stonith devices and their configured options:
pcs stonith --all

Print all property settings (use --all if you want to see the defaults as well):
pcs property

Thanks,
Chris
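Since 'pcs cluster cib' emits the raw CIB XML, it can also be post-processed with standard tools when the pcs summaries aren't enough. A rough Python sketch of that idea, using a hand-written CIB fragment rather than real cluster output:

```python
import xml.etree.ElementTree as ET

# Hand-written CIB fragment for illustration; a real dump would come from
# `pcs cluster cib > cib.xml` on a cluster node.
CIB = """
<cib>
  <configuration>
    <resources>
      <primitive id="ClusterIP" class="ocf" provider="heartbeat" type="IPaddr2"/>
      <primitive id="WebSite" class="ocf" provider="heartbeat" type="apache"/>
    </resources>
  </configuration>
</cib>
"""

def list_resources(cib_xml):
    """Return (id, agent) pairs for every primitive in the CIB."""
    root = ET.fromstring(cib_xml)
    out = []
    for prim in root.iter("primitive"):
        agent = "%s:%s:%s" % (prim.get("class"), prim.get("provider"),
                              prim.get("type"))
        out.append((prim.get("id"), agent))
    return out

print(list_resources(CIB))
```

The same walk works on a full dump, since primitives keep this shape wherever they appear (inside clones, groups, etc.).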



Re: [Pacemaker] pcs equivalent of crm configure erase

2013-04-18 Thread Chris Feist

On 04/17/13 23:05, Vadym Chepkov wrote:


On Apr 17, 2013, at 8:04 PM, Chris Feist wrote:


On 04/17/13 11:13, Vadym Chepkov wrote:


On Apr 17, 2013, at 11:57 AM, T. wrote:


Hi,


b) If I can't do it with pcs, is there a reliable
and secure way to do it with pacemaker low-level tools?

why not just installing the crmsh from a different repository?

This is what I have done on CentOS 6.4.


My sentiments exactly. And "erase" is not the most important missing 
functionality:
crm configure save and crm configure load (update | replace) are what made 
configurations easily manageable
and trackable with version control software.


There is currently a command in pcs ('pcs cluster cib' & 'pcs cluster push 
cib') to save and replace the current cib; however, it saves the actual XML from 
the cib, so reading/editing the file might be a little more complicated than the output 
from 'crm configure save'.


I might be missing something, but how is it different from old dark cibadmin 
days ;) ?


No, you're definitely not missing anything.  The 'pcs cluster cib' output isn't 
pretty.


I'm looking at adding the ability to turn a cib xml file into pcs commands that 
can generate that xml file, but there are several other items I'm working on 
first.  So it may be a little while before that gets added.


Thanks!
Chris



Thanks,
Vadym






Re: [Pacemaker] pcs equivalent of crm configure erase

2013-04-17 Thread Chris Feist

On 04/17/13 11:13, Vadym Chepkov wrote:


On Apr 17, 2013, at 11:57 AM, T. wrote:


Hi,


b) If I can't do it with pcs, is there a reliable
and secure way to do it with pacemaker low-level tools?

why not just installing the crmsh from a different repository?

This is what I have done on CentOS 6.4.


My sentiments exactly. And "erase" is not the most important missing 
functionality:
crm configure save and crm configure load (update | replace) are what made 
configurations easily manageable
and trackable with version control software.


There is currently a command in pcs ('pcs cluster cib' & 'pcs cluster push cib') 
to save and replace the current cib; however, it saves the actual XML from 
the cib, so reading/editing the file might be a little more complicated than 
the output from 'crm configure save'.


Thanks!
Chris



Cheers,
Vadym






Re: [Pacemaker] pcs equivalent of crm configure erase

2013-04-16 Thread Chris Feist

On 04/14/13 02:52, Andreas Mock wrote:

Hi all,

can someone tell me what the pcs equivalent to

crm configure erase is?


From my understanding, 'crm configure erase' will remove everything from the 
configuration file except for the nodes.


Are you trying to clear your configuration out and start from scratch?

pcs has a destroy command (pcs cluster destroy), which will remove all 
pacemaker/corosync configuration and allow you to create your cluster from 
scratch.  Is this what you're looking for?


Or do you need a specific command to keep the cluster running, but reset the cib 
to its defaults?


Thanks!
Chris



Is there a pcs cheat sheet showing the common tasks?

Or a documentation?

Best regards

Andreas





Re: [Pacemaker] pcs: Return code handling not clean

2013-04-16 Thread Chris Feist

On 04/16/13 06:46, Andreas Mock wrote:

Hi all,

as I don't really know where to address this
issue, I am posting it here: on the one hand
as information for people scripting with the
help of 'pcs', and on the other hand in
the hope that a maintainer is listening
and will have a look at this.

Problem: When the cluster is down, 'pcs resource'
shows an error message coming from a subprocess
call of 'crm_resource -L' but exits with an
error code of 0. That's something which can
be improved, especially since the Python code
does have error handling in other places.

So I guess it is a simple oversight.

Look at the following piece of code in
pcs/resource.py:

if len(argv) == 0:
    args = ["crm_resource","-L"]
    output,retval = utils.run(args)
    preg = re.compile(r'.*(stonith:.*)')
    for line in output.split('\n'):
        if not preg.match(line) and line != "":
            print line
    return

retval is totally ignored, while it is handled in
other places. As a result, the script
returns with status 0.
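The fix amounts to checking retval before returning. A minimal sketch of the pattern (hypothetical helper names, not the actual pcs code; a child Python process stands in for 'crm_resource -L'):

```python
import subprocess
import sys

def run(args):
    """Run a command and return (output, returncode), like pcs's utils.run."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output, _ = proc.communicate()
    return output.decode(), proc.returncode

def list_resources():
    # Stand-in for a failing 'crm_resource -L': the child exits with 107.
    output, retval = run([sys.executable, "-c", "raise SystemExit(107)"])
    if retval != 0:
        # Propagate the child's failure instead of silently returning 0.
        return retval
    print(output)
    return 0
```

Whether to pass the child's exact status through (as here) or collapse it to 0/1 is the design question discussed below; either way the caller can no longer mistake a failure for success.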


This is an oversight on my part, I've updated the code to check retval and 
return an error.  Currently I'm not passing through the full error code (I'm 
only returning 0 on success and 1 on failure).  However, if you think it would 
be useful to have this information I would be happy to look at it and see what I 
can do.  I'm planning on eventually having pcs interpret the crm_resource error 
code and provide a more user-friendly output instead of just a return code.


Thanks,
Chris



Interestingly, the error handling of the utils.run call
used all over the module is IMHO a little bit inconsistent.
If I remember correctly, Andrew made some effort in the
past to define a set of return codes coming from the
base cibXXX and crm_XXX tools (I really don't know
how much they are differentiated). Why not pass them
through?

Best regards
Andreas Mock






Re: [Pacemaker] Fedora 17 pcs or crmsh ?

2013-01-22 Thread Chris Feist

On 01/18/13 13:33, E-Blokos wrote:

Hi,
I got this doc:
http://clusterlabs.org/doc/Cluster_from_Scratch.pdf
I'm trying to follow it, but got confused
because pcsd is not included in any Fedora 17 package
even though the doc covers a Fedora 17 installation.
I also saw at
http://clusterlabs.org/doc/
that the Fedora 17 doc concerns crmsh and not pcsd.
I'm starting to get an illogical headache ;)
Where is the truth? :o)
Regards
Franck


You are correct, pcsd isn't present on Fedora 17, but it is present on Fedora 18. 
You can still configure a cluster with just 'pcs'; you just need to execute 
the cluster creation commands on all the nodes (and you can't sync between 
nodes).  Once you have corosync/pacemaker up and running, you can do all your 
management from just one node.  (The same is true for crmsh as well.)


Thanks,
Chris







Re: [Pacemaker] pcsd.service

2012-11-21 Thread Chris Feist

On 11/13/12 07:51, E-Blokos wrote:


- Original Message - From: "Dennis Jacobfeuerborn" 

To: 
Sent: Tuesday, November 13, 2012 8:17 AM
Subject: Re: [Pacemaker] pcsd.service



On 11/13/2012 07:04 AM, E-Blokos wrote:


- Original Message - From: "Dennis Jacobfeuerborn"

To: 
Sent: Tuesday, November 13, 2012 12:42 AM
Subject: Re: [Pacemaker] pcsd.service



On 11/13/2012 04:09 AM, E-Blokos wrote:

Hi,

I'm trying to install pacemaker corosync
with the pdf "install pacemaker from scratch on fedora 17"
after yum install pcs
pcsd.service doesn't exist at all.
where can I find it ?


How old is your pcs package? According to Fedora Koji only the most recent
build actually contains pcsd (pcs-0.9.27-1.fc17).

Regards,
 Dennis


rpm -q pcs
pcs-0.9.3.1-1.fc17.noarch
which repo must I use to get 0.9.27 ?


It looks like updates and updates-testing are both not up-to-date so you
probably have to get this directly from koji:
http://koji.fedoraproject.org/koji/packageinfo?packageID=13172

Not sure if it's a good idea to rely on pcsd at all though when it isn't
even technically released yet.

Regards,
 Dennis


Thanks Dennis.
actually I'm using pacemaker openais on Fedora 10,
maybe it's time to experiment again ;)


You should be able to get the update package here: 
http://koji.fedoraproject.org/koji/buildinfo?buildID=360521


(Just follow the download links).

It should be live in the F17 updates repository in the next few days.

I would definitely recommend upgrading from F10 to F17 (or F18 when it comes 
out) to get the best support for using pacemaker with pcs.


Thanks!
Chris



Regards

Franck




Re: [Pacemaker] Announce: pcs-0.9.26

2012-10-17 Thread Chris Feist

On 10/17/12 03:22, Grüninger, Andreas (LGL Extern) wrote:

Please see here:

https://github.com/feist/pcs/issues


I'm definitely planning on supporting the --prefix option (and possibly options 
to specify the location for each binary).  I should have something shortly if 
you would like to test it out.


Thanks,
Chris



Andreas

-Original Message-
From: Chris Feist [mailto:cfe...@redhat.com]
Sent: Wednesday, 17 October 2012 02:03
To: The Pacemaker cluster resource manager
Cc: linux clustering
Subject: Re: [Pacemaker] Announce: pcs-0.9.26

On 10/08/12 19:27, Chris Feist wrote:

We've been making improvements to the pcs (pacemaker/corosync
configuration
system) command line tool over the past few months.

Currently you can set up a basic cluster (including configuring corosync 2.0 
udpu).

David Vossel has also created a version of the "Clusters from Scratch"
document that illustrates setting up a cluster using pcs.  This should
be showing up shortly.


Just an update: I've updated pcs (to 0.9.27) and included the pcsd daemon 
with the Fedora packages.  You can grab the updated packages here:

http://people.redhat.com/cfeist/pcs/

And you should be able to use the new Clusters from Scratch optimized for the 
pcs CLI here: http://www.clusterlabs.org/doc/

Just a couple things to note (this should be shortly updated in the notes).

To run pcs on Fedora 17/18 you'll need to turn off selinux & disable the 
firewall (or at least allow traffic on port 2224).

To disable SELinux set 'SELINUX=permissive' in /etc/selinux/config and reboot 
To disable the firewall run 'systemctl stop iptables.service' (to permanently 
disable run 'systemctl disable iptables.service')

The pcs_passwd command has been removed.  In its place you can do 
authentication with the hacluster user.  Just set the hacluster user password 
(passwd hacluster) and then use that user and password to authenticate with pcs.

If you have any questions or issues, don't hesitate to contact me; we're 
still working out the bugs in the new pcsd daemon and we appreciate all the 
feedback we can get.

Thanks,
Chris



You can view the source here: https://github.com/feist/pcs/

Or download the latest tarball:
https://github.com/downloads/feist/pcs/pcs-0.9.26.tar.gz

There is also a Fedora 18 package that will be included with the next release.
You should be able to find that package in the following locations...

RPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.noarch.rpm

SRPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.src.rpm

In the near future we are planning on having builds for SUSE & Ubuntu/Debian.

We're also actively working on a GUI/Daemon that will allow control of
your entire cluster from one node and/or a web browser.

Please feel free to email me (cfe...@redhat.com) or open issues on the
pcs project at github (https://github.com/feist/pcs/issues) if you
have any questions or problems.

Thanks!
Chris



Re: [Pacemaker] Announce: pcs-0.9.26

2012-10-16 Thread Chris Feist

On 10/08/12 19:27, Chris Feist wrote:

We've been making improvements to the pcs (pacemaker/corosync configuration
system) command line tool over the past few months.

Currently you can set up a basic cluster (including configuring corosync 2.0 
udpu).

David Vossel has also created a version of the "Clusters from Scratch" document
that illustrates setting up a cluster using pcs.  This should be showing up
shortly.


Just an update: I've updated pcs (to 0.9.27) and included the pcsd daemon 
with the Fedora packages.  You can grab the updated packages here:


http://people.redhat.com/cfeist/pcs/

And you should be able to use the new Clusters from Scratch document, optimized 
for the pcs CLI, here: http://www.clusterlabs.org/doc/


Just a couple of things to note (this should shortly be updated in the notes).

To run pcs on Fedora 17/18 you'll need to turn off SELinux and disable the 
firewall (or at least allow traffic on port 2224).


To disable SELinux, set 'SELINUX=permissive' in /etc/selinux/config and reboot.
To disable the firewall, run 'systemctl stop iptables.service' (to permanently 
disable it, run 'systemctl disable iptables.service').
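The setup steps above can be sketched as a shell session (the iptables rule for opening port 2224 is an assumption, not from the original notes; adapt it to your firewall tooling):

```shell
# Put SELinux in permissive mode immediately (or edit
# /etc/selinux/config and reboot for a persistent change)
setenforce 0

# Either stop the firewall entirely ...
systemctl stop iptables.service
systemctl disable iptables.service

# ... or just allow pcsd traffic on TCP port 2224
iptables -I INPUT -p tcp --dport 2224 -j ACCEPT
```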


The pcs_passwd command has been removed.  In its place you can authenticate 
with the hacluster user.  Just set the hacluster user password 
(passwd hacluster) and then use that user and password to authenticate with pcs.


If you have any questions or issues, don't hesitate to contact me; we're 
still working out the bugs in the new pcsd daemon and we appreciate all the 
feedback we can get.


Thanks,
Chris



You can view the source here: https://github.com/feist/pcs/

Or download the latest tarball:
https://github.com/downloads/feist/pcs/pcs-0.9.26.tar.gz

There is also a Fedora 18 package that will be included with the next release.
You should be able to find that package in the following locations...

RPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.noarch.rpm

SRPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.src.rpm

In the near future we are planning on having builds for SUSE & Ubuntu/Debian.

We're also actively working on a GUI/Daemon that will allow control of your
entire cluster from one node and/or a web browser.

Please feel free to email me (cfe...@redhat.com) or open issues on the pcs
project at github (https://github.com/feist/pcs/issues) if you have any
questions or problems.

Thanks!
Chris

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org





[Pacemaker] Announce: pcs-0.9.26

2012-10-08 Thread Chris Feist
We've been making improvements to the pcs (pacemaker/corosync configuration 
system) command line tool over the past few months.


Currently you can set up a basic cluster (including configuring corosync 2.0 
udpu).

David Vossel has also created a version of the "Clusters from Scratch" document 
that illustrates setting up a cluster using pcs.  This should be showing up shortly.


You can view the source here: https://github.com/feist/pcs/

Or download the latest tarball:
https://github.com/downloads/feist/pcs/pcs-0.9.26.tar.gz

There is also a Fedora 18 package that will be included with the next release. 
You should be able to find that package in the following locations...


RPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.noarch.rpm

SRPM:
http://people.redhat.com/cfeist/pcs/pcs-0.9.26-1.fc18.src.rpm

In the near future we are planning on having builds for SUSE & Ubuntu/Debian.

We're also actively working on a GUI/Daemon that will allow control of your 
entire cluster from one node and/or a web browser.


Please feel free to email me (cfe...@redhat.com) or open issues on the pcs 
project at github (https://github.com/feist/pcs/issues) if you have any 
questions or problems.


Thanks!
Chris

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Announce: pcs / pcs-gui (Pacemaker/Corosync Configuration System)

2012-06-04 Thread Chris Feist

On 06/01/12 09:56, Florian Haas wrote:

On Fri, Jun 1, 2012 at 1:40 AM, Chris Feist  wrote:

I'd like to announce the existence of the "Pacemaker/Corosync configuration
system", PCS.


Be warned, I will surely catch flak for what I'm about to say. Nothing
of this should be understood in a personal way; my critique is about
the work not the artist.


Absolutely, I'm not taking any of the criticism personally and I do appreciate 
the feedback.



The emphasis in PCS differs somewhat from the existing shell:


Before you get into where it differs in emphasis, can you explain why
we need another shell?


We needed a unified CLI that would configure corosync's as well as pacemaker's 
settings (and eventually the dlm for GFS2).  We also needed that CLI to tightly 
integrate with our GUI efforts and to be able to connect to the GUI remotely.  
As a by-product, my goal was to make it extremely easy to get up and running, 
while still allowing power users to configure all the options.



PCS will continue the tradition of having a regression test suite and a
discoverable 'ip'-like hierarchical "menu" structure; however, unlike the
shell, we may end up not adding interactivity.


Strangely enough, if I were to name one feature as the most useful in
the existing shell, it's its interactivity.

How do you envision people configuring, say, an IPaddr2 resource when
they don't remember the parameter names, or whether a specific
parameter is optional or required? Or even the resource agent name?


pcs still has a way to go, but there will be an option to print out the 
available resource (and stonith) agents as well as their required/optional 
arguments.  I'm not currently planning on having a shell mode for pcs (which is 
what I meant by not adding interactivity).
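For reference, agent discovery in later pcs releases ended up looking roughly like this (subcommand names are assumptions based on those releases, not on the version discussed here):

```shell
# List the resource agents available in a provider
pcs resource list ocf:heartbeat

# Print a given agent's description and its required/optional parameters
pcs resource describe ocf:heartbeat:IPaddr2
```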



Both projects are far from complete, but so far PCS can:
- Create corosync/pacemaker clusters from scratch
- Add simple resources and add constraints


If I were a new user, I'd probably be unable to create even a simple
resource with this, for the reason given above. But I will concede
that at its current state it's probably unfair to expect that new
users are able to use this. (The existing shell is actually usable for
newcomers, even though it's not perfect. Why to we need a new shell
again?)


- Create/Remove resource groups


Why is it "resource create", but "resource group add"?


I'm still working out the exact syntax, so there will be a few inconsistencies, 
but I may change resource 'create' to 'add'.  My thinking was that you create 
resources, but you add them to a resource group.



- Set most pacemaker configuration options


How do you enumerate which ones are available?


There will be an option to list available options, but I'm not there yet.


- Start/Stop pacemaker/corosync
- Get basic cluster status



I'm currently working on getting PCS fully functional with Fedora 17 (and it
should work with other distributions based on corosync 2.0, pacemaker 1.1
and systemd).

I'm hoping to have a fairly complete version of PCS for the Fedora 17
release (or very shortly thereafter) and a functioning version of pcs-gui
(which includes the ability to remotely start/stop nodes and set corosync
config) by the Fedora 18 release.

The code for both projects is currently hosted on github
(https://github.com/feist/pcs & https://github.com/feist/pcs-gui)

You can view a sample pcs session to get a preliminary view of how pcs will
work  - https://gist.github.com/2697640


Any reason why the gist doesn't use "pcs cluster sync", which as per
"pcs cluster --help" would sync the Corosync config across nodes?


This code is still in development as it depends on code in the GUI to transfer 
configuration files around the cluster.  As soon as I have a somewhat stable 
version of this to test (I'm shooting for 2-3 weeks) I'll notify the list, so 
you can take a look at it.



Comments and contributions are welcome.


I'm sorry, and I really don't mean this personally, but I just don't
get the point. I fail to see significant advantages that would justify
the duplication of effort versus the existing shell, not only in terms
of development, but also documentation, training, educating users,
etc. We've confused users aplenty in the past. Now we have a shell
that while not perfect, works well, has a reasonable degree of
interactivity and self-documentation, and is suitable for general use
(at least in my, never very humble, opinion). I see no reason for it
to be replaced.

Assuming that this effort means you're planning to kick the existing
crm shell out of Fedora, I think that's a really really bad idea.


I believe the CRM shell is still in Fedora 17, but I'm not sure what its status 
is for Fedora 18 and beyond.  I'm pretty sure that if it isn't inc

[Pacemaker] Announce: pcs / pcs-gui (Pacemaker/Corosync Configuration System)

2012-05-31 Thread Chris Feist
I'd like to announce the existence of the "Pacemaker/Corosync configuration 
system", PCS.


The emphasis in PCS differs somewhat from the existing shell:
- Configure the complete cluster (corosync plus pacemaker) from scratch
- Emphasis is on modification not display
- Avoid XML round-tripping
- Syntax won't be restricted to concepts from the underlying XML (which
  should make it easier to configure simple clusters)
- Provide the ability to remotely configure corosync, start/stop cluster and
  get status.

In addition, it will do much of the back-end work for a new GUI being developed, 
also by Red Hat (pcs-gui).


PCS will continue the tradition of having a regression test suite and a 
discoverable 'ip'-like hierarchical "menu" structure; however, unlike the shell, 
we may end up not adding interactivity.


Both projects are far from complete, but so far PCS can:
- Create corosync/pacemaker clusters from scratch
- Add simple resources and add constraints
- Create/Remove resource groups
- Set most pacemaker configuration options
- Start/Stop pacemaker/corosync
- Get basic cluster status
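A sample session exercising the capabilities above might look like the following (the exact `cluster setup` syntax varied between early pcs releases, and all names and addresses are placeholders; see the gist linked below for the real preliminary session):

```shell
# Create and start a two-node corosync/pacemaker cluster
pcs cluster setup mycluster node1 node2
pcs cluster start

# Add a simple IP resource, put it in a group, and add a constraint
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
pcs resource group add mygroup ClusterIP
pcs constraint location ClusterIP prefers node1

# Get basic cluster status
pcs status
```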

I'm currently working on getting PCS fully functional with Fedora 17 (and it 
should work with other distributions based on corosync 2.0, pacemaker 1.1 and 
systemd).


I'm hoping to have a fairly complete version of PCS for the Fedora 17 release 
(or very shortly thereafter) and a functioning version of pcs-gui (which 
includes the ability to remotely start/stop nodes and set corosync config) by 
the Fedora 18 release.


The code for both projects is currently hosted on github 
(https://github.com/feist/pcs & https://github.com/feist/pcs-gui)


You can view a sample pcs session to get a preliminary view of how pcs will work 
 - https://gist.github.com/2697640


Comments and contributions are welcome.

Thanks!
Chris

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org