Re: [ClusterLabs] IPaddr2 cluster-ip restarts on all nodes after failover

2016-01-06 Thread Joakim Hansson
Thank you!
That did the trick.

/Jocke


Re: [ClusterLabs] [OCF] Pacemaker reports a multi-state clone resource instance as running while it is not in fact

2016-01-06 Thread Keisuke MORI
Hi,

2016-01-06 22:57 GMT+09:00 Jan Pokorný:
> Hello,
>
> On 04/01/16 17:33 +0100, Bogdan Dobrelya wrote:

>> Note that the import (sourcing) action itself seems to cause the
>> issue, not the ocf_run or ocf_log code.
>>
>> [0] https://github.com/ClusterLabs/resource-agents/issues/734
>
> Have to wonder if there is any correlation with the issue discussed
> recently:
>
> http://oss.clusterlabs.org/pipermail/users/2015-November/001806.html
>
> Note that the ocf:pacemaker:ClusterMon resource also sources the
> ocf-shellfuncs set of helper functions.

I think that it's unlikely to be relevant.

In this case, judging from the test results on GitHub issue 734,
/bin/dash is apparently the cause of the fork bomb, due to a bug in its
handling of the PS4 shell variable.
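
For illustration, a minimal sketch of the failure mode as I understand
it (a hypothetical reproduction, not the actual ocf-shellfuncs code):

  #!/bin/dash
  # PS4 is re-expanded before every traced command once 'set -x' is
  # active; the single quotes defer the command substitution to trace
  # time instead of evaluating it at assignment.
  PS4='+ $(date +%T): '
  set -x
  # On an affected dash, the subshell forked to expand $(date +%T)
  # inherits xtrace, so printing its own trace expands PS4 again,
  # forking yet another subshell, and so on: a fork bomb. On a fixed
  # shell this merely prints a timestamped trace prefix.
  echo "hello"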

In the ClusterMon case you mentioned, crm_mon is the process forking
repeatedly, not the ClusterMon shell script, and it is not invoked via
/bin/dash either.

Regards,
-- 
Keisuke MORI



Re: [ClusterLabs] [Q] crmsh release plan for pacemaker-1.1.14?

2016-01-06 Thread Kristoffer Grönlund
Keisuke MORI writes:

> Hi,
>
> I would like to know whether there is a plan for a new crmsh release
> coordinated with Pacemaker-1.1.14.
>
> The latest release, crmsh-2.1.4, does not work well with
> Pacemaker-1.1.14-rc4 because of a schema mismatch:
>
> 
> # crm configure load update sample.crm
> ERROR: CIB not supported: validator 'pacemaker-2.4', release '3.0.10'
> ERROR: You may try the upgrade command
> ERROR: configure: Missing requirements
> #
> 
>
> Regards,
> -- 
> Keisuke MORI

Hello,

Yes, I am planning a new release of crmsh very soon. The development
version of crmsh should work well with 1.1.14, so I would recommend
using that for now.
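
For reference, one way to try the development version (a rough sketch,
assuming the usual autotools build of the crmsh source tree; adjust for
your distribution's dependencies):

  # Build crmsh from the current development branch
  # (assumes git, autotools and the Python dependencies are present).
  git clone https://github.com/ClusterLabs/crmsh.git
  cd crmsh
  ./autogen.sh && ./configure && make
  make install   # or run the built crm from the tree for a quick test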

There are a few issues that I would like to investigate before the
release, but regardless I will release a new version soon.

Cheers,
Kristoffer


-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] IPaddr2 cluster-ip restarts on all nodes after failover

2016-01-06 Thread Ken Gaillot
On 01/06/2016 02:40 PM, Joakim Hansson wrote:
> Hi list!
> I'm running a 3-node VM cluster in which all the nodes run Tomcat (Solr)
> from the same disk using GFS2.
> On top of this I use an IPaddr2 clone for the cluster IP and load
> balancing between all the nodes.
> 
> Everything works fine, except when I perform a failover on one node.
> When node01 shuts down, node02 takes over its IPaddr2 clone instance.
> So far so good.
> The thing is, when I fire up node01 again, the IPaddr2 clones on all
> nodes restart and thereby disrupt Tomcat.

You want interleave=true on your clones.

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_clone_options
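
Since the configuration below looks like pcs output, something like
the following should set it (a sketch only; verify the exact syntax
against the pcs version you have installed):

  # Set interleave=true on each clone (pcs 0.9.x syntax assumed).
  pcs resource meta dlm-clone interleave=true
  pcs resource meta GFS2-clone interleave=true
  pcs resource meta Tomcat-clone interleave=true
  pcs resource meta ClusterIP-clone interleave=true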

> Here is my configuration:
> 
> Cluster Name: GFS2-cluster
> Corosync Nodes:
>  node01 node02 node03
> Pacemaker Nodes:
>  node01 node02 node03
> 
> Resources:
>  Clone: dlm-clone
>   Meta Attrs: clone-max=3 clone-node-max=1
>   Resource: dlm (class=ocf provider=pacemaker type=controld)
>Operations: start interval=0s timeout=90 (dlm-start-timeout-90)
>stop interval=0s timeout=100 (dlm-stop-timeout-100)
>monitor interval=60s (dlm-monitor-interval-60s)
>  Clone: GFS2-clone
>   Meta Attrs: clone-max=3 clone-node-max=1 globally-unique=true
>   Resource: GFS2 (class=ocf provider=heartbeat type=Filesystem)
>Attributes: device=/dev/sdb directory=/home/solr fstype=gfs2
>Operations: start interval=0s timeout=60 (GFS2-start-timeout-60)
>stop interval=0s timeout=60 (GFS2-stop-timeout-60)
>monitor interval=20 timeout=40 (GFS2-monitor-interval-20)
>  Clone: ClusterIP-clone
>   Meta Attrs: clone-max=3 clone-node-max=3 globally-unique=true
>   Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
>Attributes: ip=192.168.100.200 cidr_netmask=32 clusterip_hash=sourceip
>Meta Attrs: resource-stickiness=0
>Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s)
>stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s)
>monitor interval=30s (ClusterIP-monitor-interval-30s)
>  Clone: Tomcat-clone
>   Meta Attrs: clone-max=3 clone-node-max=1
>   Resource: Tomcat (class=systemd type=tomcat)
>Operations: monitor interval=60s (Tomcat-monitor-interval-60s)
> 
> Stonith Devices:
>  Resource: fence-vmware (class=stonith type=fence_vmware_soap)
>   Attributes:
> pcmk_host_map=node01:4212a559-8e66-2882-e7fe-96e2bd86bfdb;node02:4212150e-2d2d-dc3e-ee16-2eb280db2ec7;node03:42126708-bd46-adc5-75cb-678cdbcc06be
> pcmk_host_check=static-list login=USERNAME passwd=PASSWORD action=reboot
> ssl_insecure=true ipaddr=IP-ADDRESS
>   Operations: monitor interval=60s (fence-vmware-monitor-interval-60s)
> Fencing Levels:
> 
> Location Constraints:
> Ordering Constraints:
>   start dlm-clone then start GFS2-clone (kind:Mandatory)
> (id:order-dlm-clone-GFS2-clone-mandatory)
>   start GFS2-clone then start Tomcat-clone (kind:Mandatory)
> (id:order-GFS2-clone-Tomcat-clone-mandatory)
>   start Tomcat-clone then start ClusterIP-clone (kind:Mandatory)
> (id:order-Tomcat-clone-ClusterIP-clone-mandatory)
>   stop ClusterIP-clone then stop Tomcat-clone (kind:Mandatory)
> (id:order-ClusterIP-clone-Tomcat-clone-mandatory)
>   stop Tomcat-clone then stop GFS2-clone (kind:Mandatory)
> (id:order-Tomcat-clone-GFS2-clone-mandatory)
> Colocation Constraints:
>   GFS2-clone with dlm-clone (score:INFINITY)
> (id:colocation-GFS2-clone-dlm-clone-INFINITY)
>   GFS2-clone with Tomcat-clone (score:INFINITY)
> (id:colocation-GFS2-clone-Tomcat-clone-INFINITY)
> 
> Resources Defaults:
>  resource-stickiness: 100
> Operations Defaults:
>  No defaults set
> 
> Cluster Properties:
>  cluster-infrastructure: corosync
>  cluster-name: GFS2-cluster
>  dc-version: 1.1.13-10.el7-44eb2dd
>  enabled: false
>  have-watchdog: false
>  last-lrm-refresh: 1450177886
>  stonith-enabled: true
> 
> 
> Any help is greatly appreciated.
> 
> Thanks in advance
> /Jocke

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org