Re: [ClusterLabs] Pacemaker 1.1.15 - Release Candidate 2

2016-05-16 Thread Jan Pokorný
On 16/05/16 10:48 -0500, Ken Gaillot wrote:
> The second release candidate for Pacemaker version 1.1.15 is now
> available at:
> 
> https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.15-rc2
> 
> The most interesting changes since 1.1.15-rc1 are:
> 
> * With the new "alerts" feature, the "tstamp_format" attribute has been
> renamed to "timestamp-format" and properly defaults to "%H:%M:%S.%06N".
> 
> * A regression introduced in 1.1.15-rc1 has been fixed. After a cluster
> partition, node attribute values might not be properly re-synchronized
> among nodes.
> 
> * The SysInfo resource now automatically sets the #health_disk node
> attribute back to "green" if free disk space recovers after becoming too
> low.
> 
> * Other minor bug fixes.
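To try the new alerts feature mentioned above, here is a rough sketch
of the CIB syntax; the script path, recipient and ids below are made
up, only the timestamp-format meta attribute comes from the release
notes. The snippet would go under the <configuration> section of the
CIB (e.g. via cibadmin):

  <alerts>
    <alert id="alert-sample" path="/usr/local/bin/my_alert_handler.sh">
      <meta_attributes id="alert-sample-meta">
        <nvpair id="alert-sample-ts" name="timestamp-format"
                value="%H:%M:%S.%06N"/>
      </meta_attributes>
      <recipient id="alert-sample-recipient"
                 value="/var/log/cluster-alerts.log"/>
    </alert>
  </alerts>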

Once again, to check this release candidate out using the Fedora/EPEL
builds, there's a COPR link[*] for your convenience (you can also stick
with the repo file downloaded from the Overview page and install the
packages in the usual way):

https://copr.fedorainfracloud.org/coprs/jpokorny/pacemaker/
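A minimal sketch of the plugin-based route on Fedora, assuming the
dnf copr plugin (dnf-plugins-core) is installed; on CentOS/EPEL, drop
the repo file from the Overview page into /etc/yum.repos.d/ and use
yum instead:

  sudo dnf copr enable jpokorny/pacemaker
  sudo dnf install pacemaker corosync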

Fedora rawhide will be updated shortly.

> Everyone is encouraged to download, compile and test the new release.
> Your feedback is important and appreciated. I am aiming for one or two
> more release candidates, with the final release in mid- to late June.
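For anyone preferring to build from the tag rather than from packages,
a rough sketch of the standard autotools flow (the configure flags are
just an example):

  git clone https://github.com/ClusterLabs/pacemaker.git
  cd pacemaker
  git checkout Pacemaker-1.1.15-rc2
  ./autogen.sh
  ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
  make
  sudo make install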

-- 
Jan (Poki)


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Two related Cluster

2016-05-16 Thread ‪H Yavari‬ ‪
Thank you for reply.
I mean that when, in cluster X, node A is online and node B is offline,
the nodes in cluster Y will have the same status.

Regards,

  From: Kristoffer Grönlund

‪H Yavari‬ ‪ writes:

> Hi,
> I have a question: is it possible to make a relation between two
> clusters? I mean, when a node change occurs in one cluster, it happens
> in the other cluster too.
>

I'm not sure what you mean by a node changing, but there is booth [1]
which enables the transfer of resource ownership between multiple
clusters.

[1]: https://github.com/ClusterLabs/booth

Cheers,
Kristoffer

> Is it achievable?
> Thanks for the help.
>
> Regards,
> H.Yavari
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

-- 
// Kristoffer Grönlund
// kgronl...@suse.com

  ___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Two related Cluster

2016-05-16 Thread Kristoffer Grönlund
‪H Yavari‬ ‪  writes:

> Hi,
> I have a question: is it possible to make a relation between two
> clusters? I mean, when a node change occurs in one cluster, it happens
> in the other cluster too.
>

I'm not sure what you mean by a node changing, but there is booth [1]
which enables the transfer of resource ownership between multiple
clusters.

[1]: https://github.com/ClusterLabs/booth
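As a very rough illustration (addresses, port and ticket name are made
up), a booth configuration spanning two clusters plus an arbitrator
looks something like the sketch below; the ticket is then granted on
one site with the booth client and tied to resources via rsc_ticket
constraints:

  # /etc/booth/booth.conf (sketch only)
  transport = UDP
  port = 9929
  arbitrator = 192.168.0.100
  site = 192.168.1.10
  site = 192.168.2.10
  ticket = "ticket-A"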

Cheers,
Kristoffer

> Is it achievable?
> Thanks for the help.
>
> Regards,
> H.Yavari
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

-- 
// Kristoffer Grönlund
// kgronl...@suse.com

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Two related Cluster

2016-05-16 Thread ‪H Yavari‬ ‪
Hi,
I have a question: is it possible to make a relation between two
clusters? I mean, when a node change occurs in one cluster, it happens
in the other cluster too.

Is it achievable?
Thanks for the help.

Regards,
H.Yavari

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Pacemaker with Zookeeper??

2016-05-16 Thread Bogdan Dobrelya
On 05/16/2016 09:23 AM, Jan Friesse wrote:
>> Hi,
>>
>> I have an idea: use Pacemaker with Zookeeper (instead of Corosync). Is
>> it possible?
>> Is there any examination about that?

Indeed, it would be *great* to have a Pacemaker-based control plane on
top of other "pluggable" distributed KVS & messaging systems, for
example etcd as well :)
I'm looking forward to joining any dev efforts around that, although
I'm not a Java or Go developer.

> 
> From my point of view (and yes, I'm biased), the biggest problem with
> ZooKeeper is its need for quorum
> (https://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_designing).
> A direct consequence is the inability to tolerate a single node
> failure in a 2-node cluster -> no 2-node clusters (and such
> deployments are extremely popular). Corosync, by contrast, can operate
> completely without quorum.
> 
> Regards,
>   Honza
> 
>>
>> Thanks for your help!
>> Hai Nguyen
>>
>>
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Pacemaker with Zookeeper??

2016-05-16 Thread Jan Friesse

Hi,

I have an idea: use Pacemaker with ZooKeeper (instead of Corosync). Is
it possible?
Has anyone investigated this?


From my point of view (and yes, I'm biased), the biggest problem with
ZooKeeper is its need for quorum
(https://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_designing).
A direct consequence is the inability to tolerate a single node failure
in a 2-node cluster -> no 2-node clusters (and such deployments are
extremely popular). Corosync, by contrast, can operate completely
without quorum.
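For reference, corosync 2.x handles the 2-node case with votequorum's
two_node option (which implies wait_for_all); a minimal sketch of the
relevant corosync.conf section:

  quorum {
      provider: corosync_votequorum
      two_node: 1
  }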


Regards,
  Honza



Thanks for your help!
Hai Nguyen





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Help with service banning on a node

2016-05-16 Thread Leon Botes

Hi List.

I have the following configuration:

pcs -f ha_config property set symmetric-cluster="true"
pcs -f ha_config property set no-quorum-policy="stop"
pcs -f ha_config property set stonith-enabled="false"
pcs -f ha_config resource defaults resource-stickiness="200"

pcs -f ha_config resource create drbd ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
pcs -f ha_config resource master drbd master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs -f ha_config resource create vip-blue ocf:heartbeat:IPaddr2 ip=192.168.101.100 cidr_netmask=32 nic=blue op monitor interval=20s
pcs -f ha_config resource create vip-green ocf:heartbeat:IPaddr2 ip=192.168.102.100 cidr_netmask=32 nic=blue op monitor interval=20s


pcs -f ha_config constraint colocation add vip-blue drbd-master INFINITY with-rsc-role=Master
pcs -f ha_config constraint colocation add vip-green drbd-master INFINITY with-rsc-role=Master


pcs -f ha_config constraint location drbd-master prefers stor-san1=50
pcs -f ha_config constraint location drbd-master avoids stor-node1=INFINITY
pcs -f ha_config constraint location vip-blue prefers stor-san1=50
pcs -f ha_config constraint location vip-blue avoids stor-node1=INFINITY
pcs -f ha_config constraint location vip-green prefers stor-san1=50
pcs -f ha_config constraint location vip-green avoids stor-node1=INFINITY

pcs -f ha_config constraint order promote drbd-master then start vip-blue
pcs -f ha_config constraint order start vip-blue then start vip-green

Which results in:

[root@san1 ~]# pcs status
Cluster name: ha_cluster
Last updated: Mon May 16 08:21:28 2016          Last change: Mon May 16 08:21:25 2016 by root via crm_resource on iscsiA-san1
Stack: corosync
Current DC: iscsiA-node1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum

3 nodes and 4 resources configured

Online: [ iscsiA-node1 iscsiA-san1 iscsiA-san2 ]

Full list of resources:

 Master/Slave Set: drbd-master [drbd]
     drbd       (ocf::linbit:drbd):     FAILED iscsiA-node1 (unmanaged)
     Masters: [ iscsiA-san1 ]
     Stopped: [ iscsiA-san2 ]
 vip-blue       (ocf::heartbeat:IPaddr2):       Started iscsiA-san1
 vip-green      (ocf::heartbeat:IPaddr2):       Started iscsiA-san1

Failed Actions:
* drbd_stop_0 on iscsiA-node1 'not installed' (5): call=18, status=complete, exitreason='none',
    last-rc-change='Mon May 16 08:20:16 2016', queued=0ms, exec=45ms


PCSD Status:
  iscsiA-san1: Online
  iscsiA-san2: Online
  iscsiA-node1: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled


Is there any way in the configuration to have the drbd resource
completely ignored on iscsiA-node1, to avoid this:

 drbd   (ocf::linbit:drbd):     FAILED iscsiA-node1 (unmanaged)
and
Failed Actions:
* drbd_stop_0 on iscsiA-node1 'not installed' (5): call=18, status=complete, exitreason='none',
    last-rc-change='Mon May 16 08:20:16 2016', queued=0ms, exec=45ms

I tried the ban statements, but that seems to have the same result.
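Would a location constraint with resource discovery disabled be the
right direction? A rough sketch of what I mean (the constraint id is
made up, and any existing 'avoids' constraint for the same
resource/node would presumably need removing first; 'pcs constraint
--full' shows the ids):

pcs -f ha_config constraint location add drbd-not-on-node1 drbd-master iscsiA-node1 -INFINITY resource-discovery=never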

Also, is there a better way to write the configuration so that drbd
starts first, then the VIPs, with everything colocated together and
running only on san1 or san2? I tried grouping, but that seems to fail
with master/slave resources.
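What I was imagining is grouping just the two VIPs (not the
master/slave resource) and then colocating and ordering the group
against the master role, something like this untested sketch:

pcs -f ha_config resource group add vip-group vip-blue vip-green
pcs -f ha_config constraint colocation add vip-group with master drbd-master INFINITY
pcs -f ha_config constraint order promote drbd-master then start vip-group
pcs -f ha_config constraint location vip-group prefers stor-san1=50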


--
Regards
Leon

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org