Re: [ClusterLabs] MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi Andrei,

On Tue, Aug 18, 2015 at 2:24 PM, Andrei Borzenkov arvidj...@gmail.com wrote:
 I made master-master replication on Pacemaker.
 But it causes error 0_monitor_20000.

 It's not an error, it is just the operation name.

Sorry, I'm confused.
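For reference, the name follows Pacemaker's `<resource-instance>_<action>_<interval-in-ms>` operation-key convention, so `0_monitor_20000` is just the 20-second monitor of clone instance 0. A minimal sketch of how such a key decomposes (`parse_op_key` is a hypothetical helper for illustration, not a Pacemaker API):

```python
# Pacemaker operation keys look like "mysql:0_monitor_20000":
# resource instance, action, and interval in milliseconds,
# joined by underscores. rsplit keeps underscores inside the
# resource name intact.
def parse_op_key(key):
    resource, action, interval_ms = key.rsplit("_", 2)
    return resource, action, int(interval_ms)

print(parse_op_key("mysql:0_monitor_20000"))
# -> ('mysql:0', 'monitor', 20000)
```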

 If one of them boots Heartbeat and another doesn't, the error doesn't occur.

 What should I check?

 Probably you have to allow more than one master (default is just one); see 
 description of master-max resource option.

I used the following settings:

```
centillion.db01# crm configure
crm(live)configure# primitive vip_192.168.10.200 ocf:heartbeat:IPaddr2
params ip=192.168.10.200 cidr_netmask=24 nic=eth0
crm(live)configure# property no-quorum-policy=ignore stonith-enabled=false
crm(live)configure# node centillion.db01
crm(live)configure# node centillion.db02
crm(live)configure# commit
crm(live)configure# quit
centillion.db01# crm
crm(live)# cib new mysql_repl
crm(mysql_repl)# configure primitive mysql ocf:heartbeat:mysql params
binary=/usr/local/mysql/bin/mysqld_safe datadir=/data/mysql
pid=/data/mysql/mysql.pid socket=/tmp/mysql.sock
log=/data/mysql/centillion.db.err replication_user=repl
replication_passwd=slavepass op start interval=0 timeout=120s op stop
interval=0 timeout=120s op monitor interval=20s timeout=30s op monitor
interval=10s role=Master timeout=30s op monitor interval=30s
role=Slave timeout=30s op promote interval=0 timeout=120s op demote
interval=0 timeout=120s op notify interval=0 timeout=90s
crm(mysql_repl)# cib commit mysql_repl
crm(mysql_repl)# quit
centillion.db01# crm configure ms mysql-clone mysql meta master-max=2
master-node-max=1 clone-max=2 clone-node-max=1 notify=true
centillion.db01# crm configure colocation vip_on_mysql inf:
vip_192.168.10.200 mysql-clone:Master
centillion.db01# crm configure order vip_after_mysql inf:
mysql-clone:promote vip_192.168.10.200:start
```

Then, I got the following result:

```

Last updated: Tue Aug 18 14:42:37 2015
Stack: Heartbeat
Current DC: centillion.db02 (0302e3d0-df06-4847-b0f9-9ebddfb6aec7) -
partition with quorum
Version: 1.0.13-a83fae5
2 Nodes configured, unknown expected votes
2 Resources configured.


Online: [ centillion.db01 centillion.db02 ]

vip_192.168.10.200  (ocf::heartbeat:IPaddr2):   Started centillion.db01
 Master/Slave Set: mysql-clone
 Masters: [ centillion.db01 centillion.db02 ]

Failed actions:
mysql:0_demote_0 (node=centillion.db01, call=11, rc=7,
status=complete): not running
```

It shows no error now. But what I mean by master-master replication is:

A. If both nodes are alive, one of them becomes master and the
other becomes slave.
B. If only one node is alive, that node becomes master.
C. If a node joins, it becomes slave.

How can I shape the nodes like that?
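What is described here is a single-master (master/slave) setup rather than master-master; a sketch of the change, reusing the resource names from the configuration above — the key difference is master-max=1, so only one of the two clone instances is ever promoted, and the survivor is promoted when its peer dies:

```
centillion.db01# crm configure ms mysql-clone mysql meta master-max=1 \
    master-node-max=1 clone-max=2 clone-node-max=1 notify=true
```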

Thanks for your advice.
-- 
Kiwamu Okabe at METASEPI DESIGN

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi,

On Tue, Aug 18, 2015 at 5:07 PM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 ```
 2015-08-18 16:50:38 7081 [ERROR] Slave I/O: Fatal error: The slave I/O
 thread stops because master and slave have equal MySQL server ids;
 these ids must be different for replication to work (or the
 --replicate-same-server-id option must be used on slave but this does
 not always make sense; please check the manual before using it).
 Error_code: 1593
 ```

I fixed it, and got the following status:

```
centillion.db01# mysql -e 'show slave status\G'
...
 Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
centillion.db01# mysql -e 'stop slave;'
centillion.db02# mysql -e 'start slave;'
centillion.db02# mysql -e 'show slave status\G'
...
 Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
```

But the error shown by crm has not changed...

```

Last updated: Tue Aug 18 17:41:48 2015
Stack: Heartbeat
Current DC: centillion.db02 (0302e3d0-df06-4847-b0f9-9ebddfb6aec7) -
partition with quorum
Version: 1.0.13-a83fae5
2 Nodes configured, unknown expected votes
2 Resources configured.


Online: [ centillion.db01 centillion.db02 ]

vip_192.168.10.200  (ocf::heartbeat:IPaddr2):   Started centillion.db01
 Master/Slave Set: mysql-clone
 mysql:1(ocf::heartbeat:mysql): Master centillion.db02 FAILED
 Masters: [ centillion.db01 ]

Failed actions:
mysql:1_monitor_20000 (node=centillion.db02, call=26, rc=8,
status=complete): master
mysql:1_monitor_30000 (node=centillion.db02, call=27, rc=8,
status=complete): master
```

Thanks,
-- 
Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Andrei Borzenkov
On Tue, Aug 18, 2015 at 9:15 AM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 Hi Andrei,

 On Tue, Aug 18, 2015 at 2:24 PM, Andrei Borzenkov arvidj...@gmail.com wrote:
 I made master-master replication on Pacemaker.
 But it causes error 0_monitor_20000.

 It's not an error, it is just the operation name.

 Sorry, I'm confused.

 If one of them boots Heartbeat and another doesn't, the error doesn't occur.

 What should I check?

 Probably you have to allow more than one master (default is just one); see 
 description of master-max resource option.

 I used the following settings:

 ```
 centillion.db01# crm configure
 crm(live)configure# primitive vip_192.168.10.200 ocf:heartbeat:IPaddr2
 params ip=192.168.10.200 cidr_netmask=24 nic=eth0
 crm(live)configure# property no-quorum-policy=ignore stonith-enabled=false
 crm(live)configure# node centillion.db01
 crm(live)configure# node centillion.db02
 crm(live)configure# commit
 crm(live)configure# quit
 centillion.db01# crm
 crm(live)# cib new mysql_repl
 crm(mysql_repl)# configure primitive mysql ocf:heartbeat:mysql params
 binary=/usr/local/mysql/bin/mysqld_safe datadir=/data/mysql
 pid=/data/mysql/mysql.pid socket=/tmp/mysql.sock
 log=/data/mysql/centillion.db.err replication_user=repl
 replication_passwd=slavepass op start interval=0 timeout=120s op stop
 interval=0 timeout=120s op monitor interval=20s timeout=30s op monitor
 interval=10s role=Master timeout=30s op monitor interval=30s
 role=Slave timeout=30s op promote interval=0 timeout=120s op demote
 interval=0 timeout=120s op notify interval=0 timeout=90s
 crm(mysql_repl)# cib commit mysql_repl
 crm(mysql_repl)# quit
 centillion.db01# crm configure ms mysql-clone mysql meta master-max=2
 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
 centillion.db01# crm configure colocation vip_on_mysql inf:
 vip_192.168.10.200 mysql-clone:Master
 centillion.db01# crm configure order vip_after_mysql inf:
 mysql-clone:promote vip_192.168.10.200:start
 ```

 Then, I got the following result:

 ```
 
 Last updated: Tue Aug 18 14:42:37 2015
 Stack: Heartbeat
 Current DC: centillion.db02 (0302e3d0-df06-4847-b0f9-9ebddfb6aec7) -
 partition with quorum
 Version: 1.0.13-a83fae5
 2 Nodes configured, unknown expected votes
 2 Resources configured.
 

 Online: [ centillion.db01 centillion.db02 ]

 vip_192.168.10.200  (ocf::heartbeat:IPaddr2):   Started 
 centillion.db01
  Master/Slave Set: mysql-clone
  Masters: [ centillion.db01 centillion.db02 ]

 Failed actions:
 mysql:0_demote_0 (node=centillion.db01, call=11, rc=7,
 status=complete): not running
 ```

 It shows no error now. But what I mean by master-master replication is:

 A. If both nodes are alive, one of them becomes master and the
 other becomes slave.
 B. If only one node is alive, that node becomes master.
 C. If a node joins, it becomes slave.


Oh, sorry, I misunderstood you. What you describe falls under
master-slave in my vocabulary :)

 How can I shape the nodes like that?


Did you set up MySQL replication before bringing it under Pacemaker
control? If not, my guess is that the resource agent sees both instances
as independent and hence as masters.

 Thanks for your advice.
 --
 Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi Andrei,

On Tue, Aug 18, 2015 at 3:58 PM, Andrei Borzenkov arvidj...@gmail.com wrote:
 How can I shape the nodes like that?

 Did you set up MySQL replication before bringing it under Pacemaker
 control? If not, my guess is that the resource agent sees both instances
 as independent and hence as masters.

I stopped Heartbeat on both nodes, and got the following error in
/data/mysql/centillion.db01.err:

```
2015-08-18 16:50:38 7081 [ERROR] Slave I/O: Fatal error: The slave I/O
thread stops because master and slave have equal MySQL server ids;
these ids must be different for replication to work (or the
--replicate-same-server-id option must be used on slave but this does
not always make sense; please check the manual before using it).
Error_code: 1593
```

I'll try to fix it.
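The usual fix is to give each node a distinct server-id; a minimal sketch of the relevant my.cnf fragment on each node (file location and the concrete id values are assumptions, not taken from the thread — only the requirement that they differ is fixed):

```
# /etc/my.cnf on centillion.db01
[mysqld]
server-id = 1

# /etc/my.cnf on centillion.db02
[mysqld]
server-id = 2
```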

Thanks!
-- 
Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi Ulrich,

On Tue, Aug 18, 2015 at 9:34 PM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
 I feel the message
 Aug 18 18:21:35 centillion.db01 lrmd: [15607]: info: RA output: 
 (mysql:0:promote:stderr) Error performing operation: The object/attribute 
 does not exist

 is a problem (why it's only info is another question when it says Error).

Thanks. That's important information for me.

 I wonder: Did you try the OCF tester?

No.
Is it explained at the following URL?

http://www.linux-ha.org/doc/dev-guides/_testing_resource_agents.html

Thanks a lot,
-- 
Kiwamu Okabe at METASEPI DESIGN



[ClusterLabs] Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Ulrich Windl
 Kiwamu Okabe kiw...@debian.or.jp schrieb am 18.08.2015 um 11:48 in 
 Nachricht
CAEvX6dky8=_w6l2nhndfbowux+ol7ktaa44salru7a9-xed...@mail.gmail.com:
 Hi Andrei,
 
 On Tue, Aug 18, 2015 at 6:28 PM, Andrei Borzenkov arvidj...@gmail.com wrote:
 You should attach full logs from both nodes starting with pacemaker start.
 
 The attached logs were taken after cleaning up the log file and restarting Heartbeat.

I feel the message
Aug 18 18:21:35 centillion.db01 lrmd: [15607]: info: RA output: 
(mysql:0:promote:stderr) Error performing operation: The object/attribute does 
not exist

is a problem (why it is logged only as info when it says Error is another question).

I wonder: Did you try the OCF tester?

 Thank's for your attention,
 -- 
 Kiwamu Okabe at METASEPI DESIGN







Re: [ClusterLabs] Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Andrei Borzenkov
On Tue, Aug 18, 2015 at 3:34 PM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
 Kiwamu Okabe kiw...@debian.or.jp schrieb am 18.08.2015 um 11:48 in 
 Nachricht
 CAEvX6dky8=_w6l2nhndfbowux+ol7ktaa44salru7a9-xed...@mail.gmail.com:
 Hi Andrei,

 On Tue, Aug 18, 2015 at 6:28 PM, Andrei Borzenkov arvidj...@gmail.com 
 wrote:
 You should attach full logs from both nodes starting with pacemaker start.

 The attached logs were taken after cleaning up the log file and restarting Heartbeat.

 I feel the message
 Aug 18 18:21:35 centillion.db01 lrmd: [15607]: info: RA output: 
 (mysql:0:promote:stderr) Error performing operation: The object/attribute 
 does not exist

 is a problem (why it's only info is another question when it says Error).


This likely comes from reading a transient attribute before it is set,
and is benign. But for some reason the agent does not recognize that the
instance is running as slave. Hopefully someone who knows the agent
internals can shed light here.

 I wonder: Did you try the OCF tester?

 Thank's for your attention,
 --
 Kiwamu Okabe at METASEPI DESIGN







Re: [ClusterLabs] upgrade from 1.1.9 to 1.1.12 fails to start

2015-08-18 Thread Digimer
On 18/08/15 10:15 AM, Streeter, Michelle N wrote:
 I created a whole new virtual and installed everything with the new
 version and pacemaker wouldn’t start.
 
 I have not yet learned how to use the logs to see what they have to say.
 
 No, I did not upgrade corosync.  I am running the latest which will work
 with rhel6. 
 
 When I tried later versions, they failed and I was told it was because
 we are not running rhel7.
 
 I am getting the feeling this version of Pacemaker does not work on
 rhel6 either.  Do you believe this is the case?
 
 Or is there some configuration that needs to be done between 1.1.9 and
 1.1.12?
 
 Michelle Streeter

You need to upgrade all of the cluster components together, please.
Ideally, upgrade the whole OS...

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Mysql M/S, binlogs - how to delete them safely without failing over first?

2015-08-18 Thread Brett Moser
Hi Attila,

It sounds like on failover the new slave is not having its master
replication file and position updated.

What Resource Agent are you using to control the M/S mysql resource?

Have you investigated the Percona agent? It performs the CHANGE MASTER TO
commands for you, and I have found it to be a good RA for my M/S MySQL
purposes.

https://github.com/percona/percona-pacemaker-agents
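When re-pointing a slave by hand instead, the repair is along these lines (a sketch only; the host, user, log file, and position below are placeholders — the real coordinates must come from SHOW MASTER STATUS on the new master):

```
-- On the new master (here DB2), note the current coordinates:
SHOW MASTER STATUS;

-- On the node that should become the slave (here DB1),
-- repoint replication using those coordinates:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'db2.example.com',
  MASTER_USER = 'repl',
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS = 4;
START SLAVE;
```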

regards,
-Brett Moser


On Tue, Aug 18, 2015 at 2:19 PM, Attila Megyeri amegy...@minerva-soft.com
wrote:

 Hi List,



 We are using M/S replication in a couple of clusters, and there is an
 issue that has been causing headaches for me for quite some time.



 My problem comes from the fact that binlog files grow very quickly on both
 the Master and Slave nodes.



 Let’s assume, that node 1 is the master – it logs all operations to
 binlog.

 Node 2 is the slave, it replicates everything properly. (It is strange,
 however, that node2 must also generate and keep binlog files while it is a
 slave, but let’s assume that this is by design).



 There are ways to configure mysql to keep the binlog files only for some
 time, e.g. 10 days, but I had an issue with this:



 To explain the issue, please consider the following case:

 Let’s say that both node1 and node2 are up-to-date, and we did a failover
 test on day 0.

 DB1 is the master, DB2 is the slave.

 DB1 has master position “A”, DB2 has master position “B”.



 After 20 days, lots of binlog files exist on both servers, I would like to
 get rid of them, as the slave is up-to-date.



 I decide to delete all binlog files older than 1 day by issuing “purge
 binary logs…”.



 I try to fail over so that DB2 becomes the master, but DB1 tries to
 connect to DB2 for replication and wants to replicate starting from a
 position that it “remembers” from the times when DB2 was the master, and
 starts to look for some binlog files that are 20 days old. Now, the issue
 is, that those old binlog files have been deleted since, and replication
 stops with error (cannot find binlogs).



 Am I doing something wrong here, or is something configured badly?

 Or am I correct in assuming that, in order to purge the binlog files from
 both servers, I need to fail over first?





 Thank you!



Re: [ClusterLabs] Antw: Re: Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi,

On Wed, Aug 19, 2015 at 12:08 PM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 I tried to find ocf-tester command but not found.
 And resource-agents package is also not installed!

That was a misunderstanding on my part. The resource-agents package is
already installed, but the ocf-tester command is not found...

centillion.db01# rpm -qa|grep resource
resource-agents-3.9.5-24.el6.x86_64
centillion.db01# ls /usr/sbin/ocf-tester
ls: cannot access /usr/sbin/ocf-tester: No such file or directory

On my other server, the resource-agents package includes the ocf-tester
command. Why does the package not include it on centillion.db01 and
centillion.db02?
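One way to check where the command is supposed to come from (a sketch, assuming yum on RHEL 6; the providing package may differ between repositories):

```
centillion.db01# rpm -ql resource-agents | grep ocf-tester
centillion.db01# yum provides '*/ocf-tester'
```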

Thanks,
-- 
Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] Antw: Re: Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Kiwamu Okabe
Hi Ulrich,

On Tue, Aug 18, 2015 at 10:43 PM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
 I haven't looked there, but there should be a manual page (ocf-tester(8)). I 
 use it like this:
 --
 /usr/sbin/ocf-tester -n your_resource_name \
 -o param1=value1 \
 ...
 path_to_RA
 --
 When testing, the resource should not be running (not be controlled by the 
 cluster).

I tried to find the ocf-tester command, but it was not found.
And the resource-agents package is also not installed!

I'll try to fix it.

Thanks,
-- 
Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Andrei Borzenkov
On Tue, Aug 18, 2015 at 11:57 AM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 Hi,

 On Tue, Aug 18, 2015 at 5:07 PM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 ```
 2015-08-18 16:50:38 7081 [ERROR] Slave I/O: Fatal error: The slave I/O
 thread stops because master and slave have equal MySQL server ids;
 these ids must be different for replication to work (or the
 --replicate-same-server-id option must be used on slave but this does
 not always make sense; please check the manual before using it).
 Error_code: 1593
 ```

 I fixed it, and got the following status:

 ```
 centillion.db01# mysql -e 'show slave status\G'
 ...
  Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 ...
 centillion.db01# mysql -e 'stop slave;'
 centillion.db02# mysql -e 'start slave;'
 centillion.db02# mysql -e 'show slave status\G'
 ...
  Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 ...
 ```

 But the error shown by crm has not changed...


You should attach full logs from both nodes, starting from Pacemaker startup.

 ```
 
 Last updated: Tue Aug 18 17:41:48 2015
 Stack: Heartbeat
 Current DC: centillion.db02 (0302e3d0-df06-4847-b0f9-9ebddfb6aec7) -
 partition with quorum
 Version: 1.0.13-a83fae5
 2 Nodes configured, unknown expected votes
 2 Resources configured.
 

 Online: [ centillion.db01 centillion.db02 ]

 vip_192.168.10.200  (ocf::heartbeat:IPaddr2):   Started 
 centillion.db01
  Master/Slave Set: mysql-clone
  mysql:1(ocf::heartbeat:mysql): Master centillion.db02 FAILED
  Masters: [ centillion.db01 ]

 Failed actions:
 mysql:1_monitor_20000 (node=centillion.db02, call=26, rc=8,
 status=complete): master
 mysql:1_monitor_30000 (node=centillion.db02, call=27, rc=8,
 status=complete): master
 ```

 Thank's,
 --
 Kiwamu Okabe at METASEPI DESIGN



Re: [ClusterLabs] upgrade from 1.1.9 to 1.1.12 fails to start

2015-08-18 Thread Streeter, Michelle N
I created a whole new virtual and installed everything with the new version and 
pacemaker wouldn't start.
I have not yet learned how to use the logs to see what they have to say.
No, I did not upgrade corosync.  I am running the latest which will work with 
rhel6.
When I tried later versions, they failed and I was told it was because we are 
not running rhel7.

I am getting the feeling this version of Pacemaker does not work on rhel6 
either.  Do you believe this is the case?
Or is there some configuration that needs to be done between 1.1.9 and 1.1.12?

Michelle Streeter
ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
The Boeing Company



[ClusterLabs] Antw: Re: Antw: Re: MySQL resource causes error 0_monitor_20000.

2015-08-18 Thread Ulrich Windl
 Kiwamu Okabe kiw...@debian.or.jp schrieb am 18.08.2015 um 14:55 in 
 Nachricht
caevx6dnufyi7_dgd4dxbmxb0ptznspxiaxumymqjj5f2b6h...@mail.gmail.com:
 Hi Ulrich,
 
 On Tue, Aug 18, 2015 at 9:34 PM, Ulrich Windl
 ulrich.wi...@rz.uni-regensburg.de wrote:
 I feel the message
 Aug 18 18:21:35 centillion.db01 lrmd: [15607]: info: RA output: 
 (mysql:0:promote:stderr) Error performing operation: The object/attribute 
 does not exist

 is a problem (why it's only info is another question when it says 
 Error).
 
 Thank's. It's important information for me.
 
 I wonder: Did you try the OCF tester?
 
 No.
 Is it explained at following URL?
 
 http://www.linux-ha.org/doc/dev-guides/_testing_resource_agents.html 

I haven't looked there, but there should be a manual page (ocf-tester(8)). I 
use it like this:
--
/usr/sbin/ocf-tester -n your_resource_name \
-o param1=value1 \
...
path_to_RA
--
When testing, the resource should not be running (not be controlled by the 
cluster).
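For the mysql RA discussed in this thread, that would look roughly like the following (a sketch: parameter values are copied from the primitive definition earlier in the thread, and the RA path is the conventional one, which should be verified locally):

```
/usr/sbin/ocf-tester -n mysql \
    -o binary=/usr/local/mysql/bin/mysqld_safe \
    -o datadir=/data/mysql \
    -o replication_user=repl \
    -o replication_passwd=slavepass \
    /usr/lib/ocf/resource.d/heartbeat/mysql
```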

Regards,
Ulrich

 
 Thank's a lot,
 -- 
 Kiwamu Okabe at METASEPI DESIGN
 