Re: [Pacemaker] DRBD+OCFS2+Pacemaker on UBUNTU 12.04, DRBD via pacemaker doesn't start when corosync invoked

2014-05-15 Thread emmanuel segura
You haven't declared your DRBD resource r0 in the configuration. Read this:
http://www.drbd.org/users-guide/s-configure-resource.html
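
For example, the ocf:linbit:drbd resource agent identifies the DRBD resource by
name, so the drbd_resource parameter in your Pacemaker configuration has to match
a "resource r0 { ... }" section in drbd.conf. A rough sketch in crm shell syntax
(the resource names here are only placeholders):

primitive p_drbd_r0 ocf:linbit:drbd \
  params drbd_resource="r0" \
  op monitor interval="30s" role="Master" \
  op monitor interval="31s" role="Slave"
ms ms_drbd_r0 p_drbd_r0 \
  meta master-max="2" master-node-max="1" clone-max="2" \
       clone-node-max="1" notify="true"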


2014-05-15 9:33 GMT+02:00 kamal kishi kamal.ki...@gmail.com:

 Hi All,

 My configuration is simple and straightforward: Ubuntu 12.04 is used to run
 Pacemaker, and Pacemaker runs DRBD and OCFS2.

 DRBD can be started manually without any errors or issues in the
 primary/primary configuration.

 (NOTE: This configuration is being done as a base for an ACTIVE-ACTIVE XEN
 setup, hence 'become-primary-on both;' is used in the DRBD config.)

 Configuration attached:
 1. DRBD
 2. Pacemaker

 Logs attached:
 Syslog1 - Server 1
 Syslog2 - Server 2

 I hope I can get a solution.

 --
 Regards,
 Kamal Kishore B V





-- 
this is my life and I live it as long as God wills
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] DRBD+OCFS2+Pacemaker on UBUNTU 12.04, DRBD via pacemaker doesn't start when corosync invoked

2014-05-15 Thread emmanuel segura
Try using a simple configuration like the following:

global {
  usage-count yes;
}
common {
  protocol C;
}
resource r0 {
  on alice {
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.31:7789;
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.32:7789;
    meta-disk internal;
  }
}

I don't know if this is the problem, but note that here the resource r0 is
declared outside of the global section.




2014-05-15 10:21 GMT+02:00 emmanuel segura emi2f...@gmail.com:

 You haven't declared your DRBD resource r0 in the configuration. Read this:
 http://www.drbd.org/users-guide/s-configure-resource.html


 2014-05-15 9:33 GMT+02:00 kamal kishi kamal.ki...@gmail.com:

 Hi All,

 My configuration is simple and straightforward: Ubuntu 12.04 is used to run
 Pacemaker, and Pacemaker runs DRBD and OCFS2.

 DRBD can be started manually without any errors or issues in the
 primary/primary configuration.

 (NOTE: This configuration is being done as a base for an ACTIVE-ACTIVE XEN
 setup, hence 'become-primary-on both;' is used in the DRBD config.)

 Configuration attached:
 1. DRBD
 2. Pacemaker

 Logs attached:
 Syslog1 - Server 1
 Syslog2 - Server 2

 I hope I can get a solution.

 --
 Regards,
 Kamal Kishore B V





 --
 this is my life and I live it as long as God wills




-- 
esta es mi vida e me la vivo hasta que dios quiera (this is my life and I live it as long as God wills)
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] DRBD+OCFS2+Pacemaker on UBUNTU 12.04, DRBD via pacemaker doesn't start when corosync invoked

2014-05-15 Thread kamal kishi
Hi emi,

The document also says the following:

It is also possible to use drbd.conf as a flat configuration file without
any include statements at all. Such a configuration, however, quickly
becomes cluttered and hard to manage, which is why the multiple-file
approach is the preferred one.

Regardless of which approach you employ, you should always make sure that
drbd.conf, and any other files it includes, are *exactly identical* on all
participating cluster nodes.

I've tried that way too, but to no avail.

I tried giving

params device=/dev/drbd0 directory=/cluster fstype=ocfs2 \

instead of

params device=/dev/drbd/by-res/r0 directory=/cluster fstype=ocfs2 \

but even that failed.

My doubt is this: I'm able to work with DRBD manually without any issue, so
why doesn't it work via Pacemaker?

Is there any useful info in the logs? I didn't find any, so I'm asking.
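
For context, the piece of the cluster configuration I'm fiddling with is along
these lines (a rough crm sketch with placeholder names, not the exact attached
configuration; the DLM/O2CB resources needed for OCFS2 are left out):

primitive p_fs_cluster ocf:heartbeat:Filesystem \
  params device="/dev/drbd/by-res/r0" directory="/cluster" fstype="ocfs2" \
  op monitor interval="20s"
clone cl_fs_cluster p_fs_cluster meta interleave="true"
# the filesystem clone must be colocated with, and started after,
# the dual-primary DRBD master/slave resource (ms_drbd_r0 here)
colocation col_fs_on_drbd inf: cl_fs_cluster ms_drbd_r0:Master
order ord_drbd_before_fs inf: ms_drbd_r0:promote cl_fs_cluster:start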



On Thu, May 15, 2014 at 1:51 PM, emmanuel segura emi2f...@gmail.com wrote:

 You haven't declared your DRBD resource r0 in the configuration. Read this:
 http://www.drbd.org/users-guide/s-configure-resource.html


 2014-05-15 9:33 GMT+02:00 kamal kishi kamal.ki...@gmail.com:

 Hi All,

 My configuration is simple and straightforward: Ubuntu 12.04 is used to run
 Pacemaker, and Pacemaker runs DRBD and OCFS2.

 DRBD can be started manually without any errors or issues in the
 primary/primary configuration.

 (NOTE: This configuration is being done as a base for an ACTIVE-ACTIVE XEN
 setup, hence 'become-primary-on both;' is used in the DRBD config.)

 Configuration attached:
 1. DRBD
 2. Pacemaker

 Logs attached:
 Syslog1 - Server 1
 Syslog2 - Server 2

 I hope I can get a solution.

 --
 Regards,
 Kamal Kishore B V





 --
 this is my life and I live it as long as God wills





-- 
Regards,
Kamal Kishore B V
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] DRBD+OCFS2+Pacemaker on UBUNTU 12.04, DRBD via pacemaker doesn't start when corosync invoked

2014-05-15 Thread emmanuel segura
Are you sure that your DRBD works when run by hand? This is from your log:

May 15 12:26:04 server1 lrmd: [1211]: info: RA output:
(Cluster-FS-DRBD:0:start:stderr) Command 'drbdsetup new-resource r0'
terminated with exit code 20
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Called drbdadm -c
/etc/drbd.conf new-resource r0
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Exit code 20
May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Command output:
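
To see what's going on, you could re-run by hand the same command the resource
agent calls and check its exit code; roughly (just a suggestion):

# is the DRBD kernel module loaded?
cat /proc/drbd

# can drbdadm parse the configuration, and does it know about r0?
drbdadm -c /etc/drbd.conf dump r0

# repeat the exact call from the log and show its exit code
drbdadm -c /etc/drbd.conf new-resource r0; echo $?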


2014-05-15 10:37 GMT+02:00 kamal kishi kamal.ki...@gmail.com:

 Hi emi,

 The document also says the following:

 It is also possible to use drbd.conf as a flat configuration file
 without any include statements at all. Such a configuration, however,
 quickly becomes cluttered and hard to manage, which is why the
 multiple-file approach is the preferred one.

 Regardless of which approach you employ, you should always make sure that
 drbd.conf, and any other files it includes, are *exactly identical* on
 all participating cluster nodes.

 I've tried that way too, but to no avail.

 I tried giving

 params device=/dev/drbd0 directory=/cluster fstype=ocfs2 \

 instead of

 params device=/dev/drbd/by-res/r0 directory=/cluster fstype=ocfs2 \

 but even that failed.

 My doubt is this: I'm able to work with DRBD manually without any issue, so
 why doesn't it work via Pacemaker?

 Is there any useful info in the logs? I didn't find any, so I'm asking.



 On Thu, May 15, 2014 at 1:51 PM, emmanuel segura emi2f...@gmail.com wrote:

 You haven't declared your DRBD resource r0 in the configuration. Read this:
 http://www.drbd.org/users-guide/s-configure-resource.html


 2014-05-15 9:33 GMT+02:00 kamal kishi kamal.ki...@gmail.com:

  Hi All,

 My configuration is simple and straightforward: Ubuntu 12.04 is used to run
 Pacemaker, and Pacemaker runs DRBD and OCFS2.

 DRBD can be started manually without any errors or issues in the
 primary/primary configuration.

 (NOTE: This configuration is being done as a base for an ACTIVE-ACTIVE XEN
 setup, hence 'become-primary-on both;' is used in the DRBD config.)

 Configuration attached:
 1. DRBD
 2. Pacemaker

 Logs attached:
 Syslog1 - Server 1
 Syslog2 - Server 2

 I hope I can get a solution.

 --
 Regards,
 Kamal Kishore B V





 --
 this is my life and I live it as long as God wills





 --
 Regards,
 Kamal Kishore B V





-- 
esta es mi vida e me la vivo hasta que dios quiera (this is my life and I live it as long as God wills)
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby - possible solution?

2014-05-15 Thread Ian

Doing some experiments and reading TFM, I found this:

5.2.2. Advisory Ordering
When the kind=Optional option is specified for an order constraint, the 
constraint is considered optional and only has an effect when both 
resources are stopping and/or starting. Any change in state of the first 
resource you specified has no effect on the second resource you 
specified.


(From 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Configuring_the_Red_Hat_High_Availability_Add-On_with_Pacemaker/index.html)


This seems to tickle the right area. Adding kind=Optional to the 
gfs2/drbd order constraint makes it all work as desired: start-up and 
shut-down are correctly ordered, and bringing the other node out of 
standby doesn't force a gratuitous restart of the gfs2 filesystem and 
the VMs that rely on it on the already-active node.
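
For reference, in pcs terms the advisory constraint corresponds to something
roughly like this (exact syntax may differ between pcs versions; the resource
names are the ones from the configuration below):

# remove the old mandatory ordering first (pcs constraint remove <id>),
# then re-add it as advisory:
pcs constraint order promote vm_storage_core_dev-master then start \
    vm_storage_core-clone kind=Optional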


Is that the correct solution, I wonder? The term 'Optional' makes me 
nervous, but the description matches the desired behavior, in normal 
cases at least.



FYI, Here's the working configuration:

# pcs config
Cluster Name: jusme
Corosync Nodes:

Pacemaker Nodes:
 sv06 sv07

Resources:
 Master: vm_storage_core_dev-master
  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 
clone-node-max=1 notify=true

  Group: vm_storage_core_dev
   Resource: res_drbd_vm1 (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=vm1
Operations: monitor interval=60s (res_drbd_vm1-monitor-interval-60s)
 Clone: vm_storage_core-clone
  Group: vm_storage_core
   Resource: res_fs_vm1 (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/drbd/by-res/vm1 directory=/data/vm1 
fstype=gfs2 options=noatime,nodiratime

Operations: monitor interval=60s (res_fs_vm1-monitor-interval-60s)
 Master: nfs_server_dev-master
  Meta Attrs: master-max=1 master-node-max=1 clone-max=2 
clone-node-max=1 notify=true

  Group: nfs_server_dev
   Resource: res_drbd_live (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=live
Operations: monitor interval=60s 
(res_drbd_live-monitor-interval-60s)
 Resource: res_vm_nfs_server (class=ocf provider=heartbeat 
type=VirtualDomain)

  Attributes: config=/etc/libvirt/qemu/vm09.xml
  Meta Attrs: resource-stickiness=100
  Operations: monitor interval=60s 
(res_vm_nfs_server-monitor-interval-60s)


Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
  promote vm_storage_core_dev-master then start vm_storage_core-clone 
(Optional) 
(id:order-vm_storage_core_dev-master-vm_storage_core-clone-Optional)
  promote nfs_server_dev-master then start res_vm_nfs_server (Mandatory) 
(id:order-nfs_server_dev-master-res_vm_nfs_server-mandatory)
  start vm_storage_core-clone then start res_vm_nfs_server (Mandatory) 
(id:order-vm_storage_core-clone-res_vm_nfs_server-mandatory)

Colocation Constraints:
  vm_storage_core-clone with vm_storage_core_dev-master (INFINITY) 
(rsc-role:Started) (with-rsc-role:Master) 
(id:colocation-vm_storage_core-clone-vm_storage_core_dev-master-INFINITY)
  res_vm_nfs_server with nfs_server_dev-master (INFINITY) 
(rsc-role:Started) (with-rsc-role:Master) 
(id:colocation-res_vm_nfs_server-nfs_server_dev-master-INFINITY)
  res_vm_nfs_server with vm_storage_core-clone (INFINITY) 
(id:colocation-res_vm_nfs_server-vm_storage_core-clone-INFINITY)



Ian.


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org