Re: [ClusterLabs] Error When Creating LVM Resource

2016-08-26 Thread Jason A Ramsey
That makes sense. I hadn't yet configured the constraints on the cluster and 
was alarmed by the error messages…especially the ones where it seemed like the 
services weren't starting anywhere. Eventually, however, that somehow magically 
resolved itself, so I went ahead and added the resource constraints.

Here’s what I added:

# pcs constraint colocation add gctvanas-vip with gctvanas-fs2o INFINITY 
with-rsc-role=Master
# pcs constraint colocation add gctvanas-lvm with gctvanas-fs2o INFINITY 
with-rsc-role=Master
# pcs constraint colocation add gctvanas-tgt with gctvanas-fs2o INFINITY 
with-rsc-role=Master
# pcs constraint colocation add gctvanas-lun1 with gctvanas-fs2o INFINITY 
with-rsc-role=Master
# pcs constraint colocation add gctvanas-lun2 with gctvanas-fs2o INFINITY 
with-rsc-role=Master
# pcs constraint order promote gctvanas-fs2o then start gctvanas-lvm
# pcs constraint order gctvanas-vip then gctvanas-lvm
# pcs constraint order gctvanas-lvm then gctvanas-tgt
# pcs constraint order gctvanas-tgt then gctvanas-lun1
# pcs constraint order gctvanas-tgt then gctvanas-lun2
# pcs constraint
Location Constraints:
Ordering Constraints:
  promote gctvanas-fs2o then start gctvanas-lvm (kind:Mandatory)
  start gctvanas-vip then start gctvanas-lvm (kind:Mandatory)
  start gctvanas-lvm then start gctvanas-tgt (kind:Mandatory)
  start gctvanas-tgt then start gctvanas-lun1 (kind:Mandatory)
  start gctvanas-tgt then start gctvanas-lun2 (kind:Mandatory)
Colocation Constraints:
  gctvanas-vip with gctvanas-fs2o (score:INFINITY) (with-rsc-role:Master)
  gctvanas-lvm with gctvanas-fs2o (score:INFINITY) (with-rsc-role:Master)
  gctvanas-tgt with gctvanas-fs2o (score:INFINITY) (with-rsc-role:Master)
  gctvanas-lun1 with gctvanas-fs2o (score:INFINITY) (with-rsc-role:Master)
  gctvanas-lun2 with gctvanas-fs2o (score:INFINITY) (with-rsc-role:Master)
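
For reference, the same constraints can be listed with their IDs, which is 
handy if one ever needs to be removed or adjusted later:

# pcs constraint --full                  # same listing, with constraint IDs
# pcs constraint remove <constraint-id>  # drop one by ID if it needs changing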

I think this looks about right… hopefully nothing goes t/u when I test. Thanks 
for the input!
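
(For the test, I'll most likely just force a failover by putting the active 
node in standby and watching pcs status:

# pcs cluster standby node1     # push resources off the active node
# pcs status                    # confirm everything comes up on node2
# pcs cluster unstandby node1   # then let node1 rejoin
)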

--

[ jR ]
  @: ja...@eramsey.org

  there is no path to greatness; greatness is the path

From: Greg Woods <wo...@ucar.edu>
Reply-To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Date: Friday, August 26, 2016 at 2:09 PM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Error When Creating LVM Resource


On Fri, Aug 26, 2016 at 9:32 AM, Jason A Ramsey 
<ja...@eramsey.org> wrote:
Failed Actions:
* gctvanas-lvm_start_0 on node1 'not running' (7): call=42, status=complete, 
exitreason='LVM: targetfs did not activate correctly',
last-rc-change='Fri Aug 26 10:57:22 2016', queued=0ms, exec=577ms
* gctvanas-lvm_start_0 on node2 'unknown error' (1): call=34, status=complete, 
exitreason='Volume group [targetfs] does not exist or contains error!   Volume 
group "targetfs" not found',
last-rc-change='Fri Aug 26 10:57:21 2016', queued=0ms, exec=322ms


I think you need a colocation constraint to prevent it from trying to start the 
LVM resource on the DRBD secondary node. I used to run LVM-over-DRBD clusters 
but don't any more (switched to NFS backend storage), so I don't remember the 
exact syntax, but you certainly don't want the LVM resource to start on node2 
at this point because it will certainly fail.
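
If memory serves, it's something along these lines (using the resource names 
from your output; double-check the exact syntax):

# pcs constraint colocation add gctvanas-lvm with master gctvanas-fs2o INFINITY
# pcs constraint order promote gctvanas-fs2o then start gctvanas-lvm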

It may not be running on node1 because it failed on node2, so if you can get 
the proper colocation constraint in place, things may work after you do a 
resource cleanup. (I stand ready to be corrected by someone more knowledgeable 
who can spot a configuration problem that I missed).
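
The cleanup itself would be:

# pcs resource cleanup gctvanas-lvm   # clear the failure history so the cluster retries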

If you still get a failure and the constraint is correct, then I would try 
activating the volume group manually on the DRBD primary node to make sure 
that works.
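
A quick manual check might look like this (the VG name is taken from your 
error output; adjust for your setup):

# drbdadm role all        # confirm this node is the DRBD Primary
# vgchange -ay targetfs   # try activating the volume group by hand
# vgs targetfs            # verify the VG is actually visible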

--Greg

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org

