Re: [Pacemaker] Clearing a resource which returned "not installed" from START

2011-04-01, Andrew Beekhof
On Thu, Mar 31, 2011 at 5:06 AM, Bob Schatz wrote:
> I am running Pacemaker 1.0.9 and Heartbeat 3.0.3.
>
> I started a resource and the agent start method returned "OCF_ERR_INSTALLED".
>
> I have fixed the problem and would like to restart the resource, but I cannot
> get it to restart.
>
> Any ideas?

crm resource cleanup SSJD6662 (or something similar)
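That is, clean up using the primitive's plain name rather than the :0/:1
instance names, then re-check the status. A minimal sketch using the names
from your output (the crm_resource form is the low-level equivalent of the
crm shell command):

  # crm resource cleanup SSJD6662
  # crm_resource --resource SSJD6662 --cleanup
  # crm_mon -1 -f

Cleanup removes the failed SSJD6662:0_start_0 record from the status
section, after which the cluster re-probes the resource and should attempt
the start again.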

>
>
> Thanks,
>
> Bob
>
>
> The failcounts are 0, as shown below and by the crm_resource command:
>
>     # crm_mon -1 -f
>     
>     Last updated: Wed Mar 30 19:55:39 2011
>     Stack: Heartbeat
>     Current DC: mgraid-sd6661-0 (f4e5e15c-d06b-4e37-89b9-4621af05128f) - partition with quorum
>     Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
>     2 Nodes configured, unknown expected votes
>     5 Resources configured.
>
>
>     Online: [ mgraid-sd6661-1 mgraid-sd6661-0 ]
>
>      Clone Set: Fencing
>          Started: [ mgraid-sd6661-1 mgraid-sd6661-0 ]
>      Clone Set: cloneIcms
>          Started: [ mgraid-sd6661-1 mgraid-sd6661-0 ]
>      Clone Set: cloneOmserver
>          Started: [ mgraid-sd6661-1 mgraid-sd6661-0 ]
>      Master/Slave Set: ms-SSSD6661
>          Masters: [ mgraid-sd6661-0 ]
>          Slaves: [ mgraid-sd6661-1 ]
>      Master/Slave Set: ms-SSJD6662
>          Masters: [ mgraid-sd6661-0 ]
>          Stopped: [ SSJD6662:0 ]
>
>     Migration summary:
>     * Node mgraid-sd6661-0:
>     * Node mgraid-sd6661-1:
>
>     Failed actions:
>        SSJD6662:0_start_0 (node=mgraid-sd6661-1, call=27, rc=5, status=complete): not installed
>
> I have also tried to clean up the resource with these commands:
>
>  # crm_resource --resource SSJD6662:0 --cleanup --node mgraid-sd6661-1
>  # crm_resource --resource SSJD6662:1 --cleanup --node mgraid-sd6661-1
>  # crm_resource --resource SSJD6662:0 --cleanup --node mgraid-sd6661-0
>  # crm_resource --resource SSJD6662:1 --cleanup --node mgraid-sd6661-0
>  # crm_resource --resource ms-SSJD6662 --cleanup --node mgraid-sd6661-1
>
>  # crm resource start SSJD6662:0
>
> My configuration is:
>
> node $id="856c1f72-7cd1-4906-8183-8be87eef96f2" mgraid-sd6661-1
> node $id="f4e5e15c-d06b-4e37-89b9-4621af05128f" mgraid-sd6661-0
> primitive SSJD6662 ocf:omneon:ss \
>        params ss_resource="SSJD6662" ssconf="/var/omneon/config/config.JD6662" \
>        op monitor interval="3s" role="Master" timeout="7s" \
>        op monitor interval="10s" role="Slave" timeout="7" \
>        op stop interval="0" timeout="20" \
>        op start interval="0" timeout="300"
> primitive SSSD6661 ocf:omneon:ss \
>        params ss_resource="SSSD6661" ssconf="/var/omneon/config/config.SD6661" \
>        op monitor interval="3s" role="Master" timeout="7s" \
>        op monitor interval="10s" role="Slave" timeout="7" \
>        op stop interval="0" timeout="20" \
>        op start interval="0" timeout="300"
> primitive icms lsb:S53icms \
>        op monitor interval="5s" timeout="7" \
>        op start interval="0" timeout="5"
> primitive mgraid-stonith stonith:external/mgpstonith \
>        params hostlist="mgraid-canister" \
>        op monitor interval="0" timeout="20s"
> primitive omserver lsb:S49omserver \
>        op monitor interval="5s" timeout="7" \
>        op start interval="0" timeout="5"
> ms ms-SSJD6662 SSJD6662 \
>        meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> ms ms-SSSD6661 SSSD6661 \
>        meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> clone Fencing mgraid-stonith
> clone cloneIcms icms
> clone cloneOmserver omserver
> location ms-SSJD6662-master-w1 ms-SSJD6662 \
>        rule $id="ms-SSJD6662-master-w1-rule" $role="master" 100: #uname eq mgraid-sd6661-1
> location ms-SSSD6661-master-w1 ms-SSSD6661 \
>        rule $id="ms-SSSD6661-master-w1-rule" $role="master" 100: #uname eq mgraid-sd6661-0
> order orderms-SSJD6662 0: cloneIcms ms-SSJD6662
> order orderms-SSSD6661 0: cloneIcms ms-SSSD6661
> property $id="cib-bootstrap-options" \
>        dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
>        cluster-infrastructure="Heartbeat" \
>        dc-deadtime="5s" \
>        stonith-enabled="true" \
>        last-lrm-refresh="1301536426"
>

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
