On Mon, Jun 6, 2011 at 10:22 PM, Randy Wilson <randyedwil...@gmail.com> wrote:
> Hi,
>
> I've set up two ClusterIP instances on a two-node cluster using the
> configuration below:
>
> node node1.domain.com
> node node2.domain.com
> primitive clusterip_33 ocf:heartbeat:IPaddr2 \
>     params ip="xxx.xxx.xxx.33" cidr_netmask="27" nic="eth0:10" \
>     clusterip_hash="sourceip-sourceport-destport" mac="01:XX:XX:XX:XX:XX"
> primitive clusterip_34 ocf:heartbeat:IPaddr2 \
>     params ip="xxx.xxx.xxx.34" cidr_netmask="27" nic="eth0:11" \
>     clusterip_hash="sourceip-sourceport-destport" mac="01:XX:XX:XX:XX:XX"
> clone clone_clusterip_33 clusterip_33 \
>     meta globally-unique="true" clone-max="2" clone-node-max="2" \
>     notify="true" target-role="Started" \
>     params resource-stickiness="0"
> clone clone_clusterip_34 clusterip_34 \
>     meta globally-unique="true" clone-max="2" clone-node-max="2" \
>     notify="true" target-role="Started" \
>     params resource-stickiness="0"
> property $id="cib-bootstrap-options" \
>     dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
>     cluster-infrastructure="openais" \
>     stonith-enabled="false" \
>     expected-quorum-votes="2" \
>     last-lrm-refresh="1307352624"
>
> The resources start up on each node, and the correct iptables rules
> are created.
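
A quick way to double-check that, assuming the stock iptables tools and
the ipt_CLUSTERIP module your distro ships, is something like:

   # the CLUSTERIP rules IPaddr2 adds should show up on the INPUT chain
   iptables -n -L INPUT | grep CLUSTERIP

   # and the module shows which node number(s) the local machine answers for
   cat /proc/net/ipt_CLUSTERIP/xxx.xxx.xxx.33

(Untested here, so treat that as a sketch rather than gospel.)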
>
> ============
> Last updated: Mon Jun  6 11:29:24 2011
> Stack: openais
> Current DC: node1.domain.com - partition with quorum
> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> ============
>
> Online: [ node1.domain.com node2.domain.com ]
>
>  Clone Set: clone_clusterip_33 (unique)
>      clusterip_33:0    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
>      clusterip_33:1    (ocf::heartbeat:IPaddr2):    Started node2.domain.com
>  Clone Set: clone_clusterip_34 (unique)
>      clusterip_34:0    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
>      clusterip_34:1    (ocf::heartbeat:IPaddr2):    Started node2.domain.com
>
> I receive an error whenever I attempt to migrate one of the clone
> instances so that a single node handles all of the ClusterIP traffic.
>
> crm(live)resource# migrate clusterip_33:1 node1.domain.com
> Error performing operation: Update does not conform to the configured 
> schema/DTD

You can't (yet) migrate individual clone instances, although
   migrate clusterip_33 node1.domain.com
might still do what you want.
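
For completeness, from the shell that would be something along the lines
of the following (untested; you may need the clone name,
clone_clusterip_33, depending on your shell version):

   # move everything to node1 -- this works by adding a location
   # constraint behind the scenes
   crm resource migrate clusterip_33 node1.domain.com

   # later, drop that constraint again so the cluster can rebalance
   crm resource unmigrate clusterip_33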
>
> crm(live)resource# migrate clusterip_34:1 node1.domain.com
> Error performing operation: Update does not conform to the configured 
> schema/DTD
>
> And when one of the nodes is taken offline by stopping corosync, the
> resources are stopped on the remaining node and cannot be started
> until the other node is brought back online.
>
> ============
> Last updated: Mon Jun  6 12:42:21 2011
> Stack: openais
> Current DC: node1.domain.com - partition WITHOUT quorum
> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> ============
>
> Online: [ node1.domain.com ]
> OFFLINE: [ node2.domain.com ]
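
That part is expected with your current settings: when one node of a
two-node cluster goes away, the survivor no longer has quorum (hence
the "partition WITHOUT quorum" above), and the default no-quorum-policy
is "stop".  If you want a two-node cluster to keep running resources in
that situation, the usual (if somewhat blunt) approach is:

   crm configure property no-quorum-policy="ignore"

Just be aware that you are then relying on stonith to deal with
split-brain, and you currently have stonith-enabled=false.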
>
> If I add a colocation constraint to the config:
>
> colocation coloc_clusterip inf: clone_clusterip_33 clone_clusterip_34
>
> then, when the offline node is brought back up, all the resources are
> started on the other node:
>
> ============
> Last updated: Mon Jun  6 13:00:39 2011
> Stack: openais
> Current DC: node1.domain.com - partition WITHOUT quorum
> Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> ============
>
> Online: [ node1.domain.com node2.domain.com ]
>
>  Clone Set: clone_clusterip_33 (unique)
>      clusterip_33:0    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
>      clusterip_33:1    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
>  Clone Set: clone_clusterip_34 (unique)
>      clusterip_34:0    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
>      clusterip_34:1    (ocf::heartbeat:IPaddr2):    Started node1.domain.com
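
I can't tell from the output alone why the policy engine prefers to
stack everything on node1 once node2 returns, but the allocation scores
usually make it obvious.  Assuming ptest is installed (it ships with
pacemaker 1.0), something like:

   # dump the allocation scores for the live CIB
   ptest -sL

should show the score each instance has on each node.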
>
> Can anyone see where I'm going wrong with this?
>
>
> Many thanks,
>
> REW
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
