On Fri, 2018-10-26 at 18:36 +0200, Gabriel Buades wrote:
> Thanks Dileep.
> 
> I've modified the setup to match your suggestion, but I've got
> the same result.
> 
> Executing ifconfig -a, I can see node 1 has the floating IP added to
> the ethernet device.
> Executing ifconfig -a in node 2, the floating IP is missing, as
> expected.
> 
> But when node 2 tries to connect to the floating IP, it connects to
> itself.
> 
> Any other suggestion?

You definitely want lvs_support off: enabling it means "In case an IP
address is stopped, only move it to the loopback device to allow the
local node to continue to service requests, but no longer advertise it
on the network." That would lead to the behavior you describe. Since it
was set at one point, the address may now need to be cleared off the
loopback manually.
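
If the address is indeed stuck on the loopback from the earlier
lvs_support experiment, something like this on node 2 would confirm and
clear it (a sketch; the /32 prefix length is an assumption, use
whatever `ip addr` actually reports):

```shell
# Check whether the floating IP is still configured on the loopback device
ip addr show dev lo

# If 10.6.12.118 appears there, remove it manually
# (the /32 prefix is a guess; use the prefix that `ip addr` printed)
ip addr del 10.6.12.118/32 dev lo
```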

There is also the flush_routes option: "Flush the routing table on
stop. This is for applications which use the cluster IP address and
which run on the same physical host that the IP address lives on. The
Linux kernel may force that application to take a shortcut to the local
loopback interface, instead of the interface the address is really
bound to. Under those circumstances, an application may, somewhat
unexpectedly, continue to use connections for some time even after the
IP address is deconfigured." I am not sure why one setup would require
that and not another, but it's worth a try.
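
For reference, switching to the IPaddr2 agent with flush_routes enabled
would look roughly like this (an untested sketch based on the primitive
from this thread; verify the option name against your installed
resource-agents version):

```shell
crm configure primitive site_one_ip ocf:heartbeat:IPaddr2 \
    params ip=10.6.12.118 flush_routes=true \
    op monitor interval=40s timeout=20s
```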

> 
> 
> On Fri, Oct 26, 2018 at 15:27, Dileep V Nair (<dilen...@in.ibm.com>) wrote:
> > Hello Gabriel,
> > 
> > I have a similar cluster configuration running fine. I am using the
> > virtual IP to NFS-mount a filesystem from Node 1 to Node 2. The
> > differences I could see from your configuration:
> > > primitive site_one_ip ocf:heartbeat:IPaddr \
> > > params ip="192.168.2.200" cidr_netmask="255.255.252.0" nic="eth0" \
> > > op monitor interval="40s" timeout="20s"
> > 
> > 
> > I use ocf:heartbeat:IPaddr2 
> > I have given only the IP parameter, no netmask and nic
> > I have a virtual hostname associated with the IP addr using
> > /etc/hosts and use the virtual hostname to connect. 
> > 
> > 
> > Thanks & Regards
> > 
> > Dileep Nair
> > Squad Lead - SAP Base 
> > IBM Services for Managed Applications
> > +91 98450 22258 Mobile
> > dilen...@in.ibm.com
> > 
> > IBM Services
> > 
> > 
> > From: Gabriel Buades <gbua...@soffid.com>
> > To: users@clusterlabs.org
> > Date: 10/26/2018 06:34 PM
> > Subject: Re: [ClusterLabs] Floating IP active in both nodes
> > Sent by: "Users" <users-boun...@clusterlabs.org>
> > 
> > 
> > 
> > Hello Andrei.
> > 
> > 
> > I did not add lvs_support at first. I added it later, when I noticed
> > the problem, to test whether anything changed, but I got the same result.
> > 
> > Gabriel
> > 
> > 
> > On Fri, Oct 26, 2018 at 11:47, Andrei Borzenkov (<arvidjaar@gmail.com>) wrote:
> > On 26.10.2018 11:14, Gabriel Buades wrote:
> > > Dear cluster labs team.
> > > 
> > > I previously configured a two-node cluster with replicated MariaDB.
> > > To use one database as the active instance and the other as failover,
> > > I configured a cluster using heartbeat:
> > > 
> > > root@logpmgid01v:~$ sudo crm configure show
> > > node $id="59bbdb76-be67-4be0-aedb-9e27d65f371e" logpmgid01v
> > > node $id="adbc5972-c491-4fc4-b87d-8170e1b2d4d0" logpmgid02v \
> > > attributes standby="off"
> > > primitive site_one_ip ocf:heartbeat:IPaddr \
> > > params ip="192.168.2.200" cidr_netmask="255.255.252.0" nic="eth0" \
> > > op monitor interval="40s" timeout="20s"
> > > location site_one_ip_pref site_one_ip 100: logpmgid01v
> > > property $id="cib-bootstrap-options" \
> > > dc-version="1.1.10-42f2063" \
> > > cluster-infrastructure="heartbeat" \
> > > stonith-enabled="false"
> > > 
> > > Now, I've done a similar setup using corosync:
> > > root@908soffid02:~# crm configure show
> > > node 1: 908soffid01
> > > node 2: 908soffid02
> > > primitive site_one_ip IPaddr \
> > > params ip=10.6.12.118 cidr_netmask=255.255.0.0 nic=ens160 lvs_support=true \
> > 
> > What is the reason you added lvs_support? The previous configuration
> > did not have it.
> > 
> > > meta target-role=Started is-managed=true
> > > location cli-prefer-site_one_ip site_one_ip role=Started inf: 908soffid01
> > > location site_one_ip_pref site_one_ip 100: 908soffid01
> > > property cib-bootstrap-options: \
> > > have-watchdog=false \
> > > dc-version=1.1.14-70404b0 \
> > > cluster-infrastructure=corosync \
> > > cluster-name=debian \
> > > stonith-enabled=false \
> > > no-quorum-policy=ignore \
> > > maintenance-mode=false
> > > 
> > > Apparently, it works fine, and the floating IP address is active in
> > > node 1:
> > > root@908soffid02:~# crm_mon -1
> > > Last updated: Fri Oct 26 10:06:12 2018
> > > Last change: Fri Oct 26 10:02:53 2018 by root via cibadmin on 908soffid02
> > > Stack: corosync
> > > Current DC: 908soffid01 (version 1.1.14-70404b0) - partition with quorum
> > > 2 nodes and 1 resource configured
> > > 
> > > Online: [ 908soffid01 908soffid02 ]
> > > 
> > >  site_one_ip (ocf::heartbeat:IPaddr): Started 908soffid01
> > > 
> > > But when node2 tries to connect to the floating IP address, it gets
> > > connected to itself, even though the IP address is bound to the first
> > > node:
> > > root@908soffid02:~# ssh root@10.6.12.118 hostname
> > > root@soffiddb's password:
> > > 908soffid02
> > > 
> > > I'd like the second node to connect to the actual floating IP
> > > address, but I cannot see how to set it up. Any help is welcome.
> > > 
> > > I am using pacemaker 1.1.14-2ubuntu1.4 and corosync 2.3.5-3ubuntu2.1
> > > 
> > > Kind regards.
> > > 
> > > 
> > > 
> > > Gabriel Buades
> > > 
> > > 
> > > _______________________________________________
> > > Users mailing list: Users@clusterlabs.org
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > > 
> > > Project Home: http://www.clusterlabs.org
> > > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > Bugs: http://bugs.clusterlabs.org
> > > 
> > 
> > 
> > 
> > 
> > 
> 
-- 
Ken Gaillot <kgail...@redhat.com>
