On 8/27/2015 10:22 PM, Nathan Williams wrote:
We have 2 OpenStack VMs with IPs on the internal network, a
keepalived-managed VIP on the internal network that's added to each VM's
allowed-address-pairs in Neutron, and a floating IP from the external
network mapped to the internal VIP.
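For anyone wanting to reproduce this, the Neutron side can be sketched roughly as below with the modern `openstack` CLI (the VIP 10.0.0.100, the port names, and the floating IP are all hypothetical placeholders, not Nathan's actual values):

```shell
# Allow the VIP on each VM's Neutron port, so traffic to the VIP
# isn't dropped by port security / anti-spoofing:
openstack port set --allowed-address ip-address=10.0.0.100 haproxy1-port
openstack port set --allowed-address ip-address=10.0.0.100 haproxy2-port

# The VIP can be created as its own port, and the external floating IP
# associated with it:
openstack floating ip set --port vip-port \
    --fixed-ip-address 10.0.0.100 203.0.113.10
```

This is a sketch of the general technique, not the exact commands from Nathan's environment (which likely used the older `neutron port-update` CLI in 2015).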
Yeah, keepalived handles the gratuitous ARP on failover; it works nicely. I
do miss the admin tools for Pacemaker, though. I'm AFK, but I'll write up a
full explanation of our HA setup when I'm back at a PC.
Cheers,
Nathan
On Thu, Aug 27, 2015, 6:11 PM Shawn Heisey hapr...@elyograg.org wrote:
On 8/27/2015 6:52 PM, Nathan Williams wrote:
There's a sysctl for that, net.ipv4.ip_nonlocal_bind.
Interesting. That's one I had never seen before. I would assume that
the OS does this intelligently, so that when the IP address *does*
suddenly appear at a later time, the application works.
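The effect of this sysctl is easy to demonstrate with a short Python sketch. With net.ipv4.ip_nonlocal_bind=0 (the Linux default), binding a socket to an address the host doesn't own fails with EADDRNOTAVAIL; with it set to 1, the bind succeeds even before keepalived moves the VIP onto the box. The address 203.0.113.1 below is from a documentation range and is assumed not to be configured locally:

```python
import errno
import socket

def can_bind(addr: str) -> bool:
    """Try to bind a TCP socket to addr; return True on success,
    False if the kernel rejects it as a non-local address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, 0))
        return True
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:
            return False
        raise
    finally:
        s.close()

print(can_bind("127.0.0.1"))    # local address: True
print(can_bind("203.0.113.1"))  # False unless ip_nonlocal_bind=1
```

This is why HAProxy can be configured with `bind` lines for the VIP on the standby node and still start cleanly once the sysctl is enabled.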
On Thu, Aug 27, 2015, 5:49 PM Shawn Heisey hapr...@elyograg.org wrote:
On 8/24/2015 12:06 PM, Dennis Jacobfeuerborn wrote:
There is no need to run a full Pacemaker stack. Just run HAProxy on both
nodes and manage the virtual IPs using keepalived.
Hi,
I have redundant HAProxy servers in my environment. We use Corosync and
Pacemaker to manage the HA, and then have HAProxy run on the HA domain
controller.
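For reference, a Pacemaker-managed setup along these lines is often wired up roughly as follows with `pcs` and the IPaddr2 resource agent. This is a generic sketch, not Kobus's actual configuration; the VIP and resource names are hypothetical:

```shell
# Virtual IP as a cluster resource (address is a placeholder):
pcs resource create haproxy-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s

# HAProxy itself, managed via its systemd unit:
pcs resource create haproxy systemd:haproxy op monitor interval=10s

# Keep HAProxy on whichever node holds the VIP, and bring the VIP up first:
pcs constraint colocation add haproxy with haproxy-vip INFINITY
pcs constraint order haproxy-vip then haproxy
```

The colocation and ordering constraints are what make the VIP and the proxy fail over together.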
On 21/08/2015 15:51, Jeff Palmer wrote:
I've done exactly this. Amazon AWS has a DNS service called Route53.
Route53 has built-in health checks.
There is no need to run a full Pacemaker stack. Just run HAProxy on both
nodes and manage the virtual IPs using keepalived.
Regards,
Dennis
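A minimal keepalived configuration for the approach Dennis describes might look like the sketch below (interface name, VIP, and password are hypothetical; keepalived sends the gratuitous ARP on state transition automatically, as Nathan notes elsewhere in the thread):

```
vrrp_script chk_haproxy {
    script "pidof haproxy"   # demote this node if haproxy dies
    interval 2
}

vrrp_instance VI_1 {
    state MASTER             # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 101             # lower (e.g. 100) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass examplepw
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
```

The second node gets the same file with `state BACKUP` and a lower priority; whichever node is master holds the VIP.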
On 08/24/2015 06:09 PM, Kobus Bensch wrote:
Hi
I have redundant haproxy servers on my environment. We use corosync and
pacemaker that manages the HA
I've done exactly this. Amazon AWS has a DNS service called Route53.
Route53 has built-in health checks.
What I did was set up the 2 HAProxy nodes as A records, with a
health check and a TTL of 30 seconds.
If one of the HAProxy nodes failed the health check twice, Route53
would remove it from the rotation.
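The setup Jeff describes can be sketched with the AWS CLI along these lines. The hosted-zone ID, domain, IPs, and health-check path are all placeholders, not his actual values; the 30-second interval and failure threshold of 2 mirror the behaviour he describes:

```shell
# Health check against one HAProxy node (repeat for the second node):
aws route53 create-health-check --caller-reference haproxy1-hc \
    --health-check-config '{"IPAddress":"203.0.113.10","Port":80,
      "Type":"HTTP","ResourcePath":"/healthz",
      "RequestInterval":30,"FailureThreshold":2}'

# Weighted A record tied to that health check (repeat with a second
# SetIdentifier and IP for the other node):
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
      "Name":"www.example.com","Type":"A","SetIdentifier":"haproxy1",
      "Weight":1,"TTL":30,"HealthCheckId":"<id from create-health-check>",
      "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'
```

When a health check fails, Route53 stops returning that record, and the short TTL limits how long clients keep the stale answer.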
Hello,
We are setting up a proxy, an HAProxy server on CentOS 7, in front of our
mail services (webmail, SMTP, POP3, IMAP; plain, with STARTTLS, or
SSL/TLS as appropriate). The load on the services is considered low. All
clients will be accessing the above services through the new proxy.
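A starting point for such a mail proxy might look like the sketch below. The backend addresses are placeholders; the mail listeners use TCP mode so that STARTTLS and SSL/TLS are passed through and terminated on the mail servers themselves, while webmail is proxied as plain HTTP:

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen smtp
    bind :25
    server mail1 192.0.2.10:25 check

listen imap
    bind :143
    server mail1 192.0.2.10:143 check

listen imaps
    bind :993
    server mail1 192.0.2.10:993 check

frontend webmail
    mode http
    bind :80
    default_backend webmail_be

backend webmail_be
    mode http
    server web1 192.0.2.11:80 check
```

POP3 (110/995) and submission (587) would follow the same TCP-mode pattern.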