Re: [ClusterLabs] Access denied when using Floating IP

2017-01-06 Thread Erick Ocrospoma
On 6 January 2017 at 14:37, Ken Gaillot  wrote:

> On 12/26/2016 12:03 AM, Kaushal Shriyan wrote:
> > Hi,
> >
> > I have set up Highly Available HAProxy Servers with Keepalived and
> > Floating IP.  I have the below details
> >
> > *Master Node keepalived.conf*
> >
> > global_defs {
> > # Keepalived process identifier
> > #lvs_id haproxy_DH
> > }
> > # Script used to check if HAProxy is running
> > vrrp_script check_haproxy {
> > script "/usr/bin/killall -0 haproxy"
> > interval 2
> > weight 2
> > }
> > # Virtual interface
> > # The priority determines the order in which the assigned interface
> > takes over in a failover
> > vrrp_instance VI_01 {
> > state MASTER
> > interface eth0
> > virtual_router_id 51
> > priority 200
> > # The virtual ip address shared between the two loadbalancers
> > virtual_ipaddress {
> > 172.16.0.75/32
> > }
> > track_script {
> > check_haproxy
> > }
> > }
> >
> > *Slave Node keepalived.conf*
> >
> > global_defs {
> > # Keepalived process identifier
> > #lvs_id haproxy_DH_passive
> > }
> > # Script used to check if HAProxy is running
> > vrrp_script check_haproxy {
> > script "/usr/bin/killall -0 haproxy"
> > interval 2
> > weight 2
> > }
> > # Virtual interface
> > # The priority determines the order in which the assigned interface
> > takes over in a failover
> > vrrp_instance VI_01 {
> > state BACKUP
> > interface eth0
> > virtual_router_id 51
> > priority 100
> > # The virtual ip address shared between the two loadbalancers
> > virtual_ipaddress {
> > 172.16.0.75/32 
> > }
> > track_script {
> > check_haproxy
> > }
> > }
> >
> > HAProxy Node 1 has two IP Addresses
> >
> > eth0 :- 172.16.0.20 LAN IP of the box Master Node
> > eth0 :- 172.16.0.75 Virtual IP
> >
> > eth0 :- 172.16.0.21 LAN IP of the box Slave Node
> >
> > On the MySQL server, I have given access for the floating IP 172.16.0.75:
> >
> > GRANT USAGE ON *.* TO 'haproxy_check'@'172.16.0.75';
> > GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'172.16.0.75' IDENTIFIED
> > BY PASSWORD '*7A3F28E9F3E3AEFDFF87BCFE119DCF830101DD71' WITH GRANT
> > OPTION;
> >
> > When I try to connect to the MySQL server using the floating IP
> > 172.16.0.75, I get access denied in spite of granting access with the
> > above commands. When I connect to the MySQL server using the LAN IP
> > 172.16.0.20, it works as expected. Is it because eth0 has two IPs,
> > 172.16.0.20 and 172.16.0.75?
>

Might be. Try granting privileges to both IPs.
You could also check which IP address you are actually logging in from:
http://serverfault.com/questions/65255/log-mysql-login-attempts
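
For example, something along these lines (log path and server host are just
placeholders) shows what the server actually sees:

# on the MySQL server: log all connection attempts, then watch the log
mysql -u root -p -e "SET GLOBAL general_log_file='/var/log/mysql/general.log'; SET GLOBAL general_log='ON';"
tail -f /var/log/mysql/general.log

# over a connection that works: which account did MySQL match you to?
mysql -h <mysql-server> -u haproxy_root -p -e "SELECT USER(), CURRENT_USER();"

USER() reports the user@host you connected as; CURRENT_USER() reports the
account entry MySQL actually used to authenticate you.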


> >
> > Please do let me know if you need any additional information.
> >
> > Regards,
> >
> > Kaushal
>
> People on this list tend to be more familiar with pacemaker clusters
> than keepalived, but my guess is that mysql's privileges apply to the IP
> address that the user is connecting *from*. Try giving the same
> privileges to the user at all other local IPs (or @'%' if you don't mind
> allowing connections from anywhere, and use a firewall to block unwanted
> connections instead).



-- 


Erick.


---
IRC :   zerick
Blog: http://zerick.me
About :  http://about.me/zerick
Linux User ID :  549567


Re: [ClusterLabs] Status and help with pgsql RA

2017-01-06 Thread Jehan-Guillaume de Rorthais
On Fri, 6 Jan 2017 13:47:34 -0600
Ken Gaillot  wrote:

> On 12/28/2016 02:24 PM, Nils Carlson wrote:
> > Hi,
> > 
> > I am looking to set up postgresql in high-availability and have been
> > comparing the guide at
> > http://wiki.clusterlabs.org/wiki/PgSQL_Replicated_Cluster with the
> > contents of the pgsql resource agent on github. It seems that there have
> > been substantial improvements in the resource agent regarding the use of
> > replication slots.
> > 
> > Could anybody look at updating the guide, or just sending it out in an
> > e-mail to help spread knowledge?
> > 
> > The replication slots with pacemaker look really cool; if I've
> > understood things right, there should be no need for manual work after
> > node recovery with replication slots (though there is a risk of a
> > full disk)?
> > 
> > All help, tips and guidance much appreciated.
> > 
> > Cheers,
> > Nils  
> 
> Hmm, that wiki page could definitely use updating. I'm personally not
> familiar with pgsql, so hopefully someone else can chime in.
> 
> Another user on this list has made an alternative resource agent that
> you might want to check out:
> 
> http://lists.clusterlabs.org/pipermail/users/2016-December/004740.html

Indeed, but PAF does not support replication slots itself.

As far as I understand the pgsql RA, the "replication_slot_name" parameter
is only used to generate the slot names, create the slots on all nodes, and
add the "primary_slot_name" parameter to the generated recovery.conf file of
each node. All other administrative considerations are left to the user.

Since PAF does not manage the recovery.conf files at all, it is quite easy
to get the same behavior: just create the required slots on all nodes by
hand and set up the recovery.conf files accordingly.
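
For example, something like this on each node (slot name and conninfo are
just examples):

# on every node that can become primary: create the physical slot
psql -c "SELECT pg_create_physical_replication_slot('slot_node1');"

# then point each standby's recovery.conf at its slot:
# primary_conninfo  = 'host=... user=replicator ...'
# primary_slot_name = 'slot_node1'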

Regards,



Re: [ClusterLabs] centos 7 drbd fubar

2017-01-06 Thread Dimitri Maziuk
On 01/06/2017 01:41 PM, Ken Gaillot wrote:

> That is disconcerting. Since no one here seems to know, have you tried
> asking on the drbd list? It sounds like an issue with the drbd kernel
> module.

AFAIK at least a couple of linbit people are on this list too.

I have another pair that fails over just fine; the difference is that it
doesn't export the drbd device over NFS. So it could be NFS. I also had
this pair working initially -- otherwise it would never have made it into
production -- so it may be the recent Red Hat kernels.
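
Next time it wedges I'll try something like this before the standby, to
check whether it's knfsd pinning the mount (untested, and the unit name
may differ):

exportfs -v                  # what is still exported from /raid?
fuser -vum /raid             # who is holding it?
systemctl stop nfs-server    # stop the kernel NFS server out of band
umount /raid                 # then see if the unmount goes through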

Thanks though, I'll probably try drbd-users next.
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [ClusterLabs] Status and help with pgsql RA

2017-01-06 Thread Ken Gaillot
On 12/28/2016 02:24 PM, Nils Carlson wrote:
> Hi,
> 
> I am looking to set up postgresql in high-availability and have been
> comparing the guide at
> http://wiki.clusterlabs.org/wiki/PgSQL_Replicated_Cluster with the
> contents of the pgsql resource agent on github. It seems that there have
> been substantial improvements in the resource agent regarding the use of
> replication slots.
> 
> Could anybody look at updating the guide, or just sending it out in an
> e-mail to help spread knowledge?
> 
> The replication slots with pacemaker look really cool; if I've
> understood things right, there should be no need for manual work after
> node recovery with replication slots (though there is a risk of a
> full disk)?
> 
> All help, tips and guidance much appreciated.
> 
> Cheers,
> Nils

Hmm, that wiki page could definitely use updating. I'm personally not
familiar with pgsql, so hopefully someone else can chime in.

Another user on this list has made an alternative resource agent that
you might want to check out:

http://lists.clusterlabs.org/pipermail/users/2016-December/004740.html



Re: [ClusterLabs] centos 7 drbd fubar

2017-01-06 Thread Ken Gaillot
On 12/27/2016 03:08 PM, Dimitri Maziuk wrote:
> I ran centos 7.3.1611 update over the holidays and my drbd + nfs + imap
> active-passive pair locked up again. This has now been consistent for at
> least 3 kernel updates. This time I had enough consoles open to run
> fuser & lsof though.
> 
> The procedure:
> 
> 1. pcs cluster standby 
> 2. yum up && reboot 
> 3. pcs cluster unstandby 
> 
> Fine so far.
> 
> 4. pcs cluster standby 
> results in
> 
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:41 INFO: Running 
>> stop for /dev/drbd0 on /raid
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:41 INFO: Trying to 
>> unmount /raid
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:41 ERROR: Couldn't 
>> unmount /raid; trying cleanup with TERM
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:41 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:42 ERROR: Couldn't 
>> unmount /raid; trying cleanup with TERM
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:42 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:43 ERROR: Couldn't 
>> unmount /raid; trying cleanup with TERM
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:43 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:44 ERROR: Couldn't 
>> unmount /raid; trying cleanup with KILL
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:44 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:45 ERROR: Couldn't 
>> unmount /raid; trying cleanup with KILL
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:46 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:47 ERROR: Couldn't 
>> unmount /raid; trying cleanup with KILL
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:47 INFO: No 
>> processes on /raid were signalled. force_unmount is set to 'yes'
>> Filesystem(drbd_filesystem)[18277]: 2016/12/23_17:36:48 ERROR: Couldn't 
>> unmount /raid, giving up!
>> Dec 23 17:36:48 [1138] zebrafish.bmrb.wisc.edu   lrmd:   notice: 
>> operation_finished:drbd_filesystem_stop_0:18277:stderr [ umount: 
>> /raid: target i
>> s busy. ]
> 
> ... until the system's powered down. Before power down I ran lsof, it
> hung, and fuser:
> 
>> # fuser -vum /raid
>>  USERPID ACCESS COMMAND
>> /raid:   root kernel mount (root)/raid
> 
> After running yum up on the primary and rebooting it again,
> 
> 5. pcs cluster unstandby 
> causes the same fail to unmount loop on the secondary, that has to be
> powered down until the primary recovers.
> 
> Hopefully I'm doing something wrong, please someone tell me what it is.
> Anyone? Bueller?

That is disconcerting. Since no one here seems to know, have you tried
asking on the drbd list? It sounds like an issue with the drbd kernel
module.

http://lists.linbit.com/listinfo/drbd-user
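
If the kernel NFS server is exporting that filesystem, it's also worth
checking that nfs-server is itself a cluster resource, colocated with and
ordered after the filesystem, so it is stopped before the unmount is
attempted. A sketch, with assumed resource names:

pcs resource create nfs_server systemd:nfs-server
pcs constraint colocation add nfs_server with drbd_filesystem
pcs constraint order drbd_filesystem then nfs_server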




Re: [ClusterLabs] Access denied when using Floating IP

2017-01-06 Thread Ken Gaillot
On 12/26/2016 12:03 AM, Kaushal Shriyan wrote:
> Hi,
> 
> I have set up Highly Available HAProxy Servers with Keepalived and
> Floating IP.  I have the below details 
> 
> *Master Node keepalived.conf*
> 
> global_defs {
> # Keepalived process identifier
> #lvs_id haproxy_DH
> }
> # Script used to check if HAProxy is running
> vrrp_script check_haproxy {
> script "/usr/bin/killall -0 haproxy"
> interval 2
> weight 2
> }
> # Virtual interface
> # The priority determines the order in which the assigned interface
> takes over in a failover
> vrrp_instance VI_01 {
> state MASTER
> interface eth0
> virtual_router_id 51
> priority 200
> # The virtual ip address shared between the two loadbalancers
> virtual_ipaddress {
> 172.16.0.75/32
> }
> track_script {
> check_haproxy
> }
> }
> 
> *Slave Node keepalived.conf*
> 
> global_defs {
> # Keepalived process identifier
> #lvs_id haproxy_DH_passive
> }
> # Script used to check if HAProxy is running
> vrrp_script check_haproxy {
> script "/usr/bin/killall -0 haproxy"
> interval 2
> weight 2
> }
> # Virtual interface
> # The priority determines the order in which the assigned interface
> takes over in a failover
> vrrp_instance VI_01 {
> state BACKUP
> interface eth0
> virtual_router_id 51
> priority 100
> # The virtual ip address shared between the two loadbalancers
> virtual_ipaddress {
> 172.16.0.75/32 
> }
> track_script {
> check_haproxy
> }
> }
> 
> HAProxy Node 1 has two IP Addresses
> 
> eth0 :- 172.16.0.20 LAN IP of the box Master Node
> eth0 :- 172.16.0.75 Virtual IP
> 
> eth0 :- 172.16.0.21 LAN IP of the box Slave Node
> 
> On the MySQL server, I have given access for the floating IP 172.16.0.75:
> 
> GRANT USAGE ON *.* TO 'haproxy_check'@'172.16.0.75';
> GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'172.16.0.75' IDENTIFIED
> BY PASSWORD '*7A3F28E9F3E3AEFDFF87BCFE119DCF830101DD71' WITH GRANT OPTION;
> 
> When I try to connect to the MySQL server using the floating IP
> 172.16.0.75, I get access denied in spite of granting access with the
> above commands. When I connect to the MySQL server using the LAN IP
> 172.16.0.20, it works as expected. Is it because eth0 has two IPs,
> 172.16.0.20 and 172.16.0.75?
> 
> Please do let me know if you need any additional information.
> 
> Regards,
> 
> Kaushal

People on this list tend to be more familiar with pacemaker clusters
than keepalived, but my guess is that mysql's privileges apply to the IP
address that the user is connecting *from*. Try giving the same
privileges to the user at all other local IPs (or @'%' if you don't mind
allowing connections from anywhere, and use a firewall to block unwanted
connections instead).
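
For example (a sketch; replace 'secret' with your own password handling):

mysql -u root -p <<'SQL'
-- outgoing connections normally use the interface's primary address,
-- not the VIP, so cover every address the HAProxy nodes may come from
GRANT USAGE ON *.* TO 'haproxy_check'@'172.16.0.20';
GRANT USAGE ON *.* TO 'haproxy_check'@'172.16.0.21';
GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'172.16.0.20'
    IDENTIFIED BY 'secret' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'172.16.0.21'
    IDENTIFIED BY 'secret' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL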



Re: [ClusterLabs] Galera 10.1 cluster

2017-01-06 Thread Oscar Segarra
Hi, I get errors like the following:

2017-01-06 11:25:47 139902389713152 [ERROR] WSREP: failed to open gcomm
backend connection: 131: invalid UUID:  (FATAL)
 at gcomm/src/pc.cpp:PC():267
2017-01-06 11:25:47 139902389713152 [ERROR] WSREP:
gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend
connection: -131 (State not recoverable)
2017-01-06 11:25:47 139902389713152 [ERROR] WSREP:
gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel
'vdic-galera-cluster' at
'gcomm://vdicdb01-priv,vdicdb02-priv,vdicdb03-priv': -131 (State not
recoverable)
2017-01-06 11:25:47 139902389713152 [ERROR] WSREP: gcs connect failed:
State not recoverable
2017-01-06 11:25:47 139902389713152 [ERROR] WSREP:
wsrep::connect(gcomm://vdicdb01-priv,vdicdb02-priv,vdicdb03-priv) failed: 7
2017-01-06 11:25:47 139902389713152 [ERROR] Aborting

Of course, all three nodes are started up!

Thanks a lot.

2017-01-06 11:14 GMT+01:00 Oscar Segarra :

> Hi,
>
> In my environment I'm not able to bootstrap the cluster after a crash. I
> thought it could be a configuration problem at the cluster level, and I
> wanted to know if anybody has been able to configure it.
>
> Thanks a lot.
>
> 2017-01-06 9:34 GMT+01:00 Damien Ciabrini :
>
>> Hey Oscar,
>>
>> - Original Message -
>> > Hi,
>> >
>> > Has anybody been able to set up a Galera cluster with the latest
>> > available version of Galera?
>> >
>> > Can anybody paste the configuration?
>> >
>> > I have tested it but I have not been able to make it run resiliently.
>> >
>> Can you be more specific about why it wouldn't run "resiliently"?
>> I just saw you've asked a question on the codership group re. bootstrapping
>> the cluster; is it related?
>>
>> > Any help will be welcome!
>> >
>> > Thanks a lot.
>> >


Re: [ClusterLabs] Galera 10.1 cluster

2017-01-06 Thread Oscar Segarra
Hi,

In my environment I'm not able to bootstrap the cluster after a crash. I
thought it could be a configuration problem at the cluster level, and I
wanted to know if anybody has been able to configure it.

Thanks a lot.
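
For reference, the usual manual recovery after a full-cluster crash is
something like this on MariaDB 10.1 (assuming a build recent enough to
have safe_to_bootstrap; paths are the defaults):

# find the node with the most advanced state (highest seqno)
cat /var/lib/mysql/grastate.dat
# on that node only, mark it bootstrappable and start it
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
galera_new_cluster
# then start the remaining nodes normally so they rejoin via SST/IST
systemctl start mariadb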

2017-01-06 9:34 GMT+01:00 Damien Ciabrini :

> Hey Oscar,
>
> - Original Message -
> > Hi,
> >
> > Has anybody been able to set up a Galera cluster with the latest
> > available version of Galera?
> >
> > Can anybody paste the configuration?
> >
> > I have tested it but I have not been able to make it run resiliently.
> >
> Can you be more specific about why it wouldn't run "resiliently"?
> I just saw you've asked a question on the codership group re. bootstrapping
> the cluster; is it related?
>
> > Any help will be welcome!
> >
> > Thanks a lot.
> >


Re: [ClusterLabs] Galera 10.1 cluster

2017-01-06 Thread Damien Ciabrini
Hey Oscar,

- Original Message -
> Hi,
> 
> Has anybody been able to set up a Galera cluster with the latest
> available version of Galera?
> 
> Can anybody paste the configuration?
> 
> I have tested it but I have not been able to make it run resiliently.
> 
Can you be more specific about why it wouldn't run "resiliently"?
I just saw you've asked a question on the codership group re. bootstrapping
the cluster; is it related?
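
For comparison, a galera resource under pacemaker is typically created
along these lines (node names made up; adjust master-max to your node
count):

pcs resource create galera ocf:heartbeat:galera \
    enable_creation=true \
    wsrep_cluster_address="gcomm://node1,node2,node3" \
    meta master-max=3 ordered=true --master

After a full stop, the agent compares each node's last committed seqno and
chooses the bootstrap node itself.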

> Any help will be welcome!
> 
> Thanks a lot.
> 