Re: [Gluster-users] glusterfs, ganesh, and pcs rules

2018-01-03 Thread Renaud Fortier
Have you tried with:
VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"
instead of:
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"

Also, I don't have HA_VOL_SERVER in my settings, but I'm using Gluster 3.10.x. I think it's deprecated in 3.10 but not in 3.8.

Is your /etc/hosts file (or your DNS) correct for these servers?

Renaud
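[Editorial note: Renaud's suggestion boils down to the VIP_ key suffixes having to match the node names listed in HA_CLUSTER_NODES. That can be checked mechanically; below is a minimal POSIX-shell sketch. The default path /etc/ganesha/ganesha-ha.conf and the helper name check_vips are assumptions, not from the thread.]

```shell
# Sketch: report every node in HA_CLUSTER_NODES that lacks a matching
# VIP_<node> line in a ganesha-ha.conf-style file.
check_vips() {
    conf="$1"
    # Pull the comma-separated node list out of HA_CLUSTER_NODES="..."
    nodes=$(sed -n 's/^HA_CLUSTER_NODES="\(.*\)"/\1/p' "$conf" | tr ',' ' ')
    for n in $nodes; do
        # The VIP key suffix is expected to be the node name itself.
        grep -q "^VIP_${n}=" "$conf" || echo "missing VIP entry for node: $n"
    done
}
```

Run as, e.g., check_vips /etc/ganesha/ganesha-ha.conf; any output names a node whose VIP key does not line up with the cluster node list.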


From: Hetz Ben Hamo [mailto:h...@hetz.biz]
Sent: 24 December 2017 04:33
To: Renaud Fortier
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] glusterfs, ganesh, and pcs rules

I checked, and I have it like this:

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"
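[Editorial note: if the VIP key suffixes must match the node names in HA_CLUSTER_NODES, as the replies in this thread suggest, the two VIP lines above would presumably become the fragment below. Addresses are kept masked as in the original; note that hyphens make these invalid shell variable names, so this presumably only works if ganesha-ha.sh reads the keys by pattern-matching the file, which may depend on your version.]

```
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
# VIP key suffixes matching the node names above:
VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"
```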

Thanks,
Hetz Ben Hamo
You're welcome to visit my consulting blog <http://linvirtstor.net/> or my personal blog <http://benhamo.org>

On Thu, Dec 21, 2017 at 3:47 PM, Renaud Fortier <renaud.fort...@fsaa.ulaval.ca> wrote:
Hi,
In your ganesha-ha.conf, do you have your virtual IP addresses set to something like this:

VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"

Renaud

From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On behalf of Hetz Ben Hamo
Sent: 20 December 2017 04:35
To: gluster-users@gluster.org
Subject: [Gluster-users] glusterfs, ganesh, and pcs rules

Hi,

I've just recreated the Gluster setup with NFS-Ganesha (GlusterFS version 3.8).

When I run the command gluster nfs-ganesha enable, it returns success. However, looking at the pcs status, I see this:

[root@tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with 
quorum
Last updated: Wed Dec 20 09:20:44 2017
Last change: Wed Dec 20 09:19:27 2017 by root via cibadmin on tlxdmz-nfs1

2 nodes configured
8 resources configured

Online: [ tlxdmz-nfs1 tlxdmz-nfs2 ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 tlxdmz-nfs1-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped
 tlxdmz-nfs2-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped

Failed Actions:
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): call=23, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): call=27, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): call=23, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=24ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): call=27, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=61ms


Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Any suggestions on how this can be fixed when enabling nfs-ganesha with the above command, or anything else I can do to fix the failed actions?

Thanks
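[Editorial note: if the renamed VIP keys turn out to be the fix, one plausible remediation sequence is sketched below. This is hypothetical, assembled from the commands in the thread plus standard pcs usage, and should be verified against your own setup before running on a live cluster.]

```shell
# Sketch of a possible fix, not a verified procedure.
gluster nfs-ganesha disable      # tear down the current ganesha HA resources

# Now edit ganesha-ha.conf so the VIP_* key suffixes match the node names
# in HA_CLUSTER_NODES (VIP_tlxdmz-nfs1 / VIP_tlxdmz-nfs2 instead of
# VIP_server1 / VIP_server2).

gluster nfs-ganesha enable       # recreate the cluster resources
pcs resource cleanup             # clear the stale 'Failed Actions' history
```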

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
