Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-12-02 Thread Surya K Ghatty

Hi Soumya, Kaleb, all:

Thanks for the response!


Quick follow-up to this question - We tried running ganesha and gluster on
two separate machines and the configuration seems to be working without
issues.

My follow-up question is this: what changes do I need to make to put
Ganesha in active-active HA mode, where the backend gluster and ganesha
servers are on different nodes? I am using the instructions here for putting
Ganesha in HA mode: http://www.slideshare.net/SoumyaKoduri/high-49117846.
This presentation refers to commands like gluster
cluster.enable-shared-storage to enable HA.
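For reference, the full form of the command referenced in the presentation
(as it appears elsewhere in this thread) is:

gluster volume set all cluster.enable-shared-storage enable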

1. Here is the config I am hoping to achieve:
glusterA and glusterB on individual bare metals - both in Trusted pool,
with volume gvol0 up and running.



Ganesha servers 1 and 2 will be on two separate machines, and my gluster
storage will be on a third machine, gluster1 (with a peer on another
machine, gluster2).

Ganesha node 1: on a VM, ganeshaA.
Ganesha node 2: on another VM, ganeshaB.

I would like to know what it takes to put ganeshaA and ganeshaB in
active-active HA mode. Is it technically possible?

a. How do commands like cluster.enable-shared-storage work in this case?
b. Where does this command need to be run - on the ganesha nodes, or on the
gluster nodes?


2. Also, is it possible to have multiple ganesha servers point to the same
gluster volume in the back end? Say, in configuration #1 above, I have
another ganesha server, ganeshaC, that is not clustered with ganeshaA or
ganeshaB. Can it export the volume gvol0 that ganeshaA and ganeshaB are
also exporting?

thank you!


Surya.

Regards,

Surya Ghatty

"This too shall pass"


Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
Development | tel: (507) 316-0559 | gha...@us.ibm.com




From:   Soumya Koduri <skod...@redhat.com>
To:     Surya K Ghatty/Rochester/IBM@IBMUS, gluster-users@gluster.org
Date:   11/18/2015 05:08 AM
Subject:Re: [Gluster-users] Configuring Ganesha and gluster on separate
nodes?





On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
> Hi:
>
> I am trying to understand if it is technically feasible to have gluster
> nodes on one machine, and export a volume from one of these nodes using
> a nfs-ganesha server installed on a totally different machine? I tried
> the below and showmount -e does not show my volume exported. Any
> suggestions will be appreciated.
>
> 1. Here is my configuration:
>
> Gluster nodes: glusterA and glusterB on individual bare metals - both in
> Trusted pool, with volume gvol0 up and running.
> Ganesha node: on bare metal ganeshaA.
>
> 2. my ganesha.conf looks like this with IP address of glusterA in the
FSAL.
>
> FSAL {
> Name = GLUSTER;
>
> # IP of one of the nodes in the trusted pool
> *hostname = "WW.ZZ.XX.YY" --> IP address of GlusterA.*
>
> # Volume name. Eg: "test_volume"
> volume = "gvol0";
> }
>
> 3. I disabled nfs on gvol0. As you can see, *nfs.disable is set to on.*
>
> [root@glusterA ~]# gluster vol info
>
> Volume Name: gvol0
> Type: Distribute
> Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: glusterA:/data/brick0/gvol0
> Options Reconfigured:
> *nfs.disable: on*
> nfs.export-volumes: off
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on
>
> 4. I then ran ganesha.nfsd -f /etc/ganesha/ganesha.conf -L
> /var/log/ganesha.log -N NIV_FULL_DEBUG
> Ganesha server was put in grace, no errors.
>
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
> nfs-ganesha-26426[reaper] fridgethr_freeze :RW LOCK :F_DBG :Released
> mutex 0x7f21a92818d0 (>mtx) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
> nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Acquired mutex
> 0x7f21ad1f18e0 (_mutex) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
> *17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
> nfs-ganesha-26426[reaper] nfs_in_grace :STATE :DEBUG :NFS Server IN
GRACE*
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
> nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Released mutex
> 0x7f21ad1f18e0 (_mutex) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141
>

You will still need the gluster-client bits on the machine where the
nfs-ganesha server is installed to export a gluster volume. Check whether
you have libgfapi.so installed on that machine.
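A quick way to verify (assuming an RPM-based distro, where the library
ships in the glusterfs-api package; names may differ elsewhere):

ldconfig -p | grep libgfapi
rpm -q glusterfs-api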

Also, the ganesha server does log warnings if it's unable to process the
EXPORT.

Re: [Gluster-users] vol set ganesha.enable errors out

2015-11-17 Thread Surya K Ghatty

Hi Kaleb, Thanks!

Filed a report here: https://bugzilla.redhat.com/show_bug.cgi?id=1282837.

Regards,

Surya Ghatty





From:   Kaleb KEITHLEY <kkeit...@redhat.com>
To:     Surya K Ghatty/Rochester/IBM@IBMUS
Cc: gluster-users@gluster.org
Date:   11/17/2015 08:47 AM
Subject:Re: [Gluster-users] vol set  ganesha.enable errors out



On 11/17/2015 09:30 AM, Surya K Ghatty wrote:
> Hi Kaleb,
>
> Sorry... here is the version from the other machine. Both have the same
> version.
>
> [root@conv-gls002 glusterfs]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com
> <http://www.gluster.com/>>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU
> General Public License.
>

Yup, I kinda suspected. ;-)

As a work-around, try skipping the `gluster volume set gvol0
ganesha.enable on`.

Write your own /etc/ganesha/ganesha.conf file. (Example in my blog post
at
http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
)
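For illustration, a minimal hand-written export block might look roughly
like this (the Export_Id, paths, and hostname here are placeholders, not a
tested config; see the blog post above for a complete example):

EXPORT {
Export_Id = 1;
Path = "/gvol0";
Pseudo = "/gvol0";
Access_Type = RW;

FSAL {
Name = GLUSTER;
hostname = "WW.ZZ.XX.YY";
volume = "gvol0";
}
}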

And if you wouldn't mind filing a bug at
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS we would
appreciate it.

Thanks

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] vol set ganesha.enable errors out

2015-11-17 Thread Surya K Ghatty


Hi:

I am running into the following error when trying to enable ganesha on my
system. This seems to be the same message as the one here:
https://bugzilla.redhat.com/show_bug.cgi?id=1004332.

[root@conv-gls002 ~]#  gluster volume set gvol0 ganesha.enable on
volume set: failed: Staging failed on gluster1. Error: One or more
connected clients cannot support the feature being set. These clients need
to be upgraded or disconnected before running this command again

However, I can execute some of the other gluster vol set commands.

Here is the log:
[2015-11-17 13:51:48.629507] E [MSGID: 106289]
[glusterd-syncop.c:1871:gd_sync_task_begin] 0-management: Failed to build
payload for operation 'Volume Set'
[2015-11-17 13:51:56.698145] E [MSGID: 106022]
[glusterd-utils.c:10154:glusterd_check_client_op_version_support]
0-management: One or more clients don't support the required op-version
[2015-11-17 13:51:56.698193] E [MSGID: 106301]
[glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Set' failed on localhost : One or more connected clients
cannot support the feature being set. These clients need to be upgraded or
disconnected before running this command again
[2015-11-17 13:54:32.759969] E [MSGID: 106022]
[glusterd-utils.c:10154:glusterd_check_client_op_version_support]
0-management: One or more clients don't support the required op-version
[2015-11-17 13:54:32.760017] E [MSGID: 106301]
[glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Set' failed on localhost : One or more connected clients
cannot support the feature being set. These clients need to be upgraded or
disconnected before running this command again
[2015-11-17 13:55:15.930722] E [MSGID: 106022]
[glusterd-utils.c:10154:glusterd_check_client_op_version_support]
0-management: One or more clients don't support the required op-version
[2015-11-17 13:55:15.930733] E [MSGID: 106301]
[glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Set' failed on localhost : One or more connected clients
cannot support the feature being set. These clients need to be upgraded or
disconnected before running this command again


The workaround seems to be to upgrade the "clients" to a certain level or
disconnect them. What client is this message referring to? I am running in
HA mode and have two glusterfs nodes. Both have gluster at the same
level (3.7.6). There are no lingering mounts, as far as I can tell.
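For what it's worth, one way to see which clients glusterd is counting,
and which op-version a node is running (output format varies by release):

gluster volume status gvol0 clients
grep operating-version /var/lib/glusterd/glusterd.info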

[root@conv-gls001 glusterfs]# gluster --version
glusterfs 3.7.6 built on Nov  9 2015 15:20:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.

[root@conv-gls001 ~]# gluster --version
glusterfs 3.7.6 built on Nov  9 2015 15:20:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.

I would appreciate any help! Thanks!

Regards,

Surya Ghatty


Re: [Gluster-users] vol set ganesha.enable errors out

2015-11-17 Thread Surya K Ghatty
Hi Kaleb,

Sorry... here is the version from the other machine. Both have the same
version.

[root@conv-gls002 glusterfs]# gluster --version
glusterfs 3.7.6 built on Nov  9 2015 15:20:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.


Regards,

Surya Ghatty





From:   Kaleb KEITHLEY <kkeit...@redhat.com>
To: Surya K Ghatty/Rochester/IBM@IBMUS, gluster-users@gluster.org
Date:   11/17/2015 08:25 AM
Subject:Re: [Gluster-users] vol set  ganesha.enable errors out



On 11/17/2015 09:08 AM, Surya K Ghatty wrote:
> Hi:
>
> I am running into the following error when trying to enable ganesha on
> my system. This seems to be the same message as the one here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1004332.
>
> [root@conv-gls002 ~]# gluster volume set gvol0 ganesha.enable on
> volume set: failed: Staging failed on gluster1. Error: One or more
> connected clients cannot support the feature being set. These clients
> need to be upgraded or disconnected before running this command again

Some "client" — client usually means a gluster fuse client mount, or a
gfapi client like the nfs-ganesha server — isn't using 3.7.x.

Based on what I think your setup is that seems really unlikely. But see
my comment/question at the end.


>
> However, I can execute some of the other gluster vol set commands.
>
> Here is the log:
> [2015-11-17 13:51:48.629507] E [MSGID: 106289]
> [glusterd-syncop.c:1871:gd_sync_task_begin] 0-management: Failed to
> build payload for operation 'Volume Set'
> [2015-11-17 13:51:56.698145] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:51:56.698193] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
> [2015-11-17 13:54:32.759969] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:54:32.760017] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
> [2015-11-17 13:55:15.930722] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:55:15.930733] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
>
>
> The work around seems to upgrade the "clients" to a certain level or
> disconnect them. What client is this message referring to? I am running
> in a HA mode, and have two glusterfs nodes. Both have gluster at the
> same level. (3.7.6). There are no lingering mounts, as far as I can tell.
>
> [root@conv-gls001 glusterfs]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26
> ...
> [root@conv-gls001 ~]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26

These look like the same machine, i.e. conv-gls001. Is that really
correct? Let's see the output from the _other_ machine.

--

Kaleb






[Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-17 Thread Surya K Ghatty


Hi:

I am trying to understand if it is technically feasible to have gluster
nodes on one machine, and export a volume from one of these nodes using a
nfs-ganesha server installed on a totally different machine? I tried the
below and showmount -e does not show my volume exported. Any suggestions
will be appreciated.

1. Here is my configuration:

 Gluster nodes: glusterA and glusterB on individual bare metals - both in
Trusted pool, with volume gvol0 up and running.
Ganesha node: on bare metal ganeshaA.

2. my ganesha.conf looks like this with IP address of glusterA in the FSAL.

 FSAL {
Name = GLUSTER;

# IP of one of the nodes in the trusted pool
hostname = "WW.ZZ.XX.YY"  --> IP address of GlusterA.

# Volume name. Eg: "test_volume"
volume = "gvol0";
}

3. I disabled nfs on gvol0. As you can see, nfs.disable is set to on.

[root@glusterA ~]# gluster vol info

Volume Name: gvol0
Type: Distribute
Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: glusterA:/data/brick0/gvol0
Options Reconfigured:
nfs.disable: on
nfs.export-volumes: off
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

4. I then ran ganesha.nfsd -f /etc/ganesha/ganesha.conf
-L /var/log/ganesha.log -N NIV_FULL_DEBUG
Ganesha server was put in grace, no errors.

17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA: nfs-ganesha-26426[reaper]
fridgethr_freeze :RW LOCK :F_DBG :Released mutex 0x7f21a92818d0 (>mtx)
at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA: nfs-ganesha-26426[reaper]
nfs_in_grace :RW LOCK :F_DBG :Acquired mutex 0x7f21ad1f18e0
(_mutex)
at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper]
nfs_in_grace :STATE :DEBUG :NFS Server IN GRACE
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper]
nfs_in_grace :RW LOCK :F_DBG :Released mutex 0x7f21ad1f18e0
(_mutex)
at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141

5. [root@ganeshaA glusterfs]# showmount -e
Export list for ganeshaA:


Any suggestions on what I am missing?

Regards,

Surya Ghatty


Re: [Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-09 Thread Surya K Ghatty

Hi Soumya, Avra,

The gluster_shared_storage volume has not been created, and no
shared_storage directory has been created. gluster peer status shows the
two nodes (gluster1 and gluster2) are up and running. Also, restarting
glusterd on either machine did not make any difference.

I used to have a three-node setup (gluster3, in addition to the above),
and I detached one of the nodes and uninstalled gluster on it. jiffin and
kkeithley_ pointed out on the ganesha IRC channel: "I think it didn't work
because it wants three nodes/bricks to make a "replica 3" volume."

I cleaned up glusterd on the remaining machines by stopping glusterd,
deleting /var/run/gluster/*, and then restarting glusterd on both machines.
It was at this point that cluster.enable-shared-storage seemed to work.

>Has the "gluster_shared_storage" volume creation itself failed? Verify
>in 'gluster volume info'.
>Also check if the gluster nodes are in healthy state 'gluster peer
>status'. Try restarting 'glusterd' service on both the nodes.

My observations:
1. I would expect the command to return unsuccessful if the creation of
the gluster_shared_storage volume or the shared directory fails. The
command currently returns a success message.
2. The command should display an appropriate error message indicating what
went wrong. Instead, there is no indication, either in the logs or on the
console, that something failed.

Let me know what you think.

Regards,

Surya Ghatty





From:   Soumya Koduri <skod...@redhat.com>
To: Surya K Ghatty/Rochester/IBM@IBMUS, gluster-users@gluster.org
Cc: Avra Sengupta <aseng...@redhat.com>
Date:   11/06/2015 11:18 AM
Subject:Re: [Gluster-users] Question on HA Active-Active Ganesha setup





On 11/05/2015 08:43 PM, Surya K Ghatty wrote:
> All... I need your help! I am trying to setup Highly available
> Active-Active Ganesha configuration on two glusterfs nodes based on
> instructions here:
>
>
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/

> and
> http://www.slideshare.net/SoumyaKoduri/high-49117846 and
> https://www.youtube.com/watch?v=Z4mvTQC-efM.
>
>
> *My questions:*
>
> 1. what is the expected behavior? Is the cluster.enable-shared-storage
> command expected to create shared storage? It seems odd to return a
> success message without creating the shared volume.
> 2. Any suggestions on how to get past this problem?
>
> *Details:*
> I am using glusterfs 3.7.5 and Ganesha 2.2.0.6 installable packages. I'm
> installing
>
> Also, I am using the following command
>
> gluster volume set all cluster.enable-shared-storage enable
>
> that would automatically setup the shared_storage directory under
> /run/gluster/ and automounts the shared volume for HA.
>
> This command was working perfectly fine, and I was able to setup ganesha
> HA successfully on cent OS 7.0 running on bare metals - until now.
>
>
>
> [root@qint-tor01-c7 gluster]# gluster vol set all
> cluster.enable-shared-storage enable
> volume set: success
>
> [root@qint-tor01-c7 gluster]# pwd
> /run/gluster
>
> [root@qint-tor01-c7 gluster]# ls
> 5027ba011969a8b2eca99ca5c9fb77ae.socket shared_storage
> changelog-9fe3f3fdd745db918d7d5c39fbe94017.sock snaps
> changelog-a9bf0a82aba38610df80c75a9adc45ad.sock
>
>
> Yesterday, we tried to deploy Ganesha HA with Gluster FSAL on a
> different cloud. and when I run the same command there, (same version of
> glusterfs and ganesha, same cent OS 7) - the command returned
> successfully, but it did not auto create the shared_storage directory.
> There were no logs either in
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>
> or /var/log/ganesha.log related to the command.
>
> However, I do see these logs written to the
etc-glusterfs-glusterd.vol.log
>
> [2015-11-05 14:43:00.692762] W [socket.c:588:__socket_rwv] 0-nfs: readv
> on /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket failed
> (Invalid argument)
>
> on both ganesha nodes independent of the commands I execute.
>
> regarding this error, I did a ss -x | grep
> /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket
>
> and it appears that no process was using these sockets, on either
machines.
>
> My questions:
>
> 1. what is the expected behavior? Is the cluster.enable-shared-storage
> command expected to create shared storage? It seems odd to return a
> success message without creating the shared volume.
Yes. This command creates the volume "gluster_shared_storage"

[Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-05 Thread Surya K Ghatty


All... I need your help! I am trying to set up a highly available
active-active Ganesha configuration on two glusterfs nodes based on the
instructions here:

https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/
 and
http://www.slideshare.net/SoumyaKoduri/high-49117846 and
https://www.youtube.com/watch?v=Z4mvTQC-efM.


My questions:

1. What is the expected behavior? Is the cluster.enable-shared-storage
command expected to create shared storage? It seems odd to return a success
message without creating the shared volume.
2. Any suggestions on how to get past this problem?

Details:
I am using the glusterfs 3.7.5 and Ganesha 2.2.0.6 installable packages.

Also, I am using the following command:

gluster volume set all cluster.enable-shared-storage enable

which automatically sets up the shared_storage directory
under /run/gluster/ and automounts the shared volume for HA.

This command was working perfectly fine, and I was able to set up ganesha
HA successfully on CentOS 7.0 running on bare metal - until now.



[root@qint-tor01-c7 gluster]# gluster vol set all
cluster.enable-shared-storage enable
volume set: success

[root@qint-tor01-c7 gluster]# pwd
/run/gluster

[root@qint-tor01-c7 gluster]# ls
5027ba011969a8b2eca99ca5c9fb77ae.socket  shared_storage
changelog-9fe3f3fdd745db918d7d5c39fbe94017.sock  snaps
changelog-a9bf0a82aba38610df80c75a9adc45ad.sock
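When the command succeeds, the result can also be cross-checked like this
(the mount point corresponds to the shared_storage entry in the listing
above):

gluster volume info gluster_shared_storage
df -h /run/gluster/shared_storage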


Yesterday, we tried to deploy Ganesha HA with the Gluster FSAL on a
different cloud, and when I ran the same command there (same versions of
glusterfs and ganesha, same CentOS 7), the command returned successfully,
but it did not auto-create the shared_storage directory. There were no
logs related to the command in
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log or /var/log/ganesha.log.

However, I do see these logs written to the etc-glusterfs-glusterd.vol.log

[2015-11-05 14:43:00.692762] W [socket.c:588:__socket_rwv] 0-nfs: readv
on /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket failed (Invalid
argument)

on both ganesha nodes, independent of the commands I execute.

Regarding this error, I ran ss -x |
grep /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket

and it appears that no process was using these sockets on either machine.

My questions:

1. What is the expected behavior? Is the cluster.enable-shared-storage
command expected to create shared storage? It seems odd to return a success
message without creating the shared volume.
2. Any suggestions on how to get past this problem?
Regards,

Surya Ghatty
