Re: [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-07 Thread Pranith Kumar Karampuri
Oleksandr,
Could you take a statedump of the shd process once every 5-10 minutes and
send maybe 5 samples of them when it starts to increase? This will help us
find which data types are being allocated a lot and lead us to possible
theories for the increase.
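One low-effort way to capture those samples (a sketch: glusterfs processes
write a statedump on receiving SIGUSR1, and the PID-file and dump paths below
assume the defaults visible in the ps output later in this thread):

===
# Sample the self-heal daemon 5 times, 10 minutes apart
SHD_PID=$(cat /var/lib/glusterd/glustershd/run/glustershd.pid)
for i in 1 2 3 4 5; do
    kill -USR1 "$SHD_PID"    # each signal produces a new dump file
    sleep 600
done
ls -l /var/run/gluster/glusterdump."$SHD_PID".dump.*
===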

On Wed, Jun 8, 2016 at 12:03 PM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:

> Also, I've checked shd log files, and found out that for some reason shd
> constantly reconnects to bricks: [1]
>
> Please note that the suggested fix [2] by Pranith does not help; the VIRT
> value still grows:
>
> ===
> root  1010  0.0  9.6 7415248 374688 ?  Ssl  Jun07   0:14
> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
> /var/lib/glusterd/glustershd/run/glustershd.pid -l
> /var/log/glusterfs/glustershd.log -S
> /var/run/gluster/7848e17764dd4ba80f4623aecb91b07a.socket --xlator-option
> *replicate*.node-uuid=80bc95e1-2027-4a96-bb66-d9c8ade624d7
> ===
>
> I do not know the reason why it is reconnecting, but I suspect the leak
> happens on that reconnect.
>
> CCing Pranith.
>
> [1] http://termbin.com/brob
> [2] http://review.gluster.org/#/c/14053/
>
> On 06.06.2016 12:21, Kaushal M wrote:
>
>> Has multi-threaded SHD been merged into 3.7.* by any chance? If not,
>> what I'm saying below doesn't apply.
>>
>> We saw problems when encrypted transports were used, because the RPC
>> layer was not reaping threads (doing pthread_join) when a connection
>> ended. This led to similar observations of huge VIRT and relatively
>> small RSS.
>>
>> I'm not sure how multi-threaded shd works, but it could be leaking
>> threads in a similar way.
>>
>> On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko
>>  wrote:
>>
>>> Hello.
>>>
>>> We use v3.7.11, a replica 2 setup between 2 nodes + 1 dummy node for
>>> keeping volume metadata.
>>>
>>> Now we observe huge VSZ (VIRT) usage by glustershd on dummy node:
>>>
>>> ===
>>> root 15109  0.0 13.7 76552820 535272 ? Ssl  May26   2:11
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>> /var/log/glusterfs/glustershd.log -S
>>> /var/run/gluster/7848e17764dd4ba80f4623aecb91b07a.socket --xlator-option
>>> *replicate*.node-uuid=80bc95e1-2027-4a96-bb66-d9c8ade624d7
>>> ===
>>>
>>> That is ~73G. RSS seems to be OK (~522M). Here is the statedump of the
>>> glustershd process: [1]
>>>
>>> Also, here is the sum of the sizes reported in the statedump:
>>>
>>> ===
>>> # cat /var/run/gluster/glusterdump.15109.dump.1465200139 | awk -F '='
>>> 'BEGIN {sum=0} /^size=/ {sum+=$2} END {print sum}'
>>> 353276406
>>> ===
>>>
>>> That is ~337 MiB.
>>>
>>> Also, here are VIRT values from 2 replica nodes:
>>>
>>> ===
>>> root 24659  0.0  0.3 5645836 451796 ?  Ssl  May24   3:28
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>> /var/log/glusterfs/glustershd.log -S
>>> /var/run/gluster/44ec3f29003eccedf894865107d5db90.socket --xlator-option
>>> *replicate*.node-uuid=a19afcc2-e26c-43ce-bca6-d27dc1713e87
>>> root 18312  0.0  0.3 6137500 477472 ?  Ssl  May19   6:37
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>> /var/log/glusterfs/glustershd.log -S
>>> /var/run/gluster/1670a3abbd1eea968126eb6f5be20322.socket --xlator-option
>>> *replicate*.node-uuid=52dca21b-c81c-48b5-9de2-1ed37987fbc2
>>> ===
>>>
>>> Those are 5 to 6G, which is much less than the dummy node has, but still
>>> looks too big to us.
>>>
>>> Should we care about the huge VIRT value on the dummy node? Also, how
>>> would one debug that?
>>>
>>> Regards,
>>>   Oleksandr.
>>>
>>> [1] https://gist.github.com/d2cfa25251136512580220fcdb8a6ce6
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-07 Thread Oleksandr Natalenko
Also, I've checked shd log files, and found out that for some reason shd 
constantly reconnects to bricks: [1]


Please note that the suggested fix [2] by Pranith does not help; the VIRT
value still grows:


===
root  1010  0.0  9.6 7415248 374688 ?  Ssl  Jun07   0:14
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p 
/var/lib/glusterd/glustershd/run/glustershd.pid -l 
/var/log/glusterfs/glustershd.log -S 
/var/run/gluster/7848e17764dd4ba80f4623aecb91b07a.socket --xlator-option 
*replicate*.node-uuid=80bc95e1-2027-4a96-bb66-d9c8ade624d7

===

I do not know the reason why it is reconnecting, but I suspect the leak
happens on that reconnect.
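One way to gauge the reconnect churn from the log (a sketch; the exact log
wording differs between releases, so treat the grep patterns as guesses):

===
# Count connect/disconnect events, then eyeball the most recent ones
grep -cE 'disconnected from|Connected to' /var/log/glusterfs/glustershd.log
grep -E 'disconnected from|Connected to' /var/log/glusterfs/glustershd.log | tail -20
===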


CCing Pranith.

[1] http://termbin.com/brob
[2] http://review.gluster.org/#/c/14053/

On 06.06.2016 12:21, Kaushal M wrote:

Has multi-threaded SHD been merged into 3.7.* by any chance? If not,
what I'm saying below doesn't apply.

We saw problems when encrypted transports were used, because the RPC
layer was not reaping threads (doing pthread_join) when a connection
ended. This led to similar observations of huge VIRT and relatively
small RSS.

I'm not sure how multi-threaded shd works, but it could be leaking
threads in a similar way.
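If multi-threaded SHD is in, one cheap check of this theory is to watch the
shd thread count next to VIRT over time (a sketch; the PID file path is taken
from the ps output in this thread):

===
# A steadily climbing thread count alongside VIRT would match unreaped
# threads: each leaked thread pins roughly one stack-sized mapping
watch -n 60 'ps -o nlwp,vsz -p $(cat /var/lib/glusterd/glustershd/run/glustershd.pid)'
===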

On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko
 wrote:

Hello.

We use v3.7.11, a replica 2 setup between 2 nodes + 1 dummy node for
keeping volume metadata.

Now we observe huge VSZ (VIRT) usage by glustershd on dummy node:

===
root 15109  0.0 13.7 76552820 535272 ? Ssl  May26   2:11
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/7848e17764dd4ba80f4623aecb91b07a.socket --xlator-option
*replicate*.node-uuid=80bc95e1-2027-4a96-bb66-d9c8ade624d7
===

That is ~73G. RSS seems to be OK (~522M). Here is the statedump of the
glustershd process: [1]

Also, here is the sum of the sizes reported in the statedump:

===
# cat /var/run/gluster/glusterdump.15109.dump.1465200139 | awk -F '='
'BEGIN {sum=0} /^size=/ {sum+=$2} END {print sum}'
353276406
===

That is ~337 MiB.
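To see which data types dominate, the same dump can be grouped per memusage
section; a sketch, assuming the 3.7 statedump layout where each section
header contains "usage-type":

===
# Sum size= per usage-type section and print the top allocators
awk -F= '/usage-type/ {type=$0} /^size=/ {sum[type]+=$2}
         END {for (t in sum) print sum[t], t}' \
    /var/run/gluster/glusterdump.15109.dump.1465200139 | sort -rn | head
===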

Also, here are VIRT values from 2 replica nodes:

===
root 24659  0.0  0.3 5645836 451796 ?  Ssl  May24   3:28
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/44ec3f29003eccedf894865107d5db90.socket --xlator-option
*replicate*.node-uuid=a19afcc2-e26c-43ce-bca6-d27dc1713e87
root 18312  0.0  0.3 6137500 477472 ?  Ssl  May19   6:37
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/1670a3abbd1eea968126eb6f5be20322.socket --xlator-option
*replicate*.node-uuid=52dca21b-c81c-48b5-9de2-1ed37987fbc2
===

Those are 5 to 6G, which is much less than the dummy node has, but still
looks too big to us.

Should we care about the huge VIRT value on the dummy node? Also, how
would one debug that?
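One way to see what the huge VIRT actually consists of is to list the
largest virtual mappings; a sketch, using the PID from the ps output above:

===
# Kbytes is column 2 of pmap -x output; many identical ~8192K anonymous
# mappings would point at leaked (never-joined) thread stacks
pmap -x 15109 | sort -k2 -n | tail -20
===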

Regards,
  Oleksandr.

[1] https://gist.github.com/d2cfa25251136512580220fcdb8a6ce6
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] scripts to get incoming bugs on components, number of reviews from gerrit

2016-06-07 Thread Pranith Kumar Karampuri
hi,
Does anyone know/have any scripts to get this information from
bugzilla/gerrit?
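A rough sketch of one possible starting point, assuming the Bugzilla 5 REST
API and Gerrit's REST endpoint (URLs, query parameters, and field shapes are
assumptions to verify):

===
# Incoming GlusterFS bugs grouped by component (the 'component' field is
# assumed to be a list, as on bugzilla.redhat.com)
curl -s 'https://bugzilla.redhat.com/rest/bug?product=GlusterFS&creation_time=2016-05-01&include_fields=component' \
  | jq -r '.bugs[].component[]' | sort | uniq -c | sort -rn

# Number of merged changes on Gerrit since a date; Gerrit JSON responses
# start with a )]}' guard line that must be stripped first
curl -s 'https://review.gluster.org/changes/?q=status:merged+after:%222016-05-01%22&n=500' \
  | sed 1d | jq length
===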

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] chown: changing ownership of `/build/install/bin/fusermount-glusterfs': Operation not permitted

2016-06-07 Thread Niels de Vos
On Wed, Jun 08, 2016 at 01:55:23AM +0200, Michael Scherer wrote:
> On Tue, Jun 07, 2016 at 23:30 +0200, Niels de Vos wrote:
> > On Tue, Jun 07, 2016 at 09:08:09PM +0530, Nigel Babu wrote:
> > > Hello,
> > > 
> > > misc and I have been trying to debug the regression test failures
> > > without much success. Does anyone know why the build fails with this:
> > > 
> > > chown: changing ownership of `/build/install/bin/fusermount-glusterfs':
> > > Operation not permitted
> > > 
> > > See error log for more context:
> > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21512/consoleFull
> > 
> > From contrib/fuse-util/Makefile.am:
> > 
> >   install-exec-hook:
> >   -chown root $(DESTDIR)$(bindir)/fusermount-glusterfs
> >   chmod u+s $(DESTDIR)$(bindir)/fusermount-glusterfs
> 
> I guess we need to add a '-' here.

We could do that, but I think the issue is pretty fatal in case setting
the +s fails. The whole point of fusermount-glusterfs is to allow
mounting as a non-root user (no idea if anyone uses it though, none of
the current testcases does). The "chown" should actually also fail and
give an error...

This error is not fatal for the smoke tests though, so *shrug*.
Successful smoke tests also have this error:
  https://build.gluster.org/job/smoke/28458/console

> (I know, I could make a patch, but after last time, I would rather wait
> after the release if there is a non zero risk of me screwing up review)
> 
> > Is /build maybe mounted with nosuid or similar? I do not think it
> > matters for our regression tests.
> 
> No, the script is running as jenkins, rather than root. I would say
> that's a bug if make install assumes it is run as root, since rpm does run
> it as a non-privileged user.

Oh, right, this is an environment where "make install" does not run as
root to prevent it from installing files in other locations.

Maybe one day we can just drop fusermount-glusterfs altogether and make
sure all non-upstream changes to fusermount have been merged upstream.

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] chown: changing ownership of `/build/install/bin/fusermount-glusterfs': Operation not permitted

2016-06-07 Thread Michael Scherer
On Tue, Jun 07, 2016 at 23:30 +0200, Niels de Vos wrote:
> On Tue, Jun 07, 2016 at 09:08:09PM +0530, Nigel Babu wrote:
> > Hello,
> > 
> > misc and I have been trying to debug the regression test failures
> > without much success. Does anyone know why the build fails with this:
> > 
> > chown: changing ownership of `/build/install/bin/fusermount-glusterfs':
> > Operation not permitted
> > 
> > See error log for more context:
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21512/consoleFull
> 
> From contrib/fuse-util/Makefile.am:
> 
>   install-exec-hook:
>   -chown root $(DESTDIR)$(bindir)/fusermount-glusterfs
>   chmod u+s $(DESTDIR)$(bindir)/fusermount-glusterfs

I guess we need to add a '-' here.
(I know, I could make a patch, but after last time, I would rather wait
after the release if there is a non zero risk of me screwing up review)
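For reference, a leading '-' on a recipe line tells make to ignore that
line's exit status; a tiny self-contained demonstration (GNU make reads a
makefile from stdin with -f -):

===
# make reports the failing line as '(ignored)' and still runs the next one
printf 'all:\n\t-false\n\ttrue\n' | make -f -
===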

> Is /build maybe mounted with nosuid or similar? I do not think it
> matters for our regression tests.

No, the script is running as jenkins, rather than root. I would say
that's a bug if make install assumes it is run as root, since rpm does run
it as a non-privileged user.

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] chown: changing ownership of `/build/install/bin/fusermount-glusterfs': Operation not permitted

2016-06-07 Thread Niels de Vos
On Tue, Jun 07, 2016 at 09:08:09PM +0530, Nigel Babu wrote:
> Hello,
> 
> misc and I have been trying to debug the regression test failures
> without much success. Does anyone know why the build fails with this:
> 
> chown: changing ownership of `/build/install/bin/fusermount-glusterfs':
> Operation not permitted
> 
> See error log for more context:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/21512/consoleFull

From contrib/fuse-util/Makefile.am:

  install-exec-hook:
  -chown root $(DESTDIR)$(bindir)/fusermount-glusterfs
  chmod u+s $(DESTDIR)$(bindir)/fusermount-glusterfs

Is /build maybe mounted with nosuid or similar? I do not think it
matters for our regression tests.
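A quick way to check the mount options (a sketch, assuming util-linux's
findmnt is available on the builders; path per the build log above):

===
findmnt -T /build -o TARGET,SOURCE,OPTIONS
===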

HTH,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] chown: changing ownership of `/build/install/bin/fusermount-glusterfs': Operation not permitted

2016-06-07 Thread Nigel Babu
Hello,

misc and I have been trying to debug the regression test failures
without much success. Does anyone know why the build fails with this:

chown: changing ownership of `/build/install/bin/fusermount-glusterfs':
Operation not permitted

See error log for more context:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/21512/consoleFull

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 45 minutes)

2016-06-07 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gluster source installation

2016-06-07 Thread Atin Mukherjee


On 06/07/2016 03:17 PM, jayakrishnan mm wrote:
> Hi,
> 
> Host system : Ubuntu 14.04
> Gluster ver : 3.7.11
> I did ./autogen.sh
> ./configure --enable-debug
> make
> make install
> 
> After that, when I try to manually start the gluster daemon, it gives the
> below error.
> 
> jk@jk:/usr/local/lib$ sudo killall glusterfsd
> glusterfsd: no process found
> jk@jk:/usr/local/lib$ sudo killall glusterfs
> glusterfs: no process found
> jk@jk:/usr/local/lib$ sudo killall glusterd
> glusterd: no process found
> jk@jk:/usr/local/lib$ sudo /etc/init.d/glusterd
> Usage: /etc/init.d/glusterd {start|stop|status|restart|force-reload}
> jk@jk:/usr/local/lib$ sudo /etc/init.d/glusterd start
>  * Starting glusterd service glusterd                         [fail]

Can you attach the glusterd log file please?
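For a source build with the default /usr/local prefix, the log usually lands
under /usr/local/var/log/glusterfs/. Running the daemon in the foreground is
another quick way to surface the failure (a sketch, using the binary path
from the error below):

===
sudo /usr/local/sbin/glusterd --debug    # stays in the foreground, logs to stderr
===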

> /usr/local/sbin/glusterd: option requires an argument -- 'f'
> Try `glusterd --help' or `glusterd --usage' for more information.
> 
> Also tried:
> jk@jk:~/gluster/glusterfs-3.7.11$ sudo service glusterfs-server start
> *start: Job failed to start*
> jk@jk:~/gluster/glusterfs-3.7.11$
> 
> 
> The libraries are installed in /usr/local/lib (libglusterfs.so.0.0.1)
> and the .so files are in /usr/local/lib/glusterfs/3.7.11
> 
> 
> jk@jk:/usr/local/lib$ cd glusterfs/
> jk@jk:/usr/local/lib/glusterfs$ ll
> total 12
> drwxr-xr-x 3 root root 4096 Jun  7 16:45 ./
> drwxr-xr-x 7 root root 4096 Jun  7 17:14 ../
> drwxr-xr-x 5 root root 4096 Jun  7 16:45 3.7.11/
> jk@jk:/usr/local/lib/glusterfs$ cd 3.7.11/
> jk@jk:/usr/local/lib/glusterfs/3.7.11$ ll
> total 20
> drwxr-xr-x  5 root root 4096 Jun  7 16:45 ./
> drwxr-xr-x  3 root root 4096 Jun  7 16:45 ../
> drwxr-xr-x  2 root root 4096 Jun  7 17:14 auth/
> drwxr-xr-x  2 root root 4096 Jun  7 17:14 rpc-transport/
> drwxr-xr-x 14 root root 4096 Jun  7 17:14 xlator/
> jk@jk:/usr/local/lib/glusterfs/3.7.11$
> 
> 
> Another question: where can I set the path for the lib install?
> (Currently it is /usr/local/lib.)
> I want to change it to /usr/lib/i386-linux-gnu, because the daemon is
> looking at this path for the shared libs. (For now I set symlinks.)
> 
> Pls. Help
> 
> Best regards
> JK
> 
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] gluster source installation

2016-06-07 Thread jayakrishnan mm
Hi,

Host system : Ubuntu 14.04
Gluster ver : 3.7.11
I did ./autogen.sh
./configure --enable-debug
make
make install

After that, when I try to manually start the gluster daemon, it gives the
below error.

jk@jk:/usr/local/lib$ sudo killall glusterfsd
glusterfsd: no process found
jk@jk:/usr/local/lib$ sudo killall glusterfs
glusterfs: no process found
jk@jk:/usr/local/lib$ sudo killall glusterd
glusterd: no process found
jk@jk:/usr/local/lib$ sudo /etc/init.d/glusterd
Usage: /etc/init.d/glusterd {start|stop|status|restart|force-reload}
jk@jk:/usr/local/lib$ sudo /etc/init.d/glusterd start
 * Starting glusterd service glusterd                          [fail]
/usr/local/sbin/glusterd: option requires an argument -- 'f'
Try `glusterd --help' or `glusterd --usage' for more information.

Also tried:
jk@jk:~/gluster/glusterfs-3.7.11$ sudo service glusterfs-server start
*start: Job failed to start*
jk@jk:~/gluster/glusterfs-3.7.11$


The libraries are installed in /usr/local/lib (libglusterfs.so.0.0.1)
and the .so files are in /usr/local/lib/glusterfs/3.7.11


jk@jk:/usr/local/lib$ cd glusterfs/
jk@jk:/usr/local/lib/glusterfs$ ll
total 12
drwxr-xr-x 3 root root 4096 Jun  7 16:45 ./
drwxr-xr-x 7 root root 4096 Jun  7 17:14 ../
drwxr-xr-x 5 root root 4096 Jun  7 16:45 3.7.11/
jk@jk:/usr/local/lib/glusterfs$ cd 3.7.11/
jk@jk:/usr/local/lib/glusterfs/3.7.11$ ll
total 20
drwxr-xr-x  5 root root 4096 Jun  7 16:45 ./
drwxr-xr-x  3 root root 4096 Jun  7 16:45 ../
drwxr-xr-x  2 root root 4096 Jun  7 17:14 auth/
drwxr-xr-x  2 root root 4096 Jun  7 17:14 rpc-transport/
drwxr-xr-x 14 root root 4096 Jun  7 17:14 xlator/
jk@jk:/usr/local/lib/glusterfs/3.7.11$


Another question: where can I set the path for the lib install? (Currently
it is /usr/local/lib.)
I want to change it to /usr/lib/i386-linux-gnu, because the daemon is
looking at this path for the shared libs. (For now I set symlinks.)
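Two common ways to handle this in an autotools build; a generic sketch, not
specific to glusterfs:

===
# Option 1: install straight into the distro's libdir
./configure --enable-debug --prefix=/usr --libdir=/usr/lib/i386-linux-gnu

# Option 2: keep the /usr/local prefix and register it with the loader
echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/local-glusterfs.conf
sudo ldconfig
===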

Pls. Help

Best regards
JK
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Some error appear when create glusterfs volume

2016-06-07 Thread Kaushal M
On Tue, Jun 7, 2016 at 11:37 AM, Atin Mukherjee  wrote:
> So I have an answer to this problem now. I was not aware that we have a
> limited set of commands (read-only) which are supported with the
> --remote-host option in the CLI; here is the list:
>
> [GLUSTER_CLI_LIST_FRIENDS]  = {"LIST_FRIENDS",  GLUSTER_CLI_LIST_FRIENDS,  glusterd_handle_cli_list_friends, NULL, 0, DRC_NA},
> [GLUSTER_CLI_UUID_GET]      = {"UUID_GET",      GLUSTER_CLI_UUID_GET,      glusterd_handle_cli_uuid_get,     NULL, 0, DRC_NA},
> [GLUSTER_CLI_DEPROBE]       = {"FRIEND_REMOVE", GLUSTER_CLI_DEPROBE,       glusterd_handle_cli_deprobe,      NULL, 0, DRC_NA},
> [GLUSTER_CLI_GET_VOLUME]    = {"GET_VOLUME",    GLUSTER_CLI_GET_VOLUME,    glusterd_handle_cli_get_volume,   NULL, 0, DRC_NA},
> [GLUSTER_CLI_GETWD]         = {"GETWD",         GLUSTER_CLI_GETWD,         glusterd_handle_getwd,            NULL, 1, DRC_NA},
> [GLUSTER_CLI_STATUS_VOLUME] = {"STATUS_VOLUME", GLUSTER_CLI_STATUS_VOLUME, glusterd_handle_status_volume,    NULL, 0, DRC_NA},
> [GLUSTER_CLI_LIST_VOLUME]   = {"LIST_VOLUME",   GLUSTER_CLI_LIST_VOLUME,   glusterd_handle_cli_list_volume,  NULL, 0, DRC_NA},
> [GLUSTER_CLI_MOUNT]         = {"MOUNT",         GLUSTER_CLI_MOUNT,         glusterd_handle_mount,            NULL, 1, DRC_NA},
> [GLUSTER_CLI_UMOUNT]        = {"UMOUNT",        GLUSTER_CLI_UMOUNT,        glusterd_handle_umount,           NULL, 1, DRC_NA},
>

Yup. This was done as a security fix. The `--remote-host` option
allowed the gluster CLI to connect to and command any glusterd, not
just those in the trusted storage pool.
This had major security concerns as it allowed external users complete
control over the pool.

To solve this, we created the glusterd.socket file and started using it for
CLI commands. I wanted to completely remove the `--remote-host`
option, but oVirt depended on it, so support for --remote-host was
retained and the supported commands were reduced to only the read-only
ones.

> HTH,
> Atin
>
>
>
>
> On 06/06/2016 02:51 PM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
>> Hi,
>>
>> Ok, thank you very much~~
>>
>> Thanks,
>> Br,
>> Jet
>>
>> -Original Message-
>> From: Atin Mukherjee [mailto:amukh...@redhat.com]
>> Sent: Monday, June 06, 2016 5:07 PM
>> To: Jiang, Jet (Nokia - CN/Hangzhou) ; Pranith Kumar 
>> Karampuri 
>> Cc: prani...@gluster.com; Madappa, Kaushal 
>> Subject: Re: Some error appear when create glusterfs volume
>>
>> One more thing to add here: remote-host option usage at the CLI is not
>> always safe when you run a heterogeneous cluster, since our CLI code is
>> not backward compatible. However, I'll look into this issue and update you.
>>
>> ~Atin
>>
>> On 06/06/2016 02:19 PM, Atin Mukherjee wrote:
>>> I am looking into it. This does look like a bug as it stands. Will update.
>>>
>>> ~Atin
>>>
>>> On 06/06/2016 01:49 PM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
 Hi,
 Sorry for the late response. The root cause of the issue is the wrong
 configuration of Kubernetes.

 Another question: about the "remote-host" option.
 When I execute a remote-host query for the related glusterfs info, it
 works, like the following:

 [root@482cde6d9191 deploy]# gluster --remote-host=6b4bae6cd3da.vtas.local  
 pool list
 UUID                                    Hostname     State
 b3d06f4e-6c70-4ce0-aeaa-5fd73824755f    localhost    Connected
 [root@482cde6d9191 deploy]#

 But when I use remote-host to probe a peer, it does not seem to take effect:

 [root@482cde6d9191 deploy]# gluster  peer probe ca8404991844.vtas.local  
 --remote-host=6b4bae6cd3da.vtas.local
 [root@482cde6d9191 deploy]#
 On the host ca8404991844.vtas.local, there is no gluster cluster.

 Does "remote-host" only support query commands?
 My gluster version is 3.7.11.

 Thanks,
 Br,
 Jet

 -Original Message-
 From: Atin Mukherjee [mailto:amukh...@redhat.com]
 Sent: Tuesday, May 24, 2016 3:03 PM
 To: Jiang, Jet (Nokia - CN/Hangzhou) ; Pranith Kumar 
 Karampuri 
 Cc: prani...@gluster.com; Madappa, Kaushal 
 Subject: Re: Some error appear when create glusterfs volume



 On 05/24/2016 11:42 AM, Jiang, Jet (Nokia - CN/Hangzhou) wrote:
> Hi,
> Thanks for your quick response.
> I tried as you suggested but it still failed.
> Another question: what causes the related rpc error?
 Does gluster peer status still show the other node as connected? If so
 then something is weird. Along with genuine RPC failures (in case a node
 loses its connection to the other peers) you may also see this error
 message if the CLI times out.

 Could you install the glusterfs-debuginfo package, attach gdb to the
 running glusterd process, and print the backtrace after issuing the volume
 create command?
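A minimal sketch of that, assuming the debuginfo package is installed and gdb
is available on the node:

===
# Attach to the running daemon non-interactively and dump all thread stacks
sudo gdb -p "$(pidof glusterd)" -batch -ex 'thread apply all bt'
===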