Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Bartosz Zięba

It's a great idea! :)

But think about creating a Slack for all the Red Hat-provided open source
projects: for example, one Slack workspace with separate Gluster, Ceph,
Fedora, etc. channels.


I can't wait for it!

Bartosz


On 08.11.2017 22:22, Amye Scavarda wrote:

From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13

Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.
- amye






Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Jim Kinney
The archival process of the mailing list makes searching for past issues
possible. Slack, and IRC in general, is more of a closed garden than a publicly
archived mailing list.

That said, IRC/Slack is good for immediate interaction between people, say, a
gluster user with a nightmare and a knowledgeable developer with deep
understanding and a willingness to assist.

If there's a way to make a debug/help/fix session publicly available and,
crucially, referenced in the mailing list archive, then IRC/Slack is a great
additional communication channel.

On November 8, 2017 4:22:44 PM EST, Amye Scavarda  wrote:
>From today's community meeting, we had an item from the issue queue:
>https://github.com/gluster/community/issues/13
>
>Should we have a Gluster Community slack team? I'm interested in
>everyone's thoughts on this.
>- amye
>
>-- 
>Amye Scavarda | a...@redhat.com | Gluster Community Lead

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.

Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Amye Scavarda
On Wed, Nov 8, 2017 at 3:23 PM, Jim Kinney  wrote:
> The archival process of the mailing list makes searching for past issues
> possible. Slack, and IRC in general, is more of a closed garden than a
> publicly archived mailing list.
>
> That said, IRC/Slack is good for immediate interaction between people, say, a
> gluster user with a nightmare and a knowledgeable developer with deep
> understanding and a willingness to assist.
>
> If there's a way to make a debug/help/fix session publicly available and,
> crucially, referenced in the mailing list archive, then IRC/Slack is a great
> additional communication channel.

So at the moment, we do have the logs from IRC made public. We could
probably do the same thing for a possible Slack instance, but that's
not really an improvement.
I think being able to get some of the debug sessions into documentation
might help with that problem.

However, I'm willing to hear more about what else we should be doing to
support users!
- amye

>
> On November 8, 2017 4:22:44 PM EST, Amye Scavarda  wrote:
>>
>> From today's community meeting, we had an item from the issue queue:
>> https://github.com/gluster/community/issues/13
>>
>> Should we have a Gluster Community slack team? I'm interested in
>> everyone's thoughts on this.
>> - amye
>
>
> --
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and
> reflect authenticity.



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead


Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Vijay Bellur
On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda  wrote:

> From today's community meeting, we had an item from the issue queue:
> https://github.com/gluster/community/issues/13
>
> Should we have a Gluster Community slack team? I'm interested in
> everyone's thoughts on this.
>
>

+1 to the idea.

One of the limitations I have encountered in a few slack channels is the
lack of archiving & hence limited search capabilities. If we establish a
gluster channel, what would be the archiving strategy?

Thanks,
Vijay

Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Amye Scavarda
On Wed, Nov 8, 2017 at 3:09 PM, Vijay Bellur  wrote:
>
>
> On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda  wrote:
>>
>> From today's community meeting, we had an item from the issue queue:
>> https://github.com/gluster/community/issues/13
>>
>> Should we have a Gluster Community slack team? I'm interested in
>> everyone's thoughts on this.
>>
>
>
> +1 to the idea.
>
> One of the limitations I have encountered in a few slack channels is the
> lack of archiving & hence limited search capabilities. If we establish a
> gluster channel, what would be the archiving strategy?
>
> Thanks,
> Vijay
>

If we want this, we can use something like slackarchive to archive it and
then publish it on the website.
We could also look at making more use of Gitter, which we already have, or a
Slack-IRC bridge.
Many options!
- amye


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead


Re: [Gluster-users] Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.

2017-11-08 Thread Sam McLeod

> On 8 Nov 2017, at 9:03 pm, Nithya Balachandran  wrote:
> 
> 
> That is not the log for the mount. Please check  
> /var/log/glusterfs/var-lib-mountedgluster.log on the system on which you are 
> running the mount process.
> 
> Please provide the volume config details as well (gluster volume info) from 
> one of the server nodes.
> 

Oh I'm sorry, I totally misread that - I didn't realise it was on the client.

Clarification for below logs:

- 'dev_static' is the gluster volume.
- 'int-kube-01' is the gluster client.
- '10.51.70.151' is the first node in a three-node (2 replica, 1 arbiter)
gluster cluster.
- '/var/lib/kubelet/./iss3dev-static' is a directory on the client that 
should be mounting '10.51.70.151:/dev_static/iss3dev-static', where 
'iss3dev-static' is a directory inside the gluster mount 'dev_static'.
- FYI only, we are essentially mounting directories inside gluster volumes as
per the Kubernetes example:
https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md
 


root@int-kube-01:/var/log/glusterfs  # tail -20 mnt.log

[2017-11-07 21:15:03.561470] I [MSGID: 100030] [glusterfsd.c:2524:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.2 
(args: /usr/sbin/glusterfs --volfile-server=10.51.70.151 
--volfile-id=/dev_static /mnt)
[2017-11-07 21:15:03.571205] W [MSGID: 101002] [options.c:995:xl_opt_validate] 
0-glusterfs: option 'address-family' is deprecated, preferred is 
'transport.address-family', continuing with correction
[2017-11-07 21:15:03.584098] I [MSGID: 101190] 
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1


No new log entries appear after executing the following to demonstrate the 
problem:


root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static
 # touch test
root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static
 # mkdir testdir2

root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static
  # ls -ltar
total 0
-rw-r--r--. 1 root root 0 Nov  6 10:10 test

root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static
  # cd testdir2
root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static/testdir2
  # ls -la
total 0

root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static/testdir2
  # cd ..
root@int-kube-01:/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static#
 ls -ltar
total 0
-rw-r--r--. 1 root root 0 Nov  6 10:10 test
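
In case it helps, here is a rough sketch of what I could run next to capture
more detail (untested so far, and assuming the usual debug knobs apply to this
setup):

# raise the client log level for the volume, from one of the server nodes
gluster volume set dev_static diagnostics.client-log-level DEBUG

# or remount the client with a higher log level
mount -t glusterfs -o log-level=DEBUG 10.51.70.151:/dev_static /mnt

# then repeat the touch/mkdir/ls test above and watch the client log
tail -f /var/log/glusterfs/mnt.log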


The mount entry looks like:

root@int-kube-01:~ # mount | grep iss3dev-static
10.51.70.151:/dev_static on 
/var/lib/kubelet/pods/434cba8e-bf87-11e7-8389-1aa903709357/volumes/kubernetes.io~glusterfs/iss3dev-static
 type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Re: [Gluster-users] Enabling Halo sets volume RO

2017-11-08 Thread Jon Cope
Thank you! That did the trick. 

For anyone else who encounters this, here are the settings that worked for me: 

cluster.quorum-type fixed 
cluster.quorum-count 2 
cluster.halo-enabled yes 
cluster.halo-min-replicas 2 
cluster.halo-max-latency 10 
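
Applied with the usual volume set commands, that looks roughly like this (a
sketch only; gv0 was the volume in my test setup, substitute your own volume
name):

gluster volume set gv0 cluster.quorum-type fixed
gluster volume set gv0 cluster.quorum-count 2
gluster volume set gv0 cluster.halo-enabled yes
gluster volume set gv0 cluster.halo-min-replicas 2
gluster volume set gv0 cluster.halo-max-latency 10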

- Original Message -

| From: "Mohammed Rafi K C" 
| To: "Jon Cope" , gluster-users@gluster.org
| Sent: Wednesday, November 8, 2017 3:34:07 AM
| Subject: Re: [Gluster-users] Enabling Halo sets volume RO

[Gluster-users] file shred

2017-11-08 Thread Kingsley Tart
Hi,

If we were to use shred to delete a file on a gluster volume, will the
correct blocks be overwritten on the bricks?

(We are still using Gluster 3.6.3, as we have been too cautious to upgrade a
mission-critical live system.)

Cheers,
Kingsley.



[Gluster-users] Adding a slack for communication?

2017-11-08 Thread Amye Scavarda
From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13

Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead


[Gluster-users] glusterfs brick server use too high memory

2017-11-08 Thread Yao Guotao
Hi all,
I'm glad to join the glusterfs community.


I have a glusterfs cluster:
Nodes: 4

System: Centos7.1
Glusterfs: 3.8.9
Each Node:
CPU: 48 core
Mem: 128GB
Disk: 1*4T


There is one Distributed-Replicated volume, and ~160 k8s pods connect to
glusterfs as clients. However, the memory usage of the glusterfsd process is
too high, gradually increasing to 100 GB on every node.
I then restart the glusterfsd process, but the memory grows back over
approximately a week.
How can I debug the problem?
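
If it would help, I can collect statedumps from the brick processes and share
them; a sketch of what I plan to run (assuming the standard statedump
mechanism is the right tool here, with <volname> as a placeholder):

gluster volume statedump <volname>
# the dumps should appear on each node, by default under /var/run/gluster/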


Thanks.

[Gluster-users] BLQ Gluster community meeting anyone?

2017-11-08 Thread Ivan Rossi
Hello community,

My company is willing to host a Gluster-community meeting in Bologna
(Italy) on March 8th 2018, back-to-back with Incontro Devops Italia (
http://2018.incontrodevops.it) and in the same venue as the conference.

I think that having 2-3 good technical talks, plus some BoFs/lightning
talks/open-space discussions, will make for a nice half-day event. It is
also probable that one or more of the devs will be on-site (they
half-promised...).

However, I would like to understand whether there is enough interest in the
community to warrant the effort.

What do you think about it? Anyone interested in attending/contributing?

Ivan

Re: [Gluster-users] BUG: After stop and start wrong port is advertised

2017-11-08 Thread Atin Mukherjee
We have a fix in the release-3.10 branch which is merged and should be
available in the next 3.10 update.

On Wed, Nov 8, 2017 at 4:58 PM, Mike Hulsman  wrote:

> Hi,
>
> This bug is hitting me hard on two different clients:
> in RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4.
> In one case I had 59 differences in a total of 203 bricks.
>
> I wrote a quick and dirty script to check all ports against the brick file
> and the running process.
> #!/bin/bash
>
> Host=`uname -n| awk -F"." '{print $1}'`
> GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk
> '{print $NF}'| awk -F"-server" '{print $1}'|sort | uniq`
> Port=`ps -eaf | grep /usr/sbin/glusterfsd| grep -v grep | awk '{print
> $NF}'| awk -F"." '{print $NF}'`
>
> for Volumes in ${GlusterVol};
> do
> cd /var/lib/glusterd/vols/${Volumes}/bricks
> Bricks=`ls ${Host}*`
> for Brick in ${Bricks};
> do
> Onfile=`grep ^listen-port "${Brick}"`
> BrickDir=`echo "${Brick}"| awk -F":" '{print $2}'| cut -c2-`
> Daemon=`ps -eaf | grep "\${BrickDir}.pid" |grep -v grep | awk '{print
> $NF}' | awk -F"." '{print $2}'`
> #echo Onfile: ${Onfile}
> #echo Daemon: ${Daemon}
> if [ "${Onfile}" = "${Daemon}" ]; then
> echo "OK For ${Brick}"
> else
> echo "!!! NOT OK For ${Brick}"
> fi
> done
> done
>
>
> Kind regards,
>
> Mike Hulsman
>
> Proxy Managed Services B.V. | www.proxy.nl | Enterprise IT-Infra, Open
> Source and Cloud Technology
> Delftweg 128 3043 NB Rotterdam The Netherlands
> 
> | +31 10 307 0965
>
> --
>
> *From: *"Jo Goossens" 
> *To: *"Atin Mukherjee" 
> *Cc: *gluster-users@gluster.org
> *Sent: *Friday, October 27, 2017 11:06:35 PM
> *Subject: *Re: [Gluster-users] BUG: After stop and start wrong port is
> advertised
>
> RE: [Gluster-users] BUG: After stop and start wrong port is advertised
>
> Hello Atin,
>
>
>
>
>
> I just read it and am very happy you found the issue. We really hope this
> will be fixed in the next 3.10.7 version!
>
>
>
>
>
> PS: Wow, nice, all that C code and those "goto out" statements (not always
> considered clean, but often the best way, I think). I can remember the days
> when I wrote kernel drivers myself in C :)
>
>
>
>
>
> Regards
>
> Jo Goossens
>
>
>
>
>
>
>
>
> -Original message-
> *From:* Atin Mukherjee 
> *Sent:* Fri 27-10-2017 21:01
> *Subject:* Re: [Gluster-users] BUG: After stop and start wrong port is
> advertised
> *To:* Jo Goossens ;
> *CC:* gluster-users@gluster.org;
> We (finally) figured out the root cause, Jo!
>
> Patch https://review.gluster.org/#/c/18579 posted upstream for review.
>
> On Thu, Sep 21, 2017 at 2:08 PM, Jo Goossens  > wrote:
>
> Hi,
>
>
>
>
>
> We use glusterfs 3.10.5 on Debian 9.
>
>
>
> When we stop or restart the service, e.g.: service glusterfs-server restart
>
>
>
> We see that the wrong port gets advertised afterwards. For example:
>
>
>
> Before restart:
>
>
> Status of volume: public
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.140.41:/gluster/public        49153     0          Y       6364
> Brick 192.168.140.42:/gluster/public        49152     0          Y       1483
> Brick 192.168.140.43:/gluster/public        49152     0          Y       5913
> Self-heal Daemon on localhost               N/A       N/A        Y       5932
> Self-heal Daemon on 192.168.140.42          N/A       N/A        Y       13084
> Self-heal Daemon on 192.168.140.41          N/A       N/A        Y       15499
>
> Task Status of Volume public
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> After restart of the service on one of the nodes (192.168.140.43) the port
> seems to have changed (but it didn't):
>
> root@app3:/var/log/glusterfs#  gluster volume status
> Status of volume: public
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.140.41:/gluster/public        49153     0          Y       6364
> Brick 192.168.140.42:/gluster/public        49152     0          Y       1483
> Brick 192.168.140.43:/gluster/public        49154     0          Y       5913
> Self-heal Daemon on localhost               N/A       N/A        Y       4628
> Self-heal Daemon on 192.168.140.42          N/A       N/A        Y       3077
> Self-heal Daemon on 192.168.140.41          N/A       N/A        Y       28777
>
> Task Status of Volume public
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> However the active process is STILL the same pid AND still listening on
> the old port
>
> 

[Gluster-users] Community Meeting 2017-11-08

2017-11-08 Thread Kaushal M
!!REMINDER!!
Community meeting is back after 4 weeks off.
Today's community meeting is scheduled about 3 hours from now, at 1500 UTC.

Please add any topics you want to discuss and any updates you want to
share with the community into the meeting pad at
https://bit.ly/gluster-community-meetings

See you in #gluster-meeting!

~kaushal


Re: [Gluster-users] Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.

2017-11-08 Thread Nithya Balachandran
On 8 November 2017 at 02:47, Sam McLeod  wrote:

>
> On 6 Nov 2017, at 3:32 pm, Laura Bailey  wrote:
>
> Do the users have permission to see/interact with the directories, in
> addition to the files?
>
>
> Yes, full access to directories and files.
> Also testing using the root user.
>
>
> On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran 
> wrote:
>
>> Hi,
>>
>> Please provide the gluster volume info. Do you see any errors in the
>> client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)?
>>
>
>
> root@int-gluster-01:/var/log/glusterfs  # grep 'dev_static' *.log|grep -v
> cmd_history
>
> glusterd.log:[2017-11-05 22:37:06.934787] W 
> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a)
> [0x7f5047169e5a] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8)
> [0x7f5047173dc8] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a)
> [0x7f504722a72a] ) 0-management: Lock for vol dev_static not held
> glusterd.log:[2017-11-05 22:37:06.934806] W [MSGID: 106118]
> [glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock
> not released for dev_static
> glusterd.log:[2017-11-05 22:39:49.924472] W 
> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a)
> [0x7fde97921e5a] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8)
> [0x7fde9792bdc8] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a)
> [0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
> glusterd.log:[2017-11-05 22:39:49.924494] W [MSGID: 106118]
> [glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock
> not released for dev_static
> glusterd.log:[2017-11-05 22:41:42.565123] W 
> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a)
> [0x7fde97921e5a] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8)
> [0x7fde9792bdc8] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a)
> [0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
> glusterd.log:[2017-11-05 22:41:42.565227] W [MSGID: 106118]
> [glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock
> not released for dev_static
> glusterd.log:[2017-11-05 22:42:06.931060] W 
> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a)
> [0x7fde97921e5a] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8)
> [0x7fde9792bdc8] 
> -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a)
> [0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
> glusterd.log:[2017-11-05 22:42:06.931090] W [MSGID: 106118]
> [glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock
> not released for dev_static
>
>
>>
That is not the log for the mount. Please check
 /var/log/glusterfs/var-lib-mountedgluster.log on the system on which you
are running the mount process.

Please provide the volume config details as well (gluster volume info) from
one of the server nodes.
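
For example, from any one of the server nodes (using the volume name from your
earlier mail):

gluster volume info dev_static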



>> Thanks,
>> Nithya
>>
>> On 6 November 2017 at 05:13, Sam McLeod  wrote:
>>
>>> We've got an issue with Gluster (3.12.x) where clients can't see
>>> directories that exist or are created within a mounted volume.
>>>
>>>
>>> We can see files, but not directories (with ls, find etc...)
>>> We can enter (cd) into directories, even if we can't see them.
>>>
>>> - Host typology is: 2 replica, 1 arbiter.
>>> - Volumes are: replicated and running on XFS on the hosts.
>>> - Clients are: GlusterFS native fuse client (mount.glusterfs), the same
>>> version and op-version as the hosts.
>>> - Gluster server and client version: 3.12.2 (also found on 3.12.1,
>>> unsure about previous versions) running on CentOS 7.
>>>
>>>
>>> Examples:
>>>
>>>
>>> mount:
>>> 192.168.0.151:/gluster_vol on /var/lib/mountedgluster type
>>> fuse.glusterfs (rw,relatime,user_id=0,group_i
>>> d=0,default_permissions,allow_other,max_read=131072)
>>>
>>> root@gluster-client:/var/lib/mountedgluster  # ls -la
>>> total 0
>>>
>>> (note no . or .. directories)
>>>
>>> root@gluster-client:/var/lib/mountedgluster  # touch test
>>> root@gluster-client:/var/lib/mountedgluster  # ls -la
>>> total 0
>>> -rw-r--r--. 1 root root 0 Nov  6 10:10 test
>>>
>>> ("test" file shows up. Still no . or .. directories.)
>>>
>>> root@gluster-client:/var/lib/mountedgluster  # mkdir testdir
>>> root@gluster-client:/var/lib/mountedgluster  # ls -la
>>> total 0
>>> -rw-r--r--. 1 root root 0 Nov  6 10:10 test
>>>
>>> (directory was made, but doesn't show in ls)
>>>
>>> root@gluster-client:/var/lib/mountedgluster  # cd testdir
>>> root@gluster-client:/var/lib/mountedgluster/testdir  # ls -la
>>> total 0
>>>
>>> (cd works, no . or .. shown in ls though)
>>>
>>> 

Re: [Gluster-users] Enabling Halo sets volume RO

2017-11-08 Thread Mohammed Rafi K C
I think the problem here is that quorum is getting in the way by default.
To get rid of this you can change the quorum type to fixed with a value of
2, or you can disable quorum.
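
For example (a rough sketch only; gv0 is the volume name from your setup below):

# option 1: fixed quorum of two bricks
gluster volume set gv0 cluster.quorum-type fixed
gluster volume set gv0 cluster.quorum-count 2

# option 2: disable client quorum entirely
gluster volume set gv0 cluster.quorum-type none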


Regards

Rafi KC


On 11/08/2017 04:03 AM, Jon Cope wrote:
> Hi all,
>
> I'm taking a stab at deploying a storage cluster to explore the Halo
> AFR feature and running into some trouble.  In GCE, I have 4
> instances, each with one 10gb brick.  2 instances are in the US and
> the other 2 are in Asia (with the hope that it will drive up latency
> sufficiently).  The bricks make up a Replica-4 volume.  Before I
> enable halo, I can mount to volume and r/w files.
>
> The issue is when I set the `cluster.halo-enabled yes`, I can no
> longer write to the volume:
>
> [root@jcope-rhs-g2fn vol]# touch /mnt/vol/test1
> touch: setting times of ‘test1’: Read-only file system
>
> This can be fixed by turning halo off again.  While halo is enabled
> and writes return the above message, the mount still shows it to be r/w:
>
> [root@jcope-rhs-g2fn vol]# mount
> gce-node1:gv0 on /mnt/vol type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
> Thanks in advance,
> -Jon
>
>
> Setup info
> CentOS Linux release 7.4.1708 (Core)
> 4 GCE Instances (2 US, 2 Asia)
> 1 10gb Brick/Instance
> replica 4 volume
>
> Packages:
>
> glusterfs-client-xlators-3.12.1-2.el7.x86_64
> glusterfs-cli-3.12.1-2.el7.x86_64
> python2-gluster-3.12.1-2.el7.x86_64
> glusterfs-3.12.1-2.el7.x86_64
> glusterfs-api-3.12.1-2.el7.x86_64
> glusterfs-fuse-3.12.1-2.el7.x86_64
> glusterfs-server-3.12.1-2.el7.x86_64
> glusterfs-libs-3.12.1-2.el7.x86_64
> glusterfs-geo-replication-3.12.1-2.el7.x86_64
>
>
>  
> Logs, beginning when halo is enabled:
>
> [2017-11-07 22:20:15.029298] W [MSGID: 101095]
> [xlator.c:213:xlator_dynload] 0-xlator:
> /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared
> object file: No such file or directory
> [2017-11-07 22:20:15.204241] W [MSGID: 101095]
> [xlator.c:162:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared
> object file: No such file or directory
> [2017-11-07 22:20:15.232176] I [MSGID: 106600]
> [glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management:
> nfs/server.so xlator is not installed
> [2017-11-07 22:20:15.235481] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad
> already stopped
> [2017-11-07 22:20:15.235512] I [MSGID: 106568]
> [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad
> service is stopped
> [2017-11-07 22:20:15.235572] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd
> already stopped
> [2017-11-07 22:20:15.235585] I [MSGID: 106568]
> [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service
> is stopped
> [2017-11-07 22:20:15.235638] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub
> already stopped
> [2017-11-07 22:20:15.235650] I [MSGID: 106568]
> [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub
> service is stopped
> [2017-11-07 22:20:15.250297] I [run.c:190:runner_log]
> (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a)
> [0x7fc23442117a]
> -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d)
> [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115)
> [0x7fc23f915da5] ) 0-management: Ran script: /var/lib
> /glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv0 -o
> cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd
> [2017-11-07 22:20:15.255777] I [run.c:190:runner_log]
> (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a)
> [0x7fc23442117a]
> -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d)
> [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115)
> [0x7fc23f915da5] ) 0-management: Ran script: /var/lib
> /glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
> --volname=gv0 -o cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd
> [2017-11-07 22:20:47.420098] W [MSGID: 101095]
> [xlator.c:213:xlator_dynload] 0-xlator:
> /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared
> object file: No such file or directory
> [2017-11-07 22:20:47.595960] W [MSGID: 101095]
> [xlator.c:162:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared
> object file: No such file or directory
> [2017-11-07 22:20:47.631833] I [MSGID: 106600]
> [glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management:
> nfs/server.so xlator is not installed
> [2017-11-07 22:20:47.635109] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad
> already stopped
> [2017-11-07 22:20:47.635136] I [MSGID: 106568]
> [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad
> service is stopped
> [2017-11-07 22:20:47.635201] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 

Re: [Gluster-users] Problem with getting restapi up

2017-11-08 Thread Aravinda
This project is no longer maintained, in favor of the Glusterd2 project. Let
me know if you need this fixed for use with the Gluster 3.x series.
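
From the traceback below, gunicorn cannot import the 'main' module from its
working directory. A quick check you could run (just a sketch, using the paths
from your config dump):

cd /usr/libexec/glusterfs/glusterrest
ls main.py                 # does the module gunicorn expects actually exist here?
python -c "import main"    # should reproduce the same ImportError if it does not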



On Tuesday 07 November 2017 03:21 PM, InterNetX - Juergen Gotteswinter 
wrote:

Hi,

I am currently struggling with the gluster restapi (not heketi); somehow I am
a bit stuck. During startup of the glusterrestd service it drops some Python
errors. Here is an error log output with increased log level.

Maybe someone can give me a hint on how to fix this.

-- snip --
[2017-11-07 10:29:04 +] [30982] [DEBUG] Current configuration:
   proxy_protocol: False
   worker_connections: 1000
   statsd_host: None
   max_requests_jitter: 0
   post_fork: 
   errorlog: /var/log/glusterrest/errors.log
   enable_stdio_inheritance: False
   worker_class: sync
   ssl_version: 2
   suppress_ragged_eofs: True
   syslog: False
   syslog_facility: user
   when_ready: 
   pre_fork: 
   cert_reqs: 0
   preload_app: False
   keepalive: 2
   accesslog: /var/log/glusterrest/access.log
   group: 0
   graceful_timeout: 30
   do_handshake_on_connect: False
   spew: False
   workers: 2
   proc_name: None
   sendfile: None
   pidfile: /var/run/glusterrest.pid
   umask: 0
   on_reload: 
   pre_exec: 
   worker_tmp_dir: None
   limit_request_fields: 100
   pythonpath: None
   on_exit: 
   config: /usr/local/etc/glusterrest/gunicorn_config.py
   logconfig: None
   check_config: False
   statsd_prefix:
   secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl',
'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
   reload_engine: auto
   proxy_allow_ips: ['127.0.0.1']
   pre_request: 
   post_request: 
   forwarded_allow_ips: ['127.0.0.1']
   worker_int: 
   raw_paste_global_conf: []
   threads: 1
   max_requests: 0
   chdir: /usr/libexec/glusterfs/glusterrest
   daemon: False
   user: 0
   limit_request_line: 4094
   access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s"
"%(a)s"
   certfile: None
   on_starting: 
   post_worker_init: 
   child_exit: 
   worker_exit: 
   paste: None
   default_proc_name: main:app
   syslog_addr: udp://localhost:514
   syslog_prefix: None
   ciphers: TLSv1
   worker_abort: 
   loglevel: debug
   bind: [':8080']
   raw_env: []
   initgroups: False
   capture_output: False
   reload: False
   limit_request_field_size: 8190
   nworkers_changed: 
   timeout: 30
   keyfile: None
   ca_certs: None
   tmp_upload_dir: None
   backlog: 2048
   logger_class: gunicorn.glogging.Logger
[2017-11-07 10:29:04 +] [30982] [INFO] Starting gunicorn 19.7.1
[2017-11-07 10:29:04 +] [30982] [DEBUG] Arbiter booted
[2017-11-07 10:29:04 +] [30982] [INFO] Listening at:
http://0.0.0.0:8080 (30982)
[2017-11-07 10:29:04 +] [30982] [INFO] Using worker: sync
[2017-11-07 10:29:04 +] [30991] [INFO] Booting worker with pid: 30991
[2017-11-07 10:29:04 +] [30991] [ERROR] Exception in worker process
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578,
in spawn_worker
 worker.init_process()
   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
126, in init_process
 self.load_wsgi()
   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
135, in load_wsgi
 self.wsgi = self.app.wsgi()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67,
in wsgi
 self.callable = self.load()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
65, in load
 return self.load_wsgiapp()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
52, in load_wsgiapp
 return util.import_app(self.app_uri)
   File "/usr/lib/python2.7/site-packages/gunicorn/util.py", line 352, in
import_app
 __import__(module)
ImportError: No module named main
[2017-11-07 10:29:04 +] [30991] [INFO] Worker exiting (pid: 30991)
[2017-11-07 10:29:04 +] [30982] [INFO] Shutting down: Master
[2017-11-07 10:29:04 +] [30993] [INFO] Booting worker with pid: 30993
[2017-11-07 10:29:04 +] [30982] [INFO] Reason: Worker failed to boot.
[2017-11-07 10:29:04 +] [30993] [ERROR] Exception in worker process
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578,
in spawn_worker
 worker.init_process()
   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
126, in init_process
 self.load_wsgi()
   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
135, in load_wsgi
 self.wsgi = self.app.wsgi()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67,
in wsgi
 self.callable = self.load()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
65, in load
 return self.load_wsgiapp()
   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
52, in load_wsgiapp
 return util.import_app(self.app_uri)
   File "/usr/lib/python2.7/site-packages/gunicorn/util.py", line 352, in
import_app
 

[Gluster-users] Gluster Summit BOF - Testing

2017-11-08 Thread Jonathan Holloway
Hi all,

We had a BoF about Upstream Testing and increasing coverage.

Discussion included:
 - More docs on using the gluster-specific libraries.
 - Templates, examples, and testcase scripts with common functionality as a 
jumping off point to create a new test script.
 - Reduce the number of systems required by existing libraries (but scale as 
needed). e.g., two instead of eight.
 - Providing scripts, etc. for leveraging Docker, Vagrant, virsh, etc. to 
easily create test environments on laptops, workstations, and servers.
 - Access to logs for Jenkins tests.
 - Access to systems for live debugging.
 - What do we test? Maybe we need to create upstream test plans.
 - Discussion here on gluster-users and updates in testing section of community 
meeting agenda.

Since we returned from Gluster Summit, some of these are already being worked
on. :-)

Thank you to all the birds of a feather that participated in the discussion!!!
Sweta, did I miss anything in that list?

Cheers,
Jonathan