Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-06-06 Thread ABHISHEK PALIWAL
I am still facing this issue; any suggestion? On Fri, May 27, 2016 at 10:48 AM, ABHISHEK PALIWAL wrote: > any hint from the logs? > > On Thu, May 26, 2016 at 11:59 AM, ABHISHEK PALIWAL < > abhishpali...@gmail.com> wrote: > >> On Thu, May 26, 2016 at 11:54 AM,

[Gluster-users] Weekly Community Meeting - 01/June/2016

2016-06-06 Thread Mohammed Rafi K C
The minutes for this week's meeting are available at Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-06-01/weekly_community_meeting_01june2016.2016-06-01-12.00.html Minutes (text) :

Re: [Gluster-users] snapshot removal failed on one node how to recover (3.7.11)

2016-06-06 Thread Alastair Neil
No one has any suggestions? Would this scenario I have been toying with work: remove the brick from the node with the out-of-sync snapshots, destroy all associated logical volumes, and then add the brick back as an arbiter node? On 1 June 2016 at 13:40, Alastair Neil
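For reference, a rough sketch of the commands such a recovery would involve, assuming a 3-way replica volume; the volume name, brick path and LV names below are placeholders, and converting an existing replica volume to arbiter via add-brick may require a newer release than 3.7.11:

===
# Sketch only, not a tested procedure; names are hypothetical.
# Drop the brick on the node whose snapshot LVs are out of sync
# (shrinking a replica set requires the new replica count and "force"):
gluster volume remove-brick myvol replica 2 badnode:/bricks/brick1/data force

# Remove the now-orphaned thin LVs on that node:
lvremove /dev/vg_bricks/brick1_lv

# Add the brick back as an arbiter (syntax assumes a release that supports
# converting an existing replica volume via add-brick):
gluster volume add-brick myvol replica 3 arbiter 1 badnode:/bricks/brick1/data
===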

Re: [Gluster-users] files on glusterfs disappears

2016-06-06 Thread Jeff Darcy
> This could be because of the nufa xlator. As you say the files are present on the > brick, I don't suspect RDMA here. Agreed. > Is nufa still supported? Could this be a bug in nufa + dht? Until we explicitly decide to stop building and distributing it, it's still "supported" in some sense, but only

Re: [Gluster-users] files on glusterfs disappears

2016-06-06 Thread Raghavendra Talur
This could be because of the nufa xlator. As you say the files are present on the brick, I don't suspect RDMA here. Raghavendra, is nufa still supported? Could this be a bug in nufa + dht? On 06-Jun-2016 10:29 PM, "Fedele Stabile" wrote: > I add some information about my
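One way to confirm whether nufa is actually loaded in the client graph (a sketch; the volume name "scratch" comes from the thread below, the volfile path is an assumption, and "gluster volume get" may not be available on older releases):

===
# Ask glusterd for the option value, if "volume get" is available:
gluster volume get scratch cluster.nufa

# Or look for the xlator in the generated FUSE client volfile
# (the exact filename varies with volume name and transport):
grep -B1 -A3 'cluster/nufa' /var/lib/glusterd/vols/scratch/*fuse*.vol
===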

Re: [Gluster-users] files on glusterfs disappears

2016-06-06 Thread Fedele Stabile
I add some information about my cluster:

[root@wn001 glusterfs]# gluster volume info

Volume Name: scratch
Type: Distribute
Volume ID: fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e
Status: Started
Number of Bricks: 32
Transport-type: rdma
Bricks:
Brick1: ib-wn001:/bricks/brick1/gscratch0
Brick2:

[Gluster-users] files on glusterfs disappears

2016-06-06 Thread Fedele Stabile
Good morning to all the community. I have a problem and I would like to ask for your kind help. It happens that newly written files from an MPI application that uses InfiniBand disappear from the GlusterFS mount, but they are present on a brick... Please, can anyone help me to solve this problem?
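A minimal way to narrow the symptom down, assuming a FUSE mount at /mnt/scratch and the brick path from the volume info above; the directory and file names are hypothetical:

===
# Does the client see the file at all?
ls -l /mnt/scratch/results/
stat /mnt/scratch/results/output.dat   # a named lookup on the exact path can
                                       # reveal a file that readdir misses

# Is the file really sitting on one of the bricks?
ls -l /bricks/brick1/gscratch0/results/
===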

Re: [Gluster-users] implementation of RPC in glusterFS

2016-06-06 Thread 袁仲
Thanks for your reply. On Sun, Jun 5, 2016 at 10:42 PM, Kaleb Keithley wrote: > > - Original Message - > > From: "袁仲" > > > I have checked the source code of GlusterFS; the communication between cli > and > > glusterd, glusterd and

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
Also, I see lots of entries in the pmap output:

===
7ef9ff8f3000      4K -----   [ anon ]
7ef9ff8f4000   8192K rw---   [ anon ]
7efa000f4000      4K -----   [ anon ]
7efa000f5000   8192K rw---   [ anon ]
===

If I sum them, I get the following:

===
# pmap 15109 | grep '[ anon ]' |
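For reference, a sketch of one way to total those anonymous mappings (the awk field numbers assume the default pmap output format):

===
pmap 15109 | awk '/anon/ { sum += $2 } END { print sum "K in [ anon ] mappings" }'
===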

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
I believe multi-threaded SHD has not been merged into the 3.7 branch up to and including 3.7.11, because I've found this [1]. [1] https://www.gluster.org/pipermail/maintainers/2016-April/000628.html On 06.06.2016 12:21, Kaushal M wrote: Has multi-threaded SHD been merged into 3.7.* by any

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Kaushal M
Has multi-threaded SHD been merged into 3.7.* by any chance? If not, what I'm saying below doesn't apply. We saw problems when encrypted transports were used, because the RPC layer was not reaping threads (doing pthread_join) when a connection ended. This led to similar observations of huge VIRT
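A quick check for this on the affected node (a sketch; PID 15109 is taken from the ps output below, and 8192K assumes the default 8 MiB pthread stack size):

===
# Compare live threads with 8 MiB anonymous mappings:
echo "live threads:    $(ls /proc/15109/task | wc -l)"
echo "8192K anon maps: $(pmap 15109 | grep -c '8192K')"
# Far more 8192K mappings than live threads would be consistent with
# thread stacks that were never released via pthread_join().
===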

[Gluster-users] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
Hello. We use v3.7.11, a replica 2 setup between 2 nodes + 1 dummy node for keeping volume metadata. Now we observe huge VSZ (VIRT) usage by glustershd on the dummy node:

===
root 15109 0.0 13.7 76552820 535272 ? Ssl May26 2:11 /usr/sbin/glusterfs -s localhost --volfile-id