Re: [Gluster-devel] [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
On 19 April 2016 at 16:50, Kaushal M  wrote:
> I'm pleased to announce the release of GlusterFS version 3.7.11.


Installed and running quite smoothly here, thanks.

-- 
Lindsay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] High CPU consumption on Windows guest OS when libgfapi is used

2016-04-19 Thread Vijay Bellur
On Tue, Apr 19, 2016 at 1:24 AM, qingwei wei  wrote:
> Hi Vijay,
>
> I reran the test with gluster 3.7.11 and found that the CPU utilization is still
> high when I use libgfapi. The write performance is also not good.
>
> Below are the info and results:
>
> Hypervisor host:
> libvirt 1.2.17-13.el7_2.4
> qemu-kvm 2.1.2-23.el7.1
>
>
> Windows VM:
> Windows 2008R2
> IOmeter
>
> Ubuntu VM:
> Ubuntu 14.04
> fio 2.1.3
>
>
> Windows VM (libgfapi)
>
> 4k random read
>
> IOPS 22920.76
> Avg. response time (ms) 0.65
> CPU utilization total (%) 82.02
> CPU Privileged time (%) 77.76
>
> 4k random write
>
> IOPS 4526.39
> Avg. response time (ms) 3.26
> CPU utilization total (%) 93.61
> CPU Privileged time (%) 90.24
>
> Windows VM (fuse)
>
> 4k random read
>
> IOPS 14662.86
> Avg. response time (ms) 1.08
> CPU utilization total (%) 27.66
> CPU Privileged time (%) 24.45
>
> 4k random write
>
> IOPS 16911.66
> Avg. response time (ms) 0.94
> CPU utilization total (%) 26.74
> CPU Privileged time (%) 22.64
>
> Ubuntu VM (libgfapi)
>
> 4k random read
>
> IOPS 34364
> Avg. response time (ms) 0.46
> CPU utilization total (%) 6.09
>
> 4k random write
>
> IOPS 4531
> Avg. response time (ms) 3.53
> CPU utilization total (%) 1.2
>
> Ubuntu VM (fuse)
>
> 4k random read
>
> IOPS 17341
> Avg. response time (ms) 0.92
> CPU utilization total (%) 4.22
>
> 4k random write
>
> IOPS 17611
> Avg. response time (ms) 0.91
> CPU utilization total (%) 4.65
>
> Any comments on this, or things I should try?
>

Can you please share your gluster volume configuration? It might be
worth checking if the tunables in profile virt
(extras/group-virt.example) are applied on this volume.

Additionally, I would also try to:
1. disable write-behind in gluster to see if there is any performance
difference for writes
2. use perf record -p <pid> followed by perf annotate to observe the
hot threads.
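
(A minimal sketch of the suggestions above — the volume name and PID are
placeholders:)

# check which options are currently set on the volume
gluster volume info <volname>

# 1. toggle write-behind off, re-run the write test, then re-enable it
gluster volume set <volname> performance.write-behind off
gluster volume set <volname> performance.write-behind on

# 2. profile the busy process and inspect the hot code paths
perf record -g -p <pid> -- sleep 30
perf report
perf annotate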

HTH,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Core generated by trash.t

2016-04-19 Thread Atin Mukherjee
Regression run [1] failed on trash.t; however, the run itself doesn't mention
any core file. But when I run the test, both with and without my changes, it
generates a core.

[1]
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15971/consoleFull

~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: disperse volume file to subvolume mapping

2016-04-19 Thread Serkan Çoban
Hi, I just reinstalled a fresh 3.7.11 and I am seeing the same behavior.
50 clients are copying files named part-0- to gluster using mapreduce,
with one thread per server, and they are using only 20 servers out of
60. On the other hand, fio tests use all the servers. Anything I can do
to solve the issue?

Thanks,
Serkan


-- Forwarded message --
From: Serkan Çoban 
Date: Mon, Apr 18, 2016 at 2:39 PM
Subject: disperse volume file to subvolume mapping
To: Gluster Users 


Hi, I have a problem where clients are using only 1/3 of the nodes in a
disperse volume for writing.
I am testing from 50 clients using 1 to 10 threads with file names part-0-.
What I see is that clients only use 20 nodes for writing. How is the file
name to subvolume hashing done? Is this related to the file names being
similar?

My cluster is 3.7.10 with 60 nodes, each with 26 disks. The disperse volume
is 78 x (16+4). Only 26 out of 78 subvolumes are used during writes.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] disperse volume file to subvolume mapping

2016-04-19 Thread Serkan Çoban
I am copying 10,000 files to the gluster volume using mapreduce on the
clients. Each map process takes one file at a time and copies it to the
gluster volume.
My disperse volume consists of 78 subvolumes of 16+4 disks each. So if I
copy more than 78 files in parallel, I expect each file to go to a
different subvolume, right?
In my tests with fio I can see every file goes to a different subvolume,
but when I start the mapreduce process from the clients, only 78/3 = 26
subvolumes are used for writing files.
I can see that clearly from the network traffic. The mapreduce on the
client side can run multi-threaded; I tested with 1, 5 and 10 threads on
each client, but every time only 26 subvolumes were used.
How can I debug the issue further?

On Tue, Apr 19, 2016 at 11:22 AM, Xavier Hernandez
 wrote:
> Hi Serkan,
>
> On 19/04/16 09:18, Serkan Çoban wrote:
>>
>> Hi, I just reinstalled fresh 3.7.11 and I am seeing the same behavior.
>> 50 clients copying part-0- named files using mapreduce to gluster
>> using one thread per server and they are using only 20 servers out of
>> 60. On the other hand fio tests use all the servers. Anything I can do
>> to solve the issue?
>
>
> Distribution of files to ec sets is done by dht. In theory if you create
> many files each ec set will receive the same amount of files. However when
> the number of files is small enough, statistics can fail.
>
> Not sure what you are doing exactly, but a mapreduce procedure generally
> only creates a single output. In that case it makes sense that only one ec
> set is used. If you want to use all ec sets for a single file, you should
> enable sharding (I haven't tested that) or split the result in multiple
> files.
>
> Xavi
>
>
>>
>> Thanks,
>> Serkan
>>
>>
>> -- Forwarded message --
>> From: Serkan Çoban 
>> Date: Mon, Apr 18, 2016 at 2:39 PM
>> Subject: disperse volume file to subvolume mapping
>> To: Gluster Users 
>>
>>
>> Hi, I have a problem where clients are using only 1/3 of nodes in
>> disperse volume for writing.
>> I am testing from 50 clients using 1 to 10 threads with file names
>> part-0-.
>> What I see is clients only use 20 nodes for writing. How is the file
>> name to sub volume hashing is done? Is this related to file names are
>> similar?
>>
>> My cluster is 3.7.10 with 60 nodes each has 26 disks. Disperse volume
>> is 78 x (16+4). Only 26 out of 78 sub volumes used during writes..
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-19 Thread Ashish Pandey
Hi Serkan, 

I have gone through the logs and can see there are some blocked inode lock
requests.
We have observed that some other users have also faced this issue with
similar logs.
I think you have tried a rolling update on your setup, or some nodes on which
you have collected these statedumps must have gone down for one reason or
another.

We will dig into this further and try to find out the root cause. Till then,
you can resolve this issue by restarting the volume, which will restart nfs
and shd and release any locks taken by these processes.

"gluster volume start <volname> force" will do the same.

Regards, 
Ashish 


- Original Message -

From: "Serkan Çoban"  
To: "Ashish Pandey"  
Cc: "Gluster Users" , "Gluster Devel" 
 
Sent: Monday, April 18, 2016 11:51:37 AM 
Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size 

You can find the statedumps of server and client in below link. 
Gluster version is 3.7.10, 78x(16+4) disperse setup. 60 nodes named 
node185..node244 
https://www.dropbox.com/s/cc2dgsxwuk48mba/gluster_statedumps.zip?dl=0 


On Fri, Apr 15, 2016 at 9:52 PM, Ashish Pandey  wrote: 
> 
> Actually it was my mistake I overlooked the configuration you provided..It 
> will be huge. 
> I would suggest to take statedump on all the nodes and try to grep for 
> "BLOCKED" in statedump files on all the nodes. 
> See if you can see any such line in any file and send those files. No need 
> to send statedump of all the bricks.. 
> 
> 
> 
> 
>  
> From: "Serkan Çoban"  
> To: "Ashish Pandey"  
> Cc: "Gluster Users" , "Gluster Devel" 
>  
> Sent: Friday, April 15, 2016 6:07:00 PM 
> 
> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size 
> 
> Hi Asish, 
> 
> Sorry for the question but do you want all brick statedumps from all 
> servers or all brick dumps from one server? 
> All server brick dumps is nearly 700MB zipped.. 
> 
> On Fri, Apr 15, 2016 at 2:16 PM, Ashish Pandey  wrote: 
>> 
>> To get the state dump of fuse client- 
>> 1 - get the PID of fuse mount process 
>> 2 - kill -USR1  
>> 
>> statedump can be found in the same directory where u get for brick 
>> process. 
>> 
>> Following link could be helpful for future reference - 
>> 
>> https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md 
>> 
>> Ashish 
>> 
>>  
>> From: "Serkan Çoban"  
>> To: "Ashish Pandey"  
>> Cc: "Gluster Users" , "Gluster Devel" 
>>  
>> Sent: Friday, April 15, 2016 4:02:20 PM 
>> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size 
>> 
>> Yes it is only one brick which error appears. I can send all other 
>> brick dumps too.. 
>> How can I get state dump in fuse client? There is no gluster command 
>> there.. 
>> ___ 
>> Gluster-users mailing list 
>> gluster-us...@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users 
>> 
> ___ 
> Gluster-users mailing list 
> gluster-us...@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-users 
> 
___ 
Gluster-users mailing list 
gluster-us...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-19 Thread Serkan Çoban
I did only one upgrade, from 3.7.9 to 3.7.10, and it was not a rolling
upgrade: I stopped the volume and then upgraded all the components.
I will try restarting the volume and see if it helps.

On Mon, Apr 18, 2016 at 10:17 AM, Ashish Pandey  wrote:
> Hi Serkan,
>
> I have gone through the logs and can see there are some blocked inode lock
> requests.
> We have observed that some other user have also faced this issue with
> similar logs.
> I think you  have tried some rolling update on your setup or some NODES , on
> which you have collected these statedumps, must have gone down for one or
> other reason.
>
> We will further dig it up and will try to find out the root cause. Till than
> you can resolve this issue by restarting the volume which will restart nfs
> and shd and will release any locks taken by these process.
>
> "gluster volume start  force" will do the same.
>
> Regards,
> Ashish
>
>
> 
> From: "Serkan Çoban" 
> To: "Ashish Pandey" 
> Cc: "Gluster Users" , "Gluster Devel"
> 
> Sent: Monday, April 18, 2016 11:51:37 AM
>
> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size
>
> You can find the statedumps of server and client in below link.
> Gluster version is 3.7.10, 78x(16+4) disperse setup. 60 nodes named
> node185..node244
> https://www.dropbox.com/s/cc2dgsxwuk48mba/gluster_statedumps.zip?dl=0
>
>
> On Fri, Apr 15, 2016 at 9:52 PM, Ashish Pandey  wrote:
>>
>> Actually it was my mistake I overlooked the configuration you provided..It
>> will be huge.
>> I would suggest to take statedump on all the nodes and try to grep for
>> "BLOCKED" in statedump files on all the nodes.
>> See if you can see any such line in any file and send those files. No need
>> to send statedump of all the bricks..
>>
>>
>>
>>
>> 
>> From: "Serkan Çoban" 
>> To: "Ashish Pandey" 
>> Cc: "Gluster Users" , "Gluster Devel"
>> 
>> Sent: Friday, April 15, 2016 6:07:00 PM
>>
>> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size
>>
>> Hi Asish,
>>
>> Sorry for the question but do you want all brick statedumps from all
>> servers or all brick dumps from one server?
>> All server brick dumps is nearly 700MB zipped..
>>
>> On Fri, Apr 15, 2016 at 2:16 PM, Ashish Pandey 
>> wrote:
>>>
>>> To get the state dump of fuse client-
>>> 1 - get the PID of fuse mount process
>>> 2 - kill -USR1 
>>>
>>> statedump can be found in the same directory where u get for brick
>>> process.
>>>
>>> Following link could be helpful for future reference -
>>>
>>>
>>> https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md
>>>
>>> Ashish
>>>
>>> 
>>> From: "Serkan Çoban" 
>>> To: "Ashish Pandey" 
>>> Cc: "Gluster Users" , "Gluster Devel"
>>> 
>>> Sent: Friday, April 15, 2016 4:02:20 PM
>>> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size
>>>
>>> Yes it is only one brick which error appears. I can send all other
>>> brick dumps too..
>>> How can I get state dump in fuse client? There is no gluster command
>>> there..
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-19 Thread Serkan Çoban
You can find the statedumps of server and client in below link.
Gluster version is 3.7.10, 78x(16+4) disperse setup. 60 nodes named
node185..node244
https://www.dropbox.com/s/cc2dgsxwuk48mba/gluster_statedumps.zip?dl=0


On Fri, Apr 15, 2016 at 9:52 PM, Ashish Pandey  wrote:
>
> Actually it was my mistake I overlooked the configuration you provided..It
> will be huge.
> I would suggest to take statedump on all the nodes and try to grep for
> "BLOCKED" in statedump files on all the nodes.
> See if you can see any such line in any file and send those files. No need
> to send statedump of all the bricks..
>
>
>
>
> 
> From: "Serkan Çoban" 
> To: "Ashish Pandey" 
> Cc: "Gluster Users" , "Gluster Devel"
> 
> Sent: Friday, April 15, 2016 6:07:00 PM
>
> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size
>
> Hi Asish,
>
> Sorry for the question but do you want all brick statedumps from all
> servers or all brick dumps from one server?
> All server brick dumps is nearly 700MB zipped..
>
> On Fri, Apr 15, 2016 at 2:16 PM, Ashish Pandey  wrote:
>>
>> To get the state dump of fuse client-
>> 1 - get the PID of fuse mount process
>> 2 - kill -USR1 
>>
>> statedump can be found in the same directory where u get for brick
>> process.
>>
>> Following link could be helpful for future reference -
>>
>> https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md
>>
>> Ashish
>>
>> 
>> From: "Serkan Çoban" 
>> To: "Ashish Pandey" 
>> Cc: "Gluster Users" , "Gluster Devel"
>> 
>> Sent: Friday, April 15, 2016 4:02:20 PM
>> Subject: Re: [Gluster-users] Assertion failed: ec_get_inode_size
>>
>> Yes it is only one brick which error appears. I can send all other
>> brick dumps too..
>> How can I get state dump in fuse client? There is no gluster command
>> there..
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] NetBSD FUSE and filehandles

2016-04-19 Thread Emmanuel Dreyfus
On Tue, Apr 19, 2016 at 04:25:07PM +0200, Csaba Henk wrote:
> I also have a vague memory that in Linux VFS the file operations
> are dispatched to file objects in quite a pure oop manner (which
> suggests itself to practices like "storing the file handle identifier
> along with the file object"), while in traditional BSD VFS the file
> ops just get the vnode (from which modernization efforts departed
> to various degree across the recent BSD variants).

Yes, NetBSD VFS has no clue about the upper representation of the file;
it just has a reference to the vnode. That one will be difficult to
implement.

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD FUSE and filehandles

2016-04-19 Thread Csaba Henk
Hi Manu,

My memories of FUSE internals are a bit rusty, but I try
to give a usable answer.

Your description is essentially correct, but some of it
needs to be addressed more carefully. The Linux kernel's
VFS internally maintains file objects. They are tied to a given
call of open(2). The file handle is a user-defined entity and
as such can be unique to each open call or shared between some
of them (e.g. opens dispatched to the same backing resource,
or a completely stateless file system might choose it to be a
constant). Anyway, the kernel will store the file handle
identifier along with the file object, so if you choose the
handles to be unique per open call, they will be mapped
one-to-one to the in-kernel file objects.

However, (pid, fd) won't be mapped one-to-one to the in-kernel file
objects, because a process with pid1 might fork a process with pid2,
whereby open files are inherited, so (pid1, fd) and (pid2, fd) will be
associated with the same file object.
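
(A minimal C sketch of that fork case, reusing the /mnt/foo path from Manu's
example, just to illustrate why (pid, fd) cannot serve as a unique key:)

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* one open(2) call -> one in-kernel file object -> one FUSE filehandle */
    int fd = open("/mnt/foo", O_RDWR);

    if (fork() == 0) {
        /* child: same fd number, same in-kernel file object (and hence the
           same filehandle), but a different pid */
        write(fd, "child\n", 6);
    } else {
        /* parent: (pid1, fd) and (pid2, fd) still point to the one shared
           file object, so they even share the file offset */
        write(fd, "parent\n", 7);
    }
    return 0;
}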

I hope I did not make a mistake in this recollection.

I also have a vague memory that in Linux VFS the file operations
are dispatched to file objects in quite a pure oop manner (which
suggests itself to practices like "storing the file handle identifier
along with the file object"), while in traditional BSD VFS the file
ops just get the vnode (from which modernization efforts departed
to various degree across the recent BSD variants).

Csaba

On Mon, Apr 4, 2016 at 6:43 AM, Emmanuel Dreyfus  wrote:
> Hi
>
> Anoop C S asked me about a NetBSD FUSE bug that prevented mandatory
> locks to properly work. In order to work on a fix, I need confirmation
> about how it works on Linux, which is the reference implementation of
> FUSE.
>
> Here is what I understand, please tell me if there is something wrong:
>
> Each time a process opens a file within the FUSE filesystem, the kernel
> will call the FUSE open method, and the filesystem shall return a
> filehandle. For subsequent operations on the open file descriptor, the
> kernel will include the adequate filehandle in the FUSE requests.
>
> The filehandle is tied to the pair (calling process, file descriptor
> within the calling process). Each time the calling process calls open again
> on the same file, a new file handle is returned. This means the following
> creates two distinct file handles:
>
> fd1 = open("/mnt/foo", O_RDWR);
> fd2 = open("/mnt/foo", O_RDWR);
>
> Is all of that correct? If it is, here is the problem I face now: the
> NetBSD kernel implements PUFFS, an interface similar to but incompatible
> with FUSE that was developed before FUSE became the de-facto standard.
> I have been maintaining the PUFFS to FUSE compatibility layer we use to
> run Glusterfs on NetBSD.
>
> PUFFS sends userland requests about vnode operations; the userland
> filesystem gets a reference to the vnode, and it can also get the calling
> process PID, but currently the file descriptor within the calling process
> is not provided to the userland filesystem.
>
> If my understanding of FUSE filehandle semantics is correct, that means
> I will have to modify the PUFFS interface so that operations on open
> files get a reference to the file descriptor within the calling process,
> since this is a requirement to retrieve the appropriate filehandle for
> FUSE.
>
> Anyone can confirm?
>
> --
> Emmanuel Dreyfus
> http://hcpnet.free.fr/pubz
> m...@netbsd.org
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] freebsd-smoke failures

2016-04-19 Thread Michael Scherer
Le mardi 19 avril 2016 à 09:58 -0400, Jeff Darcy a écrit :
> > So can a workable solution be pushed to git, because I plan to force the
> > checkout to be like git, and it will break again (and this time, no
> > workaround will be possible).
> > 
> 
> It has been pushed to git, but AFAICT pull requests for that repo go into
> a black hole.

Indeed, I even made the same PR twice:
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/12

I guess no one is officially taking ownership of it, which is kinda bad,
but can be solved.

No volunteer ?

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-infra] freebsd-smoke failures

2016-04-19 Thread Jeff Darcy
> So can a workable solution be pushed to git, because I plan to force the
> checkout to be like git, and it will break again (and this time, no
> workaround will be possible).
> 

It has been pushed to git, but AFAICT pull requests for that repo go into
a black hole.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Minutes from todays Gluster Community Bug Triage meeting (2016-04-19)

2016-04-19 Thread Saravanakumar Arumugam

Hi,
Thanks for the participation.  Please find meeting summary below.

Meeting ended Tue Apr 19 12:58:58 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.log.html


Meeting started by Saravanakmr at 12:00:36 UTC (full logs linked above).


Meeting summary

1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
   (Saravanakmr, 12:01:01)

2. *Roll Call* (Saravanakmr, 12:01:13)

3. *msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more permissions*
   (Saravanakmr, 12:07:44)
   ACTION: msvbhat will look into lalatenduM's automated Coverity setup
   in Jenkins which need assistance from an admin with more permissions
   (Saravanakmr, 12:09:08)

4. *ndevos need to decide on how to provide/use debug builds*
   (Saravanakmr, 12:09:31)
   ACTION: ndevos need to decide on how to provide/use debug builds
   (Saravanakmr, 12:10:13)

5. *Manikandan to followup with kashlm to get access to gluster-infra*
   (Saravanakmr, 12:10:33)
   ACTION: Manikandan to followup with kashlm to get access to
   gluster-infra (Saravanakmr, 12:11:44)
   ACTION: Manikandan and Nandaja will update on bug automation
   (Saravanakmr, 12:11:54)

6. *msvbhat provide a simple step/walk-through on how to provide
   testcases for the nightly rpm tests* (Saravanakmr, 12:12:08)
   ACTION: msvbhat provide a simple step/walk-through on how to provide
   testcases for the nightly rpm tests (Saravanakmr, 12:12:18)
   ACTION: ndevos to propose some test-cases for minimal libgfapi test
   (Saravanakmr, 12:12:27)

7. *rafi needs to followup on #bug 1323895* (Saravanakmr, 12:12:36)
   ACTION: rafi needs to followup on #bug 1323895 (Saravanakmr, 12:14:04)

8. *need to discuss about writing a script to update bug assignee from
   gerrit patch* (Saravanakmr, 12:14:27)
   ACTION: ndevos need to discuss about writing a script to update bug
   assignee from gerrit patch (Saravanakmr, 12:18:29)

9. *hari to send a request asking developers to setup notification for
   bugs being filed* (Saravanakmr, 12:18:52)
   http://www.spinics.net/lists/gluster-devel/msg19169.html
   (Saravanakmr, 12:22:20)

10. *Group Triage* (Saravanakmr


Re: [Gluster-devel] [Gluster-infra] freebsd-smoke failures

2016-04-19 Thread Michael Scherer
Le samedi 02 avril 2016 à 07:53 -0400, Jeff Darcy a écrit :
> > IIRC, this happens because in the build job use "--enable-bd-xlator"
> > option while configure
> 
> I came to the same conclusion, and set --enable-bd-xlator=no on the
> slave.  I also had to remove -Werror because that was also causing
> failures.  FreeBSD smoke is now succeeding.

But direct modification of the slave broke the automated deployment.

So can a workable solution be pushed to git? I plan to force the
checkout to match git, and then it will break again (and this time, no
workaround will be possible).


-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Mandatory Locks support for GlusterFS

2016-04-19 Thread Anoop C S
On Wed, 2015-12-02 at 06:29 +0530, Anoop C S wrote:
> Hi all,
> 
> As part of preparing GlusterFS to cope with Multi-protocol
> environment,
> it is necessary to have mandatory locks support within the file
> system
> with respect to its integration to protocols like NFS or SMB. This
> will
> allow clients which require mandatory locks to apply for the same
> instead of the default advisory mode locks. GlusterFS native clients
> can still live with their current advisory lock semantics. Apart from
> linux-style mandatory locks, which is enforced based on file mode
> bits,
> this feature enables us to force the entire volume to either strictly
> follow mandatory lock semantics or to have a mixed environment where
> both advisory and mandatory locks exists on their own. Please
> see/review [1] for detailed design doc and add your
> suggestions/comments as early as possible.
> 
> [1] http://review.gluster.org/#/c/12014/
> 

Updated the design doc with explanation on nature of proposed change.

> Thanks,
> --Anoop C S.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 3 hours)

2016-04-19 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] disperse volume file to subvolume mapping

2016-04-19 Thread Xavier Hernandez

Hi Serkan,

On 19/04/16 09:18, Serkan Çoban wrote:

Hi, I just reinstalled fresh 3.7.11 and I am seeing the same behavior.
50 clients copying part-0- named files using mapreduce to gluster
using one thread per server and they are using only 20 servers out of
60. On the other hand fio tests use all the servers. Anything I can do
to solve the issue?


Distribution of files to ec sets is done by DHT. In theory, if you create
many files, each ec set will receive the same number of files. However,
when the number of files is small enough, statistics can fail.


Not sure what you are doing exactly, but a mapreduce procedure generally
only creates a single output. In that case it makes sense that only one
ec set is used. If you want to use all ec sets for a single file, you
should enable sharding (I haven't tested that) or split the result into
multiple files.
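
(If you want to try sharding, a minimal sketch — the volume name and block
size below are only placeholders:)

gluster volume set <volname> features.shard on
gluster volume set <volname> features.shard-block-size 64MB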


Xavi



Thanks,
Serkan


-- Forwarded message --
From: Serkan Çoban 
Date: Mon, Apr 18, 2016 at 2:39 PM
Subject: disperse volume file to subvolume mapping
To: Gluster Users 


Hi, I have a problem where clients are using only 1/3 of nodes in
disperse volume for writing.
I am testing from 50 clients using 1 to 10 threads with file names part-0-.
What I see is clients only use 20 nodes for writing. How is the file
name to sub volume hashing is done? Is this related to file names are
similar?

My cluster is 3.7.10 with 60 nodes each has 26 disks. Disperse volume
is 78 x (16+4). Only 26 out of 78 sub volumes used during writes..


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
Cool - thanks!

On 19 April 2016 at 16:50, Kaushal M  wrote:
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.


I think
   http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/

is still pointing to 3.7.10

-- 
Lindsay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Brick Offline after reboot!!

2016-04-19 Thread ABHISHEK PALIWAL
Hi Atin,

Thanks.

I have a few more doubts here.

The brick and glusterd are connected by a unix domain socket. It is just a
local socket, so why does it show as disconnected in the logs below:

 1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
[glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
Brick 10.32.1.144:/opt/lvmdir/c2/brick has disconnected from
glusterd.
 1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
[glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped


Regards,
Abhishek

On Fri, Apr 15, 2016 at 9:14 AM, Atin Mukherjee  wrote:

>
>
> On 04/14/2016 04:07 PM, ABHISHEK PALIWAL wrote:
> >
> >
> > On Thu, Apr 14, 2016 at 2:33 PM, Atin Mukherjee  > > wrote:
> >
> >
> >
> > On 04/05/2016 03:35 PM, ABHISHEK PALIWAL wrote:
> > >
> > >
> > > On Tue, Apr 5, 2016 at 2:22 PM, Atin Mukherjee <
> amukh...@redhat.com 
> > > >> wrote:
> > >
> > >
> > >
> > > On 04/05/2016 01:04 PM, ABHISHEK PALIWAL wrote:
> > > > Hi Team,
> > > >
> > > > We are using Gluster 3.7.6 and facing one problem in which
> > brick is not
> > > > comming online after restart the board.
> > > >
> > > > To understand our setup, please look the following steps:
> > > > 1. We have two boards A and B on which Gluster volume is
> > running in
> > > > replicated mode having one brick on each board.
> > > > 2. Gluster mount point is present on the Board A which is
> > sharable
> > > > between number of processes.
> > > > 3. Till now our volume is in sync and everthing is working
> fine.
> > > > 4. Now we have test case in which we'll stop the glusterd,
> > reboot the
> > > > Board B and when this board comes up, starts the glusterd
> > again on it.
> > > > 5. We repeated Steps 4 multiple times to check the
> > reliability of system.
> > > > 6. After the Step 4, sometimes system comes in working state
> > (i.e. in
> > > > sync) but sometime we faces that brick of Board B is present
> in
> > > > “gluster volume status” command but not be online even
> > waiting for
> > > > more than a minute.
> > > As I mentioned in another email thread until and unless the
> > log shows
> > > the evidence that there was a reboot nothing can be concluded.
> > The last
> > > log what you shared with us few days back didn't give any
> > indication
> > > that brick process wasn't running.
> > >
> > > How can we identify that the brick process is running in brick
> logs?
> > >
> > > > 7. When the Step 4 is executing at the same time on Board A
> some
> > > > processes are started accessing the files from the Gluster
> > mount point.
> > > >
> > > > As a solution to make this brick online, we found some
> > existing issues
> > > > in gluster mailing list giving suggestion to use “gluster
> > volume start
> > > >  force” to make the brick 'offline' to 'online'.
> > > >
> > > > If we use “gluster volume start  force” command.
> > It will kill
> > > > the existing volume process and started the new process then
> > what will
> > > > happen if other processes are accessing the same volume at
> > the time when
> > > > volume process is killed by this command internally. Will it
> > impact any
> > > > failure on these processes?
> > > This is not true, volume start force will start the brick
> > processes only
> > > if they are not running. Running brick processes will not be
> > > interrupted.
> > >
> > > we have tried and check the pid of process before force start and
> > after
> > > force start.
> > > the pid has been changed after force start.
> > >
> > > Please find the logs at the time of failure attached once again
> with
> > > log-level=debug.
> > >
> > > if you can give me the exact line where you are able to find out
> that
> > > the brick process
> > > is running in brick log file please give me the line number of
> > that file.
> >
> > Here is the sequence at which glusterd and respective brick process
> is
> > restarted.
> >
> > 1. glusterd restart trigger - line number 1014 in glusterd.log file:
> >
> > [2016-04-03 10:12:29.051735] I [MSGID: 100030]
> [glusterfsd.c:2318:main]
> > 0-/usr/sbin/glusterd: Started running /usr/sbin/
> glusterd
> > version 3.7.6 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid
> > --log-level DEBUG)
> >
> > 2. brick start trigger - line number 190 in opt-lvmdir-c2-brick.log
> >
> > [2016-04-03 10:14:25.268833] I [MSGID: 100030]
> [glusterfs

Re: [Gluster-devel] Announcing GlusterFS-3.7.11

2016-04-19 Thread Kaushal M
On Tue, Apr 19, 2016 at 12:20 PM, Kaushal M  wrote:
> Hi All,
>
> I'm pleased to announce the release of GlusterFS version 3.7.11.
>
> GlusterFS-3.7.11 has been a quick release to fix some regressions
> found in GlusterFS-3.7.10. If anyone has been wondering why there
> hasn't been a proper release announcement for 3.7.10 please refer to
> my mail on this subject
> https://www.gluster.org/pipermail/gluster-users/2016-April/026164.html.
>
> Release-notes for GlusterFS-3.7.11 are available at
> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.11.md.
>
> The tarball for 3.7.11 can be downloaded from
> http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/
>
> Packages for Fedora 23, 24, 25 available via Fedora Updates or 
> Updates-Testing.
>
> Packages for Fedora 22 and EPEL {5,6,7} are available on download.gluster.org.
>
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.
>
> Packages for Ubuntu are in Launchpad PPAs at https://launchpad.net/~gluster
>
> Packages for SLES-12, OpenSuSE-13, and Leap42.1 are in SuSE Build Service at
> https://build.opensuse.org/project/subprojects/home:kkeithleatredhat
>
> Packages for other distributions should be available soon in their
> respective distribution channels.

NetBSD pkgsrc has been updated to 3.7.11, and should be available in a
little while on
http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/filesystems/glusterfs/README.html

>
>
> Thank you to all the contributors to this release.
>
> Regards,
> Kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterfs core report.

2016-04-19 Thread yang . bin18
Thanks for your response.

We use glusterfs 3.6.7.

Sure, we use CentOS 7.0.

The related log is shown below:

143 [2016-04-13 06:33:54.236013] W 
[glusterfsd.c:1211:cleanup_and_exit] (--> 0-: received signum (15), 
shutting down
144 [2016-04-13 06:33:54.236081] I [fuse-bridge.c:5607:fini] 0-fuse: 
Unmounting '/var/lib/nova/mnt/a77401a594b06b2b56cc52ee61bb4def'.
145 [2016-04-13 06:55:11.365558] I [MSGID: 100030] 
[glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.6.7 (args: /usr/sbin/glusterfs 
--volfile-server=10.74.125.254 --volfile-id=/xitos-volume 
/var/lib/nova/mnt/a77401a594b06b2b56cc52ee61bb4def)
146 [2016-04-13 06:55:11.402762] I [dht-shared.c:337:dht_init_regex] 
0-xitos-volume-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
147 [2016-04-13 06:55:11.404087] I [client.c:2268:notify] 
0-xitos-volume-client-0: parent translators are ready, attempting conn  
ect on transport
148 Final graph:
149 
+--+
150   1: volume xitos-volume-client-0
151   2: type protocol/client
152   3: option ping-timeout 42
153   4: option remote-host 10.74.125.247
154   5: option remote-subvolume /mnt/dht/volume
155   6: option transport-type socket
156   7: option send-gids true
157   8: end-volume
158   9: 
159  10: volume xitos-volume-dht
160  11: type cluster/distribute
161  12: subvolumes xitos-volume-client-0
162  13: end-volume
163  14: 
164  15: volume xitos-volume-write-behind
165  16: type performance/write-behind
166  17: subvolumes xitos-volume-dht
167  18: end-volume
168  19: 
169  20: volume xitos-volume-read-ahead
170  21: type performance/read-ahead
171  22: subvolumes xitos-volume-write-behind
172  23: end-volume
173  24: 
174  25: volume xitos-volume-io-cache
175  26: type performance/io-cache
176  27: subvolumes xitos-volume-read-ahead
177  28: end-volume
178  29:
179  30: volume xitos-volume-quick-read
180  31: type performance/quick-read
181  32: subvolumes xitos-volume-io-cache
182  33: end-volume
183  34:
184  35: volume xitos-volume-open-behind
185  36: type performance/open-behind
186  37: subvolumes xitos-volume-quick-read
187  38: end-volume
188  39:
189  40: volume xitos-volume-md-cache
190  41: type performance/md-cache
191  42: subvolumes xitos-volume-open-behind
192  43: end-volume
193  44:
194  45: volume xitos-volume
195  46: type debug/io-stats
196  47: option latency-measurement off
197  48: option count-fop-hits off
198  49: subvolumes xitos-volume-md-cache
199  50: end-volume
200  51: 
201  52: volume meta-autoload
202  53: type meta
203  54: subvolumes xitos-volume
204  55: end-volume
205  56: 
206 
+--+
207 [2016-04-13 06:55:11.408891] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 
0-xitos-volume-client-0: changing port to 49152 (from 0)
208 [2016-04-13 06:55:11.413254] I 
[client-handshake.c:1413:select_server_supported_programs] 
0-xitos-volume-client-0: Using Program GlusterFS 3.3, Num 
(1298437), Version (330)
209 [2016-04-13 06:55:11.413766] I 
[client-handshake.c:1200:client_setvolume_cbk] 0-xitos-volume-client-0: 
Connected to xitos-volume-client-0, attached to remote volume 
'/mnt/dht/volume'.
210 [2016-04-13 06:55:11.413795] I 
[client-handshake.c:1210:client_setvolume_cbk] 0-xitos-volume-client-0: 
Server and Client lk-version numbers are not same, reopening the 
fds
211 [2016-04-13 06:55:11.420494] I 
[fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0
212 [2016-04-13 06:55:11.420691] I 
[client-handshake.c:188:client_set_lk_version_cbk] 
0-xitos-volume-client-0: Server lk version = 1
213 [2016-04-13 06:55:11.420921] I [fuse-bridge.c:4015:fuse_init] 
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 
7.22 kernel 7.22
214 pending frames:
215 frame : type(0) op(0)
216 frame : type(0) op(0)
217 frame : type(0) op(0)
218 frame : type(0) op(0)
219 frame : type(0) op(0)
220 frame : type(0) op(0)
221 frame : type(0) op(0)
222 patchset: git://git.gluster.com/glusterfs.git
223 signal received: 6
224 time of crash:
225 2016-04-13 08:15:37
226 configuration details:
227 argp 1
228 backtrace 1
229 dlfcn 1
230 libpthread 1
231 llistxattr 1
232 setfsid 1
233 spinlock 1
234 epoll.h 1
235 xattr.h 1
236 st_atim.tv_nsec 1
237 package-string: glusterfs 3.6.7
238 
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb2)[0x7f308d9e93d2]

Re: [Gluster-devel] Glusterfs core report.

2016-04-19 Thread yang . bin18
Thanks for your response.

Our glibc is:

[root@host-247 glusterfs]# rpm -qa | grep glibc
glibc-common-2.17-55.el7.x86_64
glibc-devel-2.17-55.el7.i686
glibc-2.17-55.el7.x86_64
glibc-static-2.17-55.el7.x86_64
compat-glibc-headers-2.12-4.el7.x86_64
glibc-headers-2.17-55.el7.x86_64
glibc-devel-2.17-55.el7.x86_64
glibc-2.17-55.el7.i686
compat-glibc-2.12-4.el7.x86_64










- Original Message -
> From: "Vijay Bellur" 
> To: "yang bin18" 
> Cc: "Gluster Devel" 
> Sent: Tuesday, April 19, 2016 9:17:42 AM
> Subject: Re: [Gluster-devel] Glusterfs core report.
> 
> 
> 
> On Fri, Apr 15, 2016 at 2:43 AM, yang.bi...@zte.com.cn <
> yang.bi...@zte.com.cn > wrote:
> 
> 
> Glusterfs core when mounting. here is the backtree.
> 
> 
> Program terminated with signal 6, Aborted.
> #0 0x7f308ca04989 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install 
glibc-2.17-55.el7.x86_64
> keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.11.3-49.el7.x86_64
> libcom_err-1.42.9-4.el7.x86_64 libgcc-4.8.2-16.el7.x86_64
> libselinux-2.2.2-6.el7.x86_64 openssl-libs-1.0.1e-34.el7.7.x86_64
> pcre-8.32-12.el7.x86_64 sssd-client-1.11.2-65.el7.x86_64
> xz-libs-5.1.2-8alpha.el7.x86_64 zlib-1.2.7-13.el7.x86_64
> (gdb) bt
> #0 0x7f308ca04989 in raise () from /lib64/libc.so.6
> #1 0x7f308ca06098 in abort () from /lib64/libc.so.6
> #2 0x7f308ca45197 in __libc_message () from /lib64/libc.so.6
> #3 0x7f308ca4c56d in _int_free () from /lib64/libc.so.6
> #4 0x7f308096ebc1 in dht_local_wipe (this=0x7f308f306460,
> local=0x7f3080042880) at dht-helper.c:475
> #5 0x7f308099b9fd in dht_writev_cbk (frame=0x7f308bc11e5c,
> cookie=, this=, op_ret=131072,
> op_errno=0, prebuf=, postbuf=0x7fff2020c870, xdata=0x0) 
at
> dht-inode-write.c:84
> #6 0x7f3080be4512 in client3_3_writev_cbk (req=,
> iov=, count=,
> myframe=0x7f308bc116f8) at client-rpc-fops.c:856
> #7 0x7f308d7bd100 in rpc_clnt_handle_reply
> (clnt=clnt@entry=0x7f308f3292d0, pollin=pollin@entry=0x7f308f39bb10)
> at rpc-clnt.c:763
> #8 0x7f308d7bd374 in rpc_clnt_notify (trans=,
> mydata=0x7f308f329300, event=,
> data=0x7f308f39bb10) at rpc-clnt.c:891
> #9 0x7f308d7b92c3 in rpc_transport_notify
> (this=this@entry=0x7f308f35f3f0,
> event=event@entry=RPC_TRANSPORT_MSG_RECEIVED,
> data=data@entry=0x7f308f39bb10) at rpc-transport.c:516
> #10 0x7f3082ac17a0 in socket_event_poll_in
> (this=this@entry=0x7f308f35f3f0) at socket.c:2234
> #11 0x7f3082ac3f94 in socket_event_handler (fd=, 
idx=1,
> data=data@entry=0x7f308f35f3f0, poll_in=1, poll_out=0,
> poll_err=0) at socket.c:2347
> #12 0x7f308da3e8c2 in event_dispatch_epoll_handler (i=,
> events=0x7f308f2febd0, event_pool=0x7f308f2b76c0)
> at event-epoll.c:384
> #13 event_dispatch_epoll (event_pool=0x7f308f2b76c0) at 
event-epoll.c:445
> #14 0x7f308de92fe2 in main (argc=4, argv=0x7fff2020de78) at
> glusterfsd.c:2060
> 
> 
> Thanks for the report. What version of gluster is being used here? I 
assume
> this is on Centos 7.

What is the version of glibc? Recently we identified a corruption caused
by glibc:
https://bugzilla.redhat.com/show_bug.cgi?id=892601

The issue you are seeing may well be related.

> 
> Would it be possible to share the client log file?
> 
> Regards,
> Vijay
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel