Re: [Gluster-devel] NetBSD regression fixes

2016-01-18 Thread Emmanuel Dreyfus
Hi all

I have the following changes awaiting code review/merge:
http://review.gluster.org/13204
http://review.gluster.org/13205
http://review.gluster.org/13245
http://review.gluster.org/13247

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-18 Thread Oleksandr Natalenko
XFS. Server side works OK, I'm able to mount the volume again. The brick is 30% full.

On Monday, January 18, 2016, 15:07:18 EET baul jianguo wrote:
> What is your brick file system? And what is the status of the glusterfsd
> process and all its threads?
> I met the same issue when a client app such as rsync stayed in D status,
> and the brick process and related threads were also in D status.
> And the brick dev disk utilization was 100%.
> 
> > On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko wrote:
> > Wrong assumption, rsync hung again.
> > 
> > On Saturday, January 16, 2016, 22:53:04 EET Oleksandr Natalenko wrote:
> >> One possible reason:
> >> 
> >> cluster.lookup-optimize: on
> >> cluster.readdir-optimize: on
> >> 
> >> I've disabled both optimizations, and at least as of now rsync still does
> >> its job with no issues. I would like to find out which option causes such
> >> behavior and why. Will test more.
> >> 
> >> On Friday, January 15, 2016, 16:09:51 EET Oleksandr Natalenko wrote:
> >> > Another observation: if rsyncing is resumed after the hang, rsync
> >> > hangs a lot faster because it stats the already copied files. So the
> >> > reason may be not the writing itself, but also the massive stat calls
> >> > on the GlusterFS volume.
> >> > 
> >> > 15.01.2016 09:40, Oleksandr Natalenko написав:
> >> > > While doing rsync over millions of files from an ordinary partition to
> >> > > a GlusterFS volume, rsync hangs just after approximately the first 2
> >> > > million files, and the following info appears in dmesg:
> >> > > 
> >> > > ===
> >> > > [17075038.924481] INFO: task rsync:10310 blocked for more than 120
> >> > > seconds.
> >> > > [17075038.931948] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> >> > > disables this message.
> >> > > [17075038.940748] rsync   D 88207fc13680 0 10310
> >> > > 10309 0x0080
> >> > > [17075038.940752]  8809c578be18 0086 8809c578bfd8
> >> > > 00013680
> >> > > [17075038.940756]  8809c578bfd8 00013680 880310cbe660
> >> > > 881159d16a30
> >> > > [17075038.940759]  881e3aa25800 8809c578be48 881159d16b10
> >> > > 88087d553980
> >> > > [17075038.940762] Call Trace:
> >> > > [17075038.940770]  [] schedule+0x29/0x70
> >> > > [17075038.940797]  []
> >> > > __fuse_request_send+0x13d/0x2c0
> >> > > [fuse]
> >> > > [17075038.940801]  [] ?
> >> > > fuse_get_req_nofail_nopages+0xc0/0x1e0 [fuse]
> >> > > [17075038.940805]  [] ? wake_up_bit+0x30/0x30
> >> > > [17075038.940809]  [] fuse_request_send+0x12/0x20
> >> > > [fuse]
> >> > > [17075038.940813]  [] fuse_flush+0xff/0x150 [fuse]
> >> > > [17075038.940817]  [] filp_close+0x34/0x80
> >> > > [17075038.940821]  [] __close_fd+0x78/0xa0
> >> > > [17075038.940824]  [] SyS_close+0x23/0x50
> >> > > [17075038.940828]  []
> >> > > system_call_fastpath+0x16/0x1b
> >> > > ===
> >> > > 
> >> > > rsync blocks in D state, and to kill it, I have to do umount --lazy on
> >> > > the GlusterFS mountpoint and then kill the corresponding client
> >> > > glusterfs process. Then rsync exits.
> >> > > 
> >> > > Here is GlusterFS volume info:
> >> > > 
> >> > > ===
> >> > > Volume Name: asterisk_records
> >> > > Type: Distributed-Replicate
> >> > > Volume ID: dc1fe561-fa3a-4f2e-8330-ec7e52c75ba4
> >> > > Status: Started
> >> > > Number of Bricks: 3 x 2 = 6
> >> > > Transport-type: tcp
> >> > > Bricks:
> >> > > Brick1: server1:/bricks/10_megaraid_0_3_9_x_0_4_3_hdd_r1_nolvm_hdd_storage_01/asterisk/records
> >> > > Brick2: server2:/bricks/10_megaraid_8_5_14_x_8_6_16_hdd_r1_nolvm_hdd_storage_01/asterisk/records
> >> > > Brick3: server1:/bricks/11_megaraid_0_5_4_x_0_6_5_hdd_r1_nolvm_hdd_storage_02/asterisk/records
> >> > > Brick4: server2:/bricks/11_megaraid_8_7_15_x_8_8_20_hdd_r1_nolvm_hdd_storage_02/asterisk/records
> >> > > Brick5: server1:/bricks/12_megaraid_0_7_6_x_0_13_14_hdd_r1_nolvm_hdd_storage_03/asterisk/records
> >> > > Brick6: server2:/bricks/12_megaraid_8_9_19_x_8_13_24_hdd_r1_nolvm_hdd_storage_03/asterisk/records
> >> > > Options Reconfigured:
> >> > > cluster.lookup-optimize: on
> >> > > cluster.readdir-optimize: on
> >> > > client.event-threads: 2
> >> > > network.inode-lru-limit: 4096
> >> > > server.event-threads: 4
> >> > > performance.client-io-threads: on
> >> > > storage.linux-aio: on
> >> > > performance.write-behind-window-size: 4194304
> >> > > performance.stat-prefetch: on
> >> > > performance.quick-read: on
> >> > > performance.read-ahead: on
> >> > > performance.flush-behind: on
> >> > > performance.write-behind: on
> >> > > performance.io-thread-count: 2
> >> > > performance.cache-max-file-size: 1048576
> >> > > performance.cache-size: 33554432
> >> > > features.cache-invalidation: on
> >> > > performance.readdir-ahead: on
> >> > > ===
> >> > > 
> >> > > The issue reproduces each time I rsync such an 

[Gluster-devel] Python 3 is coming!

2016-01-18 Thread Kaleb S. KEITHLEY

Python 3.5 is approved for Fedora 24 [1], which is scheduled to ship in
May [2].

We have several places in the source where Python 2 is an explicit
requirement.

Do we want to rethink the hard requirement for Python 2?  I suspect we
should.

[1] https://fedoraproject.org/wiki/Releases/24/ChangeSet

[2] https://fedoraproject.org/wiki/Releases/24/Schedule

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Core from gNFS process

2016-01-18 Thread Raghavendra Talur
On Sat, Jan 16, 2016 at 2:25 AM, Vijay Bellur  wrote:

> On 01/15/2016 08:38 AM, Soumya Koduri wrote:
>
>>
>>
>>
>> On 01/15/2016 06:52 PM, Soumya Koduri wrote:
>>
>>>
>>>
>>> On 01/14/2016 08:41 PM, Vijay Bellur wrote:
>>>
 On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote:

>
>
> On 14/01/16 14:28, Jiffin Tony Thottan wrote:
>
>> Hi,
>>
>> The core generated when encryption xlator is enabled
>>
>> [2016-01-14 08:13:15.740835] E
>> [crypt.c:4298:master_set_master_vol_key] 0-test1-crypt: FATAL: missing
>> master key
>> [2016-01-14 08:13:15.740859] E [MSGID: 101019]
>> [xlator.c:429:xlator_init] 0-test1-crypt: Initialization of volume
>> 'test1-crypt' failed, review your volfile again
>> [2016-01-14 08:13:15.740890] E [MSGID: 101066]
>> [graph.c:324:glusterfs_graph_init] 0-test1-crypt: initializing
>> translator failed
>> [2016-01-14 08:13:15.740904] E [MSGID: 101176]
>> [graph.c:670:glusterfs_graph_activate] 0-graph: init failed
>> [2016-01-14 08:13:15.741676] W [glusterfsd.c:1231:cleanup_and_exit]
>> (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x307) [0x40d287]
>> -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x117) [0x4086c7]
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x4d) [0x407e1d] ) 0-:
>> received signum (0), shutting down
>>
>>
>>
> Forgot to mention in the last mail: the crypt xlator needs the master key
> before the translator is enabled, which is what causes the issue
> --
>

 Irrespective of the problem, the nfs process should not crash. Can we
 check why there is a memory corruption during cleanup_and_exit()?

>>> That's right. This issue was reported quite a few times earlier in
>>> gluster-devel, and it is not specific to the gluster-nfs process. As updated
>>> in [1], we have raised bug 1293594 [2] against the lib-gcc team to further
>>> investigate this.
>>>
>>
> The segmentation fault in gcc is while attempting to print a backtrace
> upon glusterfs receiving a SIGSEGV. It would be good to isolate the reason
> for the initial SIGSEGV whose signal handler causes the further crash.


I wasn't able to check this today. Will check tomorrow.


>
>
> -Vijay
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

2016-01-18 Thread Rick Macklem
Raghavendra Gowdappa wrote:
> 
> 
> - Original Message -
> > From: "Rick Macklem" 
> > To: "Jeff Darcy" 
> > Cc: "Raghavendra G" , "freebsd-fs"
> > , "Hubbard Jordan"
> > , "Xavier Hernandez" , "Gluster
> > Devel" 
> > Sent: Saturday, January 9, 2016 7:29:59 AM
> > Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of
> > CPU usage
> > 
> > Jeff Darcy wrote:
> > > > > I don't know anything about gluster's poll implementation so I may
> > > > > be totally wrong, but would it be possible to use an eventfd (or a
> > > > > pipe if eventfd is not supported) to signal the need to add more
> > > > > file descriptors to the poll call ?
> > > > >
> > > > >
> > > > > The poll call should listen on this new fd. When we need to change
> > > > > the fd list, we should simply write to the eventfd or pipe from
> > > > > another thread.  This will cause the poll call to return and we will
> > > > > be able to change the fd list without having a short timeout nor
> > > > > having to decide on any trade-off.
> > > > 
> > > >
> > > > Thats a nice idea. Based on my understanding of why timeouts are being
> > > > used, this approach can work.
> > > 
> > > The own-thread code which preceded the current poll implementation did
> > > something similar, using a pipe fd to be woken up for new *outgoing*
> > > messages.  That code still exists, and might provide some insight into
> > > how to do this for the current poll code.
> > I took a look at event-poll.c and found something interesting...
> > - A pipe called "breaker" is already set up by event_pool_new_poll() and
> >   closed by event_pool_destroy_poll(), however it never gets used for
> >   anything.
> 
> I did a check on history, but couldn't find any information on why it was
> removed. Can you send this patch to http://review.gluster.org ? We can
> review and merge the patch over there. If you are not aware, development
> work flow can be found at:
> 
> http://www.gluster.org/community/documentation/index.php/Developers
> 
Actually, the patch turned out to be a flop. Sometimes a fuse mount would end
up with an empty file system with the patch. (I don't know why it was broken,
but maybe the original author ran into issues as well?)

Anyhow, I am now using the 3.7.6 event-poll.c code, except that I have increased
the timeout from 1msec to 10msec. (Going from 1->5->10 didn't seem to cause a
problem, but I got slower test runs when I increased to 20msec, so I've settled
on 10msec. This does reduce the CPU usage when the GlusterFS file systems aren't
active.)
I will submit this one-line change to your workflow if it continues to test ok.

Thanks for everyone's input, rick

> > 
> > So, I added a few lines of code that writes a byte to it whenever the list
> > of
> > file descriptors is changed and read when poll() returns, if its revents is
> > set.
> > I also changed the timeout to -1 (infinity) and it seems to work for a
> > trivial
> > test.
> > --> Btw, I also noticed the "changed" variable gets set to 1 on a change,
> > but
> > never reset to 0. I didn't change this, since it looks "racey". (ie. I
> > think you could easily get a race between a thread that clears it and
> > one
> > that adds a new fd.)
> > 
> > A slightly safer version of the patch would set a long (100msec ??) timeout
> > instead
> > of -1.
> > 
> > Anyhow, I've attached the patch in case anyone would like to try it and
> > will
> > create a bug report for this after I've had more time to test it.
> > (I only use a couple of laptops, so my testing will be minimal.)
> > 
> > Thanks for all the help, rick
> > 
> > > ___
> > > freebsd...@freebsd.org mailing list
> > > https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > To unsubscribe, send any mail to "freebsd-fs-unsubscr...@freebsd.org"
> > > 
> > 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] rm -r problem on FreeBSD port

2016-01-18 Thread Rick Macklem
Hi,

I have a simple gluster volume made up of 2 bricks using
distribute (running on the FreeBSD port of 3.7.6).
When I do a "rm -rf " for a fairly large tree, I
sometimes get "Directory not empty" errors.
When I look in the directory (even after an unmount, shutdown,
restart, remount) the "ls -l" output for the fuse mounted
volume and the 2 underlying bricks looks like...

# ls -l of fuse mounted gv0: directory

total 1
--  1 root  wheel  345891 Jan 18 14:49 Makefile
--  1 root  wheel  345891 Jan 18 14:49 Makefile
--  1 root  wheel    1195 Jan 18 14:49 ata_if.c
--  1 root  wheel    1195 Jan 18 14:49 ata_if.c
--  1 root  wheel     576 Jan 18 14:49 fb_if.c
--  1 root  wheel     576 Jan 18 14:49 fb_if.c
--  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
--  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
--  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
--  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
--  1 root  wheel     602 Jan 18 14:49 power_if.c
--  1 root  wheel     602 Jan 18 14:49 power_if.c
--  1 root  wheel    1819 Jan 18 14:49 uart_if.c
--  1 root  wheel    1819 Jan 18 14:49 uart_if.c

# ls -l of the directory in one of the 2 bricks

total 36
--  2 root  wheel    0 Jan 18 14:49 Makefile
--  2 root  wheel    0 Jan 18 14:49 ata_if.c
--  2 root  wheel    0 Jan 18 14:49 fb_if.c
--  2 root  wheel    0 Jan 18 14:49 hdac_if.c
--  2 root  wheel    0 Jan 18 14:49 mmcbus_if.c
drwxr-xr-x  3 root  wheel  512 Jan 14 17:34 modules
--  2 root  wheel    0 Jan 18 14:49 power_if.c
--  2 root  wheel    0 Jan 18 14:49 uart_if.c

# and ls -l of the underlying directory in the other brick

total 400
-rw-r--r--  2 root  wheel  345891 Jan 14 11:50 Makefile
-rw-r--r--  2 root  wheel    1195 Jan 14 13:19 ata_if.c
-rw-r--r--  2 root  wheel     576 Jan 14 12:13 fb_if.c
-rw-r--r--  2 root  wheel    1787 Jan 14 12:13 hdac_if.c
-rw-r--r--  2 root  wheel     753 Jan 14 12:13 mmcbus_if.c
drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
-rw-r--r--  2 root  wheel     602 Jan 14 12:13 power_if.c
-rw-r--r--  2 root  wheel    1819 Jan 14 12:13 uart_if.c

Anyone have an idea of what is causing this?
(I am thinking something like the FreeBSD fuse interface
 is holding opens on the files when I am doing "rm -rf ",
 even though they shouldn't be open and the GlusterFS is doing
 some trickery similar to NFS's silly rename to avoid the
 files being removed before being closed?
 But I have no idea if this theory holds any water.;-)

Thanks in advance for any hints, rick
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] rm -r problem on FreeBSD port

2016-01-18 Thread Vijay Bellur

On 01/18/2016 06:18 PM, Rick Macklem wrote:

Hi,

I have a simple gluster volume made up of 2 bricks using
distribute (running on the FreeBSD port of 3.7.6).
When I do a "rm -rf " for a fairly large tree, I
sometimes get "Directory not empty" errors.
When I look in the directory (even after an unmount, shutdown,
restart, remount) the "ls -l" output for the fuse mounted
volume and the 2 underlying bricks looks like...

# ls -l of fuse mounted gv0: directory

total 1
--  1 root  wheel  345891 Jan 18 14:49 Makefile
--  1 root  wheel  345891 Jan 18 14:49 Makefile
--  1 root  wheel    1195 Jan 18 14:49 ata_if.c
--  1 root  wheel    1195 Jan 18 14:49 ata_if.c
--  1 root  wheel     576 Jan 18 14:49 fb_if.c
--  1 root  wheel     576 Jan 18 14:49 fb_if.c
--  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
--  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
--  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
--  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
--  1 root  wheel     602 Jan 18 14:49 power_if.c
--  1 root  wheel     602 Jan 18 14:49 power_if.c
--  1 root  wheel    1819 Jan 18 14:49 uart_if.c
--  1 root  wheel    1819 Jan 18 14:49 uart_if.c

# ls -l of the directory in one of the 2 bricks

total 36
--  2 root  wheel    0 Jan 18 14:49 Makefile
--  2 root  wheel    0 Jan 18 14:49 ata_if.c
--  2 root  wheel    0 Jan 18 14:49 fb_if.c
--  2 root  wheel    0 Jan 18 14:49 hdac_if.c
--  2 root  wheel    0 Jan 18 14:49 mmcbus_if.c
drwxr-xr-x  3 root  wheel  512 Jan 14 17:34 modules
--  2 root  wheel    0 Jan 18 14:49 power_if.c
--  2 root  wheel    0 Jan 18 14:49 uart_if.c

# and ls -l of the underlying directory in the other brick

total 400
-rw-r--r--  2 root  wheel  345891 Jan 14 11:50 Makefile
-rw-r--r--  2 root  wheel    1195 Jan 14 13:19 ata_if.c
-rw-r--r--  2 root  wheel     576 Jan 14 12:13 fb_if.c
-rw-r--r--  2 root  wheel    1787 Jan 14 12:13 hdac_if.c
-rw-r--r--  2 root  wheel     753 Jan 14 12:13 mmcbus_if.c
drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
-rw-r--r--  2 root  wheel     602 Jan 14 12:13 power_if.c
-rw-r--r--  2 root  wheel    1819 Jan 14 12:13 uart_if.c

Anyone have an idea of what is causing this?
(I am thinking something like the FreeBSD fuse interface
  is holding opens on the files when I am doing "rm -rf ",
  even though they shouldn't be open and the GlusterFS is doing
  some trickery similar to NFS's silly rename to avoid the
  files being removed before being closed?
  But I have no idea if this theory holds any water.;-)

Thanks in advance for any hints, rick



Debugging by isolation could be a good technique to figure out the layer
that is causing the problem here. You could start with a minimal volume
file [1] and mount by pointing at that volume file instead of fetching it
from glusterd (glusterfs -s  -f  /mnt/point) to determine if the problem is
with the fuse implementation in FreeBSD. If that doesn't turn out to be a
problem, I would create a volume with a single brick and see if the problem
happens there. If not, we could further test with a volume having multiple
bricks and so on.


HTH,
Vijay

[1] http://paste.fedoraproject.org/312178/45316267/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Monthly Newsletter, January 2016 Edition

2016-01-18 Thread Amye Scavarda
We're kicking off an updated Monthly Newsletter, coming out mid-month.
We'll highlight special posts, news and noteworthy threads from the
mailing lists, events, and other things that are important for the
Gluster community.

== Community Survey Followup ==
http://blog.gluster.org/2016/01/gluster-community-survey-report-2015/

== News and Noteworthy Threads from the Mailing Lists ==
++ gluster-users ++

Lindsay Mathieson and Krutika collaborated on testing sharding
improvements and performance

as part of Krutika's post on the improvements to sharding translator
specific to the VM image store use case


Observations that libgfapi-based access to volumes is slower than
FUSE received a bit of discussion, including debugging. Further in this
thread, the specific workbench exhibiting the behavior was described


Amye notes that, in the interest of making the documentation usable, a
review of the MediaWiki-based content was completed before plans to
switch to the GitHub-based wiki pages. More at


Defect resolution discussion around diagnosing volume rebalance
failure - 
.
Sakshi and Susant work through the sequence of questions and responses
in order to identify the root cause and provide resolution.

++ gluster-devel ++
Vijay proposes a new plan for 3.8, looking to bring some 4.0 features
into 3.8, testing in distaf, adding forward compatibility and naming a
release manager for 3.8. This pushes out the timeline for 3.8 to end
of May/early June. More at


As part of his work on the eventing framework for Gluster, Samikshan
posted a summary of the ideas and work completed. He has been working
on ensuring notifications from each node are available for
introspection as well as aggregating all the signals/notifications
across a deployed cluster. More at


Niels put together a high level task breakdown to support SELinux over
FUSE mounts - 
.
At present one cannot set a SELinux context over a FUSE mount as FUSE
does not include support.

Krutika has worked on the sharding feature for GlusterFS with specific
focus on enabling the VM image store use case. Now she has provided a
small note on the features and more technical description of the
internals. 


Pranith has been using different forums to discuss the compound FOP
(re)design ideas with the intent to reduce network round-trips. He
puts together a first pass summary of his ideas at

The post received extensive discussion with participation from both
NFSv4 and SMB developers. The reviews were summed up in another draft
and posted. This, in turn, had a number of queries from Jeff Darcy


The design for the User and Group Quota support in GlusterFS has been
posted for review


Vijay expresses concern about the backlog in reviews

and seeks ideas on how to reduce the backlog and make it more
manageable.


Luis Pabon and Sachidananda Urs collaborate to understand integration
work between gDeploy and Heketi

Atin followed up on the RFC call on the GlusterD2.0 ReST API design


The test failure and hangs on NetBSD have been a topic of conversation
through the month of December. This month Pranith and Krutika have
volunteered time to help make the tests better for NetBSD. More at


Atin requested that the feature leads update the Gluster 4.0 Roadmap
Page 
with descriptions of the features and specifications.

Raghavendra Talur's work on some test profiling enabled a set of
observations. Vijay responded with suggestions on using ramdisk for
/var/lib/glusterd with the objective of improving latencies for the
volume operations.


++ gluster-infra ++
Raghavendra Talur initiates a discussion around enabling developers to
build 

Re: [Gluster-devel] rm -r problem on FreeBSD port

2016-01-18 Thread Raghavendra Gowdappa


- Original Message -
> From: "Rick Macklem" 
> To: "Gluster Devel" 
> Sent: Tuesday, January 19, 2016 4:48:21 AM
> Subject: [Gluster-devel] rm -r problem on FreeBSD port
> 
> Hi,
> 
> I have a simple gluster volume made up of 2 bricks using
> distribute (running on the FreeBSD port of 3.7.6).
> When I do a "rm -rf " for a fairly large tree, I
> sometimes get "Directory not empty" errors.
> When I look in the directory (even after an unmount, shutdown,
> restart, remount) the "ls -l" output for the fuse mounted
> volume and the 2 underlying bricks looks like...
> 
> # ls -l of fuse mounted gv0: directory
> 
> total 1
> --  1 root  wheel  345891 Jan 18 14:49 Makefile
> --  1 root  wheel  345891 Jan 18 14:49 Makefile
> --  1 root  wheel    1195 Jan 18 14:49 ata_if.c
> --  1 root  wheel    1195 Jan 18 14:49 ata_if.c
> --  1 root  wheel     576 Jan 18 14:49 fb_if.c
> --  1 root  wheel     576 Jan 18 14:49 fb_if.c
> --  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
> --  1 root  wheel    1787 Jan 18 14:49 hdac_if.c
> --  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
> --  1 root  wheel     753 Jan 18 14:49 mmcbus_if.c
> drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
> --  1 root  wheel     602 Jan 18 14:49 power_if.c
> --  1 root  wheel     602 Jan 18 14:49 power_if.c
> --  1 root  wheel    1819 Jan 18 14:49 uart_if.c
> --  1 root  wheel    1819 Jan 18 14:49 uart_if.c
> 
> # ls -l of the directory in one of the 2 bricks
> 
> total 36
> --  2 root  wheel    0 Jan 18 14:49 Makefile
> --  2 root  wheel    0 Jan 18 14:49 ata_if.c
> --  2 root  wheel    0 Jan 18 14:49 fb_if.c
> --  2 root  wheel    0 Jan 18 14:49 hdac_if.c
> --  2 root  wheel    0 Jan 18 14:49 mmcbus_if.c
> drwxr-xr-x  3 root  wheel  512 Jan 14 17:34 modules
> --  2 root  wheel    0 Jan 18 14:49 power_if.c
> --  2 root  wheel    0 Jan 18 14:49 uart_if.c
> 
> # and ls -l of the underlying directory in the other brick
> 
> total 400
> -rw-r--r--  2 root  wheel  345891 Jan 14 11:50 Makefile
> -rw-r--r--  2 root  wheel    1195 Jan 14 13:19 ata_if.c
> -rw-r--r--  2 root  wheel     576 Jan 14 12:13 fb_if.c
> -rw-r--r--  2 root  wheel    1787 Jan 14 12:13 hdac_if.c
> -rw-r--r--  2 root  wheel     753 Jan 14 12:13 mmcbus_if.c
> drwxr-xr-x  3 root  wheel     512 Jan 14 12:34 modules
> -rw-r--r--  2 root  wheel     602 Jan 14 12:13 power_if.c
> -rw-r--r--  2 root  wheel    1819 Jan 14 12:13 uart_if.c
> 
> Anyone have an idea of what is causing this?

There is a similar known issue explained in bz:
https://bugzilla.redhat.com/show_bug.cgi?id=1245065

However, as per our understanding, the contents of a directory whose rmdir
failed with ENOTEMPTY should be _only_ empty directories. In your case there
are non-directories, so it might be a different issue.

Adding Sakshi, who is working on this class of bugs and might add something 
worthwhile.

> (I am thinking something like the FreeBSD fuse interface
>  is holding opens on the files when I am doing "rm -rf ",
>  even though they shouldn't be open and the GlusterFS is doing
>  some trickery similar to NFS's silly rename to avoid the
>  files being removed before being closed?
>  But I have no idea if this theory holds any water.;-)
> 
> Thanks in advance for any hints, rick
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Ravishankar N

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run: 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull
 Not sure if it is spurious as it passed in the subsequent run. Please 
have a look.

Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Venky Shankar



Ravishankar N wrote:

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull

Not sure if it is spurious as it passed in the subsequent run. Please
have a look.


Might be too soon to check the presence of the changelog file after 
enabling changelog.



Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster Monthly Newsletter, January 2016 Edition

2016-01-18 Thread Pranith Kumar Karampuri



On 01/19/2016 09:16 AM, Amye Scavarda wrote:

We're kicking off an updated Monthly Newsletter, coming out mid-month.
We'll highlight special posts, news and noteworthy threads from the
mailing lists, events, and other things that are important for the
Gluster community.

== Community Survey Followup ==
http://blog.gluster.org/2016/01/gluster-community-survey-report-2015/

== News and Noteworthy Threads from the Mailing Lists ==
++ gluster-users ++

Lindsay Mathieson and Krutika collaborated on testing sharding
improvements and performance

as part of Krutika's post on the improvements to sharding translator
specific to the VM image store use case


Observations that libgfapi-based access to volumes is slower than
FUSE received a bit of discussion, including debugging. Further in this
thread, the specific workbench exhibiting the behavior was described


Amye notes that, in the interest of making the documentation usable, a
review of the MediaWiki-based content was completed before plans to
switch to the GitHub-based wiki pages. More at


Defect resolution discussion around diagnosing volume rebalance
failure - 
.
Sakshi and Susant work through the sequence of questions and responses
in order to identify the root cause and provide resolution.

++ gluster-devel ++
Vijay proposes a new plan for 3.8, looking to bring some 4.0 features
into 3.8, testing in distaf, adding forward compatibility and naming a
release manager for 3.8. This pushes out the timeline for 3.8 to end
of May/early June. More at


As part of his work on the eventing framework for Gluster, Samikshan
posted a summary of the ideas and work completed. He has been working
on ensuring notifications from each node are available for
introspection as well as aggregating all the signals/notifications
across a deployed cluster. More at


Niels put together a high level task breakdown to support SELinux over
FUSE mounts - 
.
At present one cannot set a SELinux context over a FUSE mount as FUSE
does not include support.

Krutika has worked on the sharding feature for GlusterFS with specific
focus on enabling the VM image store use case. Now she has provided a
small note on the features and more technical description of the
internals. 


Pranith has been using different forums to discuss the compound FOP

Along with Anuradha and Soumya :-)

(re)design ideas with the intent to reduce network round-trips. He
puts together a first pass summary of his ideas at

The post received extensive discussion with participation from both
NFSv4 and SMB developers. The reviews were summed up in another draft
and posted. This, in turn, had a number of queries from Jeff Darcy


The design for the User and Group Quota support in GlusterFS has been
posted for review


Vijay expresses concern about the backlog in reviews

and seeks ideas on how to reduce the backlog and make it more
manageable.


Luis Pabon and Sachidananda Urs collaborate to understand integration
work between gDeploy and Heketi

Atin followed up on the RFC call on the GlusterD2.0 ReST API design


The test failure and hangs on NetBSD have been a topic of conversation
through the month of December. This month Pranith and Krutika have
volunteered time to help make the tests better for NetBSD. More at


Atin requested that the feature leads update the Gluster 4.0 Roadmap
Page 
with descriptions of the features and specifications.

Raghavendra Talur's work on some test profiling enabled a set of
observations. Vijay responded with suggestions on using ramdisk for
/var/lib/glusterd with the objective of improving latencies for the
volume operations.


++ gluster-inf

[Gluster-devel] ENOSPC on slave21.cloud.gluster.org ?

2016-01-18 Thread Ravishankar N

https://build.gluster.org/job/glusterfs-devrpms-el7/6734/console
--

Error Summary
-
Disk Requirements:
  At least 104MB more space needed on the / filesystem.


+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Finished: FAILURE
--


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] How to cope with spurious regression failures

2016-01-18 Thread Emmanuel Dreyfus
Hi

Spurious regression failures make developers frustrated. One submits a
change and gets completely unrelated failures. The only way out is to
retrigger regression until it passes, a boring and time-wasting task.
Sometimes after 4 or 5 failed runs, the submitter realizes there is a
real issue and looks at it, which is a waste of time and resources.

The fact that we run regression on multiple platforms makes the
situation worse. If you have a 10% chance of hitting a spurious failure on
Linux and a 20% chance of hitting a spurious failure on NetBSD (random
numbers chosen), that means you get roughly one failure every four
submissions (a random prediction, as I used random input numbers, but you
get the idea).
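
For what it's worth, a quick sanity check of that rate (a minimal Python
sketch; the percentages are the made-up example rates above, not
measurements):

# chance that at least one platform hits a spurious failure
p_linux = 0.10
p_netbsd = 0.20
p_any = 1 - (1 - p_linux) * (1 - p_netbsd)
print(p_any)  # ~0.28, i.e. roughly one submission in four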

Two solutions are proposed:

1) do not run unreliable tests, as proposed by Raghavendra Talur:
http://review.gluster.org/13173

I have nothing against the idea, but I voted down the change because it
fails to address the need for different test blacklists on different
platforms: we do not have the same unreliable tests on Linux and NetBSD.

2) add a regression option to retry a failed test once, and to validate
the regression if the second attempt passes, as I proposed:
http://review.gluster.org/13245

The idea is basically to automatically do what every submitter has been
doing: retry without a thought when regression fails. The benefit of
this approach is also that it gives us a better view of which tests failed
because of the change and which failed because they were unreliable; see
the sketch below.
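
In Python-like pseudocode, the proposed semantics are roughly this (an
illustrative sketch; the real change is to the run-tests.sh shell script,
and run_test here is a hypothetical helper that runs one test and returns
whether it passed):

def run_test_with_retry(run_test, test):
    # first attempt: if it passes, nothing changes
    if run_test(test):
        return True, False   # passed, not flaky
    # retry once: the run is validated if the second attempt passes, but
    # the first failure is still visible, so unreliable tests stand out
    if run_test(test):
        return True, True    # passed on retry, flagged as flaky
    return False, False      # failed twice: likely caused by the change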

The retry feature is optional and triggered by using the -r flag to
run-tests.sh. I intend to use it on NetBSD regression to reduce the
number of failures that annoy people. It could be used on Linux
regression too, though I do not plan to touch that on my own.

Please, everyone, tell us which approach you prefer.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] rm -r problem on FreeBSD port

2016-01-18 Thread Sakshi Bansal
Hi Rick,

Could you get us some more information about your setup:

1) Are you doing 'rm -r' in parallel from multiple mount points? Or are you
doing 'ls' while rm is in progress?
2) Is the error just a one-time occurrence, or do you still see it when you
try to remove the directory?
3) The directory listing that you have shown (for which you got the
"directory not empty" error) seems to have files; are you able to remove the
files explicitly?
4) If you still have the same setup, could you list the contents of the
directory 'modules' from the mount and the bricks?

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python 3 is coming!

2016-01-18 Thread Aravinda
Python 2 and 3 can coexist. If Python 3 is used by default, then we may
have to change the script shebang from `#!/usr/bin/python` to
`#!/usr/bin/env python2`.
We also need to modify the code where the python command is used to call
Python scripts.

For example, `PYTHON GSYNCD` to `PYTHON2 GSYNCD`.
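
Concretely, a pinned script would start like this (a minimal illustrative
sketch, not an actual gluster script):

#!/usr/bin/env python2
# pin this script to Python 2 even when /usr/bin/python
# starts pointing at Python 3, as planned for Fedora 24
from __future__ import print_function

print("running under a pinned Python 2 interpreter")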


regards
Aravinda

On 01/18/2016 06:44 PM, Kaleb S. KEITHLEY wrote:

Python 3.5 is approved for Fedora 24 [1], which is scheduled to ship in
May [2].

We have several places in the source where Python 2 is an explicit
requirement.

Do we want to rethink the hard requirement for Python 2?  I suspect we
should.

[1] https://fedoraproject.org/wiki/Releases/24/ChangeSet

[2] https://fedoraproject.org/wiki/Releases/24/Schedule



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

2016-01-18 Thread Raghavendra Gowdappa


- Original Message -
> From: "Rick Macklem" 
> To: "Raghavendra Gowdappa" 
> Cc: "Jeff Darcy" , "Raghavendra G" 
> , "freebsd-fs"
> , "Hubbard Jordan" , "Xavier 
> Hernandez" , "Gluster
> Devel" 
> Sent: Tuesday, January 19, 2016 4:07:09 AM
> Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU 
> usage
> 
> Raghavendra Gowdappa wrote:
> > 
> > 
> > - Original Message -
> > > From: "Rick Macklem" 
> > > To: "Jeff Darcy" 
> > > Cc: "Raghavendra G" , "freebsd-fs"
> > > , "Hubbard Jordan"
> > > , "Xavier Hernandez" , "Gluster
> > > Devel" 
> > > Sent: Saturday, January 9, 2016 7:29:59 AM
> > > Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of
> > > CPU usage
> > > 
> > > Jeff Darcy wrote:
> > > > > > I don't know anything about gluster's poll implementation so I may
> > > > > > be totally wrong, but would it be possible to use an eventfd (or a
> > > > > > pipe if eventfd is not supported) to signal the need to add more
> > > > > > file descriptors to the poll call ?
> > > > > >
> > > > > >
> > > > > > The poll call should listen on this new fd. When we need to change
> > > > > > the fd list, we should simply write to the eventfd or pipe from
> > > > > > another thread.  This will cause the poll call to return and we
> > > > > > will
> > > > > > be able to change the fd list without having a short timeout nor
> > > > > > having to decide on any trade-off.
> > > > > 
> > > > >
> > > > > Thats a nice idea. Based on my understanding of why timeouts are
> > > > > being
> > > > > used, this approach can work.
> > > > 
> > > > The own-thread code which preceded the current poll implementation did
> > > > something similar, using a pipe fd to be woken up for new *outgoing*
> > > > messages.  That code still exists, and might provide some insight into
> > > > how to do this for the current poll code.
> > > I took a look at event-poll.c and found something interesting...
> > > - A pipe called "breaker" is already set up by event_pool_new_poll() and
> > >   closed by event_pool_destroy_poll(), however it never gets used for
> > >   anything.
> > 
> > I did a check on history, but couldn't find any information on why it was
> > removed. Can you send this patch to http://review.gluster.org ? We can
> > review and merge the patch over there. If you are not aware, development
> > work flow can be found at:
> > 
> > http://www.gluster.org/community/documentation/index.php/Developers
> > 
> Actually, the patch turned out to be a flop. Sometimes a fuse mount would end
> up with an empty file system with the patch. (I don't know why it was broken,
> but maybe the original author ran into issues as well?)

+static void
+event_pool_changed (struct event_pool *event_pool)
+{
+
+/* Write a byte into the breaker pipe to wake up poll(). */
+if (event_pool->breaker[1] >= 0)
+write(event_pool->breaker[1], "X", 1);
+}

breaker is set to non-blocking on both read and write ends. So the write
might sometimes be failing with EAGAIN/EBUSY, thereby preventing the socket
from being registered. Might that be the reason?

if (event_pool->breaker[1] >= 0) {
        ssize_t ret = 0;

        /* retry until the single wakeup byte is actually written */
        do {
                ret = write (event_pool->breaker[1], "X", 1);
        } while (ret != 1);
}

Similar logic might also be required while flushing out the junk from the read end.
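
For illustration, the overall wakeup-pipe pattern being discussed (a
non-blocking pipe, a poll() with no short timeout, a robust write, and a
drained read end) looks roughly like this. This is a minimal Python sketch
of the idea; the actual event-poll.c code is C:

import os
import select

r, w = os.pipe()
os.set_blocking(r, False)  # both ends non-blocking, like the breaker pipe
os.set_blocking(w, False)

poller = select.poll()
poller.register(r, select.POLLIN)

def fd_list_changed():
    # wake up poll(); if the pipe is already full, a wakeup is pending anyway
    try:
        os.write(w, b"X")
    except BlockingIOError:
        pass

def wait_for_events():
    events = poller.poll()  # infinite timeout: the pipe wakes us up
    for fd, _ in events:
        if fd == r:
            # flush out the junk from the read end
            try:
                while os.read(r, 64):
                    pass
            except BlockingIOError:
                pass
            # ... re-scan the changed fd list here ...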

> 
> Anyhow, I am now using the 3.7.6 event-poll.c code except that I have
> increased
> the timeout from 1msec->10msec. (Going from 1->5->10 didn't seem to cause a
> problem, but I got slower test runs when I increased to 20msec, so I've
> settled on
> 10msec. This does reduce the CPU usage when the GlusterFS file systems aren't
> active.)
> I will submit this one line change to your workflow if it continues to test
> ok.
> 
> Thanks for everyone's input, rick
> 
> > > 
> > > So, I added a few lines of code that writes a byte to it whenever the
> > > list
> > > of
> > > file descriptors is changed and read when poll() returns, if its revents
> > > is
> > > set.
> > > I also changed the timeout to -1 (infinity) and it seems to work for a
> > > trivial
> > > test.
> > > --> Btw, I also noticed the "changed" variable gets set to 1 on a change,
> > > but
> > > never reset to 0. I didn't change this, since it looks "racey". (ie.
> > > I
> > > think you could easily get a race between a thread that clears it and
> > > one
> > > that adds a new fd.)
> > > 
> > > A slightly safer version of the patch would set a long (100msec ??)
> > > timeout
> > > instead
> > > of -1.
> > > 
> > > Anyhow, I've attached the patch in case anyone would like to try it and
> > > will
> > > create a bug report for this after I've had more time to test it.
> > > (I only use a couple of laptops, so my testing will be minimal.)
> > > 
> > > Thanks for all the help, rick
> > > 
> > > > ___
> > > > freebsd...@freebsd.org ma

Re: [Gluster-devel] Python 3 is coming!

2016-01-18 Thread Prashanth Pai
Hi,

Changing our code to make use of Python's "six" module will help us maintain
code that runs on both Python 2 and Python 3.
https://pypi.python.org/pypi/six
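
For example, a minimal sketch of single-source code using six (assuming the
six module is installed; the function below is illustrative, not gluster
code):

from __future__ import print_function

import six

def greet(name):
    # six.string_types is (str, unicode) on Python 2 and (str,) on Python 3
    assert isinstance(name, six.string_types)
    return six.text_type("Hello, ") + name

# six.iteritems() picks dict.iteritems() or dict.items() as appropriate
for key, value in six.iteritems({"py2": "old", "py3": "new"}):
    print(greet(key), value)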

Regards,
 -Prashanth Pai

- Original Message -
> From: "Aravinda" 
> To: "Kaleb S. KEITHLEY" , "Gluster Devel" 
> 
> Sent: Tuesday, January 19, 2016 11:19:50 AM
> Subject: Re: [Gluster-devel] Python 3 is coming!
> 
> Python 2 and 3 can coexist. If Python 3 is used by default, then we may
> have to change the script shebang from `#!/usr/bin/python` to
> `#!/usr/bin/env python2`.
> We also need to modify the code where the python command is used to call
> Python scripts.
> 
> For example, `PYTHON GSYNCD` to `PYTHON2 GSYNCD`.
> 
> 
> regards
> Aravinda
> 
> On 01/18/2016 06:44 PM, Kaleb S. KEITHLEY wrote:
> > Python 3.5 is approved for Fedora 24 [1], which is scheduled to ship in
> > May [2].
> >
> > We have several places in the source where Python 2 is an explicit
> > requirement.
> >
> > Do we want to rethink the hard requirement for Python 2?  I suspect we
> > should.
> >
> > [1] https://fedoraproject.org/wiki/Releases/24/ChangeSet
> >
> > [2] https://fedoraproject.org/wiki/Releases/24/Schedule
> >
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Saravanakumar Arumugam

Hi Ravi,
I can run this locally; it seems spurious.
Let me know if you see it again.

Please correct my email id, you are addressing another Saravana :)

Thanks,
Saravana

On 01/19/2016 09:42 AM, Venky Shankar wrote:



Ravishankar N wrote:

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull 



Not sure if it is spurious as it passed in the subsequent run. Please
have a look.


Might be too soon to check the presence of the changelog file after 
enabling changelog.



Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Venky Shankar



Saravanakumar Arumugam wrote:

Hi Ravi,
I can run this locally; it seems spurious.
Let me know if you see it again.


To be safe, it would help to have a sleep(1) before the check.



Please correct my email id, you are addressing another Saravana :)

Thanks,
Saravana

On 01/19/2016 09:42 AM, Venky Shankar wrote:



Ravishankar N wrote:

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull


Not sure if it is spurious as it passed in the subsequent run. Please
have a look.


Might be too soon to check the presence of the changelog file after
enabling changelog.


Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel





Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Ravishankar N

On 01/19/2016 12:01 PM, Venky Shankar wrote:



Saravanakumar Arumugam wrote:

Hi Ravi,
I can run this locally; it seems spurious.


Works fine on Linux for me too. Maybe it's observed more easily on BSD.


Let me know if you see it again.


To be safe, it would help to have a sleep(1) before the check.


yes, or " EXPECT_WITHIN  $SOME_NEW_TIMEOUT  1 count_changelog_files 
$B0/${V0}1"




Please correct my email id, you are addressing another Saravana :)


Sorry, noted. :)


Thanks,
Saravana

On 01/19/2016 09:42 AM, Venky Shankar wrote:



Ravishankar N wrote:

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD 
run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull 




Not sure if it is spurious as it passed in the subsequent run. Please
have a look.


Might be too soon to check the presence of the changelog file after
enabling changelog.


Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel





Venky



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Saravanakumar Arumugam

Hi,

Yes, I can easily add an additional 2-second sleep.
But the problem here is: irrespective of anything (especially these
timing changes), it should not create any additional changelog.

It is a test volume created for test purposes where no I/O is carried out.

So we may end up putting a band-aid on some other problem.
I will look into this if we see the issue again, as it seems spurious.

Please let me know if there is any issue.

PS:
The issue is actually on CentOS, not on NetBSD.

Thanks,
Saravana


On 01/19/2016 12:08 PM, Ravishankar N wrote:

On 01/19/2016 12:01 PM, Venky Shankar wrote:



Saravanakumar Arumugam wrote:

Hi Ravi,
I can run this locally; it seems spurious.


Works fine on Linux for me too. Maybe it's observed more easily on BSD.


Let me know if you see it again.


To be safe, it would help to have a sleep(1) before the check.


yes, or " EXPECT_WITHIN  $SOME_NEW_TIMEOUT  1 count_changelog_files 
$B0/${V0}1"




Please correct my email id, you are addressing another Saravana :)


Sorry, noted. :)


Thanks,
Saravana

On 01/19/2016 09:42 AM, Venky Shankar wrote:



Ravishankar N wrote:

Hi Saravana,
./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD 
run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull 




Not sure if it is spurious as it passed in the subsequent run. Please
have a look.


Might be too soon to check the presence of the changelog file after
enabling changelog.


Thanks,
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel





Venky





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel