Re: [Gluster-devel] Regular Performance Regression Testing

2016-08-29 Thread Sankarshan Mukhopadhyay
On Mon, Aug 29, 2016 at 9:16 PM, Vijay Bellur  wrote:
> I would also recommend running perf-test.sh [1] for regression.
>

Would it be useful to have this script maintained as part of the
Gluster organization? Improvements/changes could perhaps be more
easily tracked.

> [1] https://github.com/avati/perf-test/blob/master/perf-test.sh




-- 
sankarshan mukhopadhyay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gerrit Access Control

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 09:18:05PM +0530, Pranith Kumar Karampuri wrote:
> On Mon, Aug 29, 2016 at 12:25 PM, Nigel Babu  wrote:
> 
> > Hello folks,
> >
> > We have not pruned our Gerrit maintainers list ever as far as I can see.
> > We've
> > only added people. For security reasons, I'd like to propose that we do the
> > following:
> >
> > If you do not have a commit in the last 90 days, your membership in the
> > gluster-maintainers team on Gerrit will be revoked. This means you won't
> > have permission to merge patches. This does not mean you're no longer a
> > maintainer. This is only a security measure. To gain access again, all
> > you have to do is file a bug against gluster-infra and I'll grant you
> > access immediately.
> >
> 
> Just need a clarification. Does a "commit in the last 90 days" mean a
> maintainer merging a patch sent by someone else, or a maintainer sending a
> patch to be merged?

Interesting question. I was wondering about something similar as well.
What about commits/permissions for the different repositories we host on
Gerrit? Does each repository have its own maintainers, or is it one group
of maintainers that has merge permissions for all repos?
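
For reference, per-repository merge rights in Gerrit are normally expressed
in each project's project.config; a rough sketch of what a per-repo
maintainer group could look like (the group name here is illustrative, not
our actual setup):

[access "refs/heads/*"]
    # Only members of this group may submit (merge) changes on this repo.
    # "glusterfs-maintainers" is an illustrative group name.
    submit = group glusterfs-maintainers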

Niels

> 
> 
> >
> > When I remove someone's access, I'll send an individual email about it.
> > Again, your membership in gluster-maintainers has no bearing on your
> > maintainer status. This is only for security reasons.
> >
> > Thoughts on implementing this policy?
> >
> > --
> > nigelb
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 
> 
> 
> -- 
> Pranith

> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regular Performance Testing

2016-08-29 Thread Shyam

On 08/29/2016 08:25 AM, Nigel Babu wrote:

On Mon, Aug 29, 2016 at 01:49:52PM +0200, Niels de Vos wrote:

On Mon, Aug 29, 2016 at 05:01:18PM +0530, Nigel Babu wrote:

Hello folks,

I've had chats with Manoj and Ambarish about performance testing and what we
can do upstream. Niels today solved half my problem by pointing out that we can
get physical nodes on CentOS CI. The general idea is to run iozone[1] and
smallfile[2] on a fixed frequency for master (to begin with).

Does this sound like a good idea? If so, read on.

For this to happen, a few things need to be done:
* I'll need some help from a few people who can read the reports and coordinate
  fixes. That is, someone needs to "own" performance for upstream.
* I need some help in generating the right reports so we can figure out if our
  performance went up or down.


I volunteer for both of the above.



The provisioning in the CentOS CI does not allow us to select certain
systems (yet). So you would get different performance results, depending
on the hardware that the reservation request returns:
  https://wiki.centos.org/QaWiki/PubHardware

Also, these physical machines do not have additional disks. The single
SSD that these systems have is completely used by the installation; there is
no free space to partition to our liking and no additional disks available.

I welcome any additional testing that we can run regularly, but to call
it 'performance testing' might be a little premature. At least the
performance results should be marked as 'unoptimized' or similar.

HTH,
Niels



The goal of this testing, to begin with, wouldn't be to get absolute numbers
but to try to catch decreases in performance, if that makes sense. In essence,
it's regression testing but for performance.

Thank you for raising the fact that it may be inconsistent. I'll talk to the
CentOS CI folks and see what the best way forward is for us before we get there.
But let's work with the assumption that I'll sort out the infra side of things.


I had the same concern as Niels, but as long as we can sort out the
infra side of things (which we can do in parallel to building this up), I
see that this would be valuable.

[1]: http://www.iozone.org/
[2]: https://github.com/bengland2/smallfile

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gerrit Access Control

2016-08-29 Thread Pranith Kumar Karampuri
On Mon, Aug 29, 2016 at 12:25 PM, Nigel Babu  wrote:

> Hello folks,
>
> We have not pruned our Gerrit maintainers list ever as far as I can see.
> We've
> only added people. For security reasons, I'd like to propose that we do the
> following:
>
> If you do not have a commit in the last 90 days, your membership in the
> gluster-maintainers team on Gerrit will be revoked. This means you won't
> have permission to merge patches. This does not mean you're no longer a
> maintainer. This is only a security measure. To gain access again, all
> you have to do is file a bug against gluster-infra and I'll grant you
> access immediately.
>

Just need a clarification. Does a "commit in the last 90 days" mean a
maintainer merging a patch sent by someone else, or a maintainer sending a
patch to be merged?


>
> When I remove someone's access, I'll send an individual email about it.
> Again, your membership in gluster-maintainers has no bearing on your
> maintainer status. This is only for security reasons.
>
> Thoughts on implementing this policy?
>
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regular Performance Regression Testing

2016-08-29 Thread Vijay Bellur
On Mon, Aug 29, 2016 at 8:42 AM, Niels de Vos  wrote:
> On Mon, Aug 29, 2016 at 05:55:00PM +0530, Nigel Babu wrote:
>> On Mon, Aug 29, 2016 at 01:49:52PM +0200, Niels de Vos wrote:
>> > On Mon, Aug 29, 2016 at 05:01:18PM +0530, Nigel Babu wrote:
>> > > Hello folks,
>> > >
>> > > I've had chats with Manoj and Ambarish about performance testing and 
>> > > what we
>> > > can do upstream. Niels today solved half my problem by pointing out that 
>> > > we can
>> > > get physical nodes on CentOS CI. The general idea is to run iozone[1] and
>> > > smallfile[2] on a fixed frequency for master (to begin with).
>> > >
>> > > Does this sound like a good idea? If so, read on.
>> > >
>> > > For this to happen, a few things need to be done:
>> > > * I'll need some help from a few people who can read the reports and 
>> > > coordinate
>> > >   fixes. That is, someone needs to "own" performance for upstream.
>> > > * I need some help in generating the right reports so we can figure out 
>> > > if our
>> > >   performance went up or down.
>> >
>> > The provisioning in the CentOS CI does not allow us to select certain
>> > systems (yet). So you would get different performance results, depending
>> > on the hardware that the reservation request returns:
>> >   https://wiki.centos.org/QaWiki/PubHardware
>> >
>> > Also, these physical machines do not have additional disks. The single
>> > SSD that these systems have is completely used by the installation; there
>> > is no free space to partition to our liking and no additional disks available.
>> >
>> > I welcome any additional testing that we can run regularly, but to call
>> > it 'performance testing' might be a little premature. At least the
>> > performance results should be marked as 'unoptimized' or similar.
>> >
>> > HTH,
>> > Niels
>> >
>>
>> The goal of this testing, to begin with, wouldn't be to get absolute numbers
>> but to try to catch decreases in performance, if that makes sense. In
>> essence, it's regression testing but for performance.
>
> Ah, ok, changing the subject to reflect that.
>
>> Thank you for raising the fact that it may be inconsistent. I'll talk to the
>> CentOS CI folks and see what the best way forward is for us before we get
>> there. But let's work with the assumption that I'll sort out the infra side
>> of things.
>
> There has been a request from others already to be able to select the
> blade-chassis when reserving machines. This would go a long way toward
> getting comparable results. Otherwise we can compare the results based on
> the sub-domain (per chassis) where the tests were running (more difficult
> when multiple machines/chassis are involved).
>


+1 to running performance regression tests on identical hardware.

I would also recommend running perf-test.sh [1] for regression.

Thanks,
Vijay

[1] https://github.com/avati/perf-test/blob/master/perf-test.sh
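
For anyone who wants to try it locally first, a rough sketch of running the
script against a mounted test volume (the volume name, mount point, and
invocation style below are assumptions; check the script itself for its
actual interface):

# Hedged sketch; names and arguments are illustrative, not verified.
git clone https://github.com/avati/perf-test.git ~/perf-test
mount -t glusterfs server.example.com:/testvol /mnt/perf
cd /mnt/perf && bash ~/perf-test/perf-test.sh | tee /var/log/perf-test-$(date +%F).log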
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] release checklist for 3.9.0

2016-08-29 Thread Pranith Kumar Karampuri
hi,
   Could we have a release checklist for the components? Please add the
steps that need to be done before the release is made at this link:
https://public.pad.fsfe.org/p/gluster-component-release-checklist. This
activity needs to be completed by 2nd September. Please also note whether
the tests are automated. We also want to use this to evolve a complete
automation suite that needs to be run before a release goes out. This is
the first step in that direction.

I added the list from the MAINTAINERS file. Please add anything I missed. If
the maintainer information is outdated, please send a mail to maintain...@gluster.org

On behalf of
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] counters in tiering / request for comments

2016-08-29 Thread Dan Lambright
Below is a write-up on tiering counters (bz 1275917). I give three options;
I think options (1) and (3) are doable, while (2) is harder and would need
more discussion.

Currently counters give limited information on tiering behavior. They are just
a raw count of the number of files moved in each direction. The overall feature
is much less usable as a result.

Generally counters should work with future tiering use cases, i.e. tier 
according to location or some other policy.

$ gluster volume tier vol1 status
Node          Promoted files   Demoted files   Status
------------  ---------------  --------------  ------------
localhost     20               30              in progress
172.17.60.18  0                0               in progress
172.17.60.19  0                0               in progress
172.17.60.20  0                0               in progress

(1)

Customers want to know the total number of files / MB on a tier at any one 
time. I propose we query the database on the bricks for each tier, to get a 
count of the number of files. 

$ gluster volume tier vol1 status
Node          Promoted files / hot count   Demoted files / cold count   Status
------------  ---------------------------  ---------------------------  ------------
localhost     20 / 500                     30 / 2000                    in progress
172.17.60.18  0                            0                            in progress
172.17.60.19  0                            0                            in progress
172.17.60.20  0                            0                            in progress
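
A hedged sketch of how the per-brick counts could be gathered, assuming the
tier database is the sqlite gfdb that tiering already maintains on each brick
(the database path and table name below are illustrative, not verified):

# Hedged sketch: count the files recorded in one brick's tier DB.
DB=/bricks/brick1/.glusterfs/vol1.db   # illustrative path
sqlite3 "$DB" 'SELECT COUNT(*) FROM gf_file_tb;'   # illustrative table name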

(2)

People need to know the ratio of I/Os served by the hot tier to the cold tier.
For an administrator, if 90% of your I/Os go to the hot tier, this is good. If
only 20% are served by the hot tier, this is bad and likely indicates a
misconfiguration.

Something like this is what we want:

$ gluster volume tier vol1 status
Node       Promoted files   Demoted files   Read Hit rate   Write Hit Rate   Status
---------  ---------------  --------------  --------------  ---------------  ------------
localhost  0                0               80%             75%              in progress

The difficulty is how to capture that. When we read a large file, it is broken
up into multiple individual reads, each of which is a single read FOP. Should
we consider each FOP individually? Or does only the first "hit" to the hot
tier count?

Also, when an FOP comes in, it will first look on one tier, and then on the
other tier. The callback to the FOP checks success or failure. Only when the
file is found on neither subvolume does the FOP return an error. New code
needs to deal with this complexity. If there is a failure on the cold tier but
success on the hot tier, the "hit count" should be bumped.

We probably do not want to update the "hit rate" on all FOPs. 
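
Whatever counting rule we settle on, the derived metric itself is simple; a
sketch of the computation, assuming we export two raw per-node counters
(names and values are illustrative):

# Hedged sketch: derive the read hit rate from two sampled counters.
hot_reads=80; cold_reads=20   # illustrative sampled values
echo "read hit rate: $(( 100 * hot_reads / (hot_reads + cold_reads) ))%"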

(3)

A simpler new counter to implement is the number of MB promoted or demoted. I
think that could be satisfied in a separate patch and could be done more quickly.

The output with (2) and (3) combined would look like:

$ gluster volume tier vol1 status
Node       Promoted files/MB   Demoted files/MB   Read Hit rate   Write Hit Rate   Status
---------  ------------------  -----------------  --------------  ---------------  ------------
localhost  120/2033MB          50/1044MB          80%             75%              in progress
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Fwd: [Gluster-infra] Reboot of jenkins/gerrit for upgrade and snapshot]

2016-08-29 Thread Michael Scherer
Forwarding, because the coffee hasn't reached my brain yet

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS


--- Begin Message ---
Hi,

so since the release is on the 30th, I was wondering if people would be ok
with a small (less than 1h) downtime on the 1st of September for an upgrade
and a snapshot of the disks of gerrit and jenkins.

Any objections?
(and a preferred time?)
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-infra mailing list
gluster-in...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra
--- End Message ---


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regular Performance Testing

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 05:01:18PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> I've had chats with Manoj and Ambarish about performance testing and what we
> can do upstream. Niels today solved half my problem by pointing out that we 
> can
> get physical nodes on CentOS CI. The general idea is to run iozone[1] and
> smallfile[2] on a fixed frequency for master (to begin with).
> 
> Does this sound like a good idea? If so, read on.
> 
> For this to happen, a few things need to be done:
> * I'll need some help from a few people who can read the reports and 
> coordinate
>   fixes. That is, someone needs to "own" performance for upstream.
> * I need some help in generating the right reports so we can figure out if our
>   performance went up or down.

The provisioning in the CentOS CI does not allow us to select certain
systems (yet). So you would get different performance results, depending
on the hardware that the reservation request returns:
  https://wiki.centos.org/QaWiki/PubHardware

Also, these physical machines do not have additional disks. The single
SSD that these systems have is completely used by the installation; there is
no free space to partition to our liking and no additional disks available.

I welcome any additional testing that we can run regularly, but to call
it 'performance testing' might be a little premature. At least the
performance results should be marked as 'unoptimized' or similar.

HTH,
Niels

> 
> [1]: http://www.iozone.org/
> [2]: https://github.com/bengland2/smallfile
> 
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Regular Performance Testing

2016-08-29 Thread Nigel Babu
Hello folks,

I've had chats with Manoj and Ambarish about performance testing and what we
can do upstream. Niels today solved half my problem by pointing out that we can
get physical nodes on CentOS CI. The general idea is to run iozone[1] and
smallfile[2] on a fixed frequency for master (to begin with).

Does this sound like a good idea? If so, read on.

For this to happen, a few things need to be done:
* I'll need some help from a few people who can read the reports and coordinate
  fixes. That is, someone needs to "own" performance for upstream.
* I need some help in generating the right reports so we can figure out if our
  performance went up or down.

[1]: http://www.iozone.org/
[2]: https://github.com/bengland2/smallfile
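
To make the idea concrete, a hedged sketch of what one fixed-frequency run
could look like (volume name, mount point, and tool options are illustrative;
the smallfile flags follow its README and should be double-checked):

# Hedged sketch of a scheduled run; names and options are illustrative.
MNT=/mnt/perf-testvol
mount -t glusterfs server.example.com:/testvol $MNT

# iozone in automatic mode, keeping an Excel-style report per run
cd $MNT && iozone -a -R -b /var/log/iozone-$(date +%F).xls

# smallfile metadata-intensive workload
python ~/smallfile/smallfile_cli.py --top $MNT/smf \
    --threads 8 --files 1000 --file-size 64 --operation create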

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Profiling GlusterFS FUSE client with Valgrind's Massif tool

2016-08-29 Thread Oleksandr Natalenko

More info here.

Massif puts the following warning on volume unmount:

===
valgrind: m_mallocfree.c:304 (get_bszB_as_is): Assertion 'bszB_lo == 
bszB_hi' failed.

valgrind: Heap block lo/hi size mismatch: lo = 1, hi = 0.
This is probably caused by your program erroneously writing past the
end of a heap block and corrupting heap metadata.  If you fix any
invalid writes reported by Memcheck, this assertion failure will
probably go away.  Please try that before reporting this as a bug.
...
Thread 1: status = VgTs_Runnable
==30590==at 0x4C29037: free (in 
/usr/lib64/valgrind/vgpreload_massif-amd64-linux.so)

==30590==by 0x67CE63B: __libc_freeres (in /usr/lib64/libc-2.17.so)
==30590==by 0x4A246B4: _vgnU_freeres (in 
/usr/lib64/valgrind/vgpreload_core-amd64-linux.so)
==30590==by 0x66A2E2A: __run_exit_handlers (in 
/usr/lib64/libc-2.17.so)

==30590==by 0x66A2EB4: exit (in /usr/lib64/libc-2.17.so)
==30590==by 0x1117E9: cleanup_and_exit (glusterfsd.c:1308)
==30590==by 0x669F66F: ??? (in /usr/lib64/libc-2.17.so)
==30590==by 0x606EEF4: pthread_join (in 
/usr/lib64/libpthread-2.17.so)

==30590==by 0x4EC2687: event_dispatch_epoll (event-epoll.c:762)
==30590==by 0x10E876: main (glusterfsd.c:2370)
...
===

I rechecked mount/ls/unmount with the memcheck tool as suggested and got the
following:


===
...
==30315== Thread 8:
==30315== Syscall param writev(vector[...]) points to uninitialised 
byte(s)

==30315==at 0x675FEA0: writev (in /usr/lib64/libc-2.17.so)
==30315==by 0xE664795: send_fuse_iov (fuse-bridge.c:158)
==30315==by 0xE6649B9: send_fuse_data (fuse-bridge.c:197)
==30315==by 0xE666F7A: fuse_attr_cbk (fuse-bridge.c:753)
==30315==by 0xE6671A6: fuse_root_lookup_cbk (fuse-bridge.c:783)
==30315==by 0x14519937: io_stats_lookup_cbk (io-stats.c:1512)
==30315==by 0x14300B3E: mdc_lookup_cbk (md-cache.c:867)
==30315==by 0x13EE9226: qr_lookup_cbk (quick-read.c:446)
==30315==by 0x13CD8B66: ioc_lookup_cbk (io-cache.c:260)
==30315==by 0x1346405D: dht_revalidate_cbk (dht-common.c:985)
==30315==by 0x1320EC60: afr_discover_done (afr-common.c:2316)
==30315==by 0x1320EC60: afr_discover_cbk (afr-common.c:2361)
==30315==by 0x12F9EE91: client3_3_lookup_cbk 
(client-rpc-fops.c:2981)

==30315==  Address 0x170b238c is on thread 8's stack
==30315==  in frame #3, created by fuse_attr_cbk (fuse-bridge.c:723)
...
==30315== Warning: invalid file descriptor -1 in syscall close()
==30315== Thread 1:
==30315== Invalid free() / delete / delete[] / realloc()
==30315==at 0x4C2AD17: free (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)

==30315==by 0x67D663B: __libc_freeres (in /usr/lib64/libc-2.17.so)
==30315==by 0x4A246B4: _vgnU_freeres (in 
/usr/lib64/valgrind/vgpreload_core-amd64-linux.so)
==30315==by 0x66AAE2A: __run_exit_handlers (in 
/usr/lib64/libc-2.17.so)

==30315==by 0x66AAEB4: exit (in /usr/lib64/libc-2.17.so)
==30315==by 0x1117E9: cleanup_and_exit (glusterfsd.c:1308)
==30315==by 0x66A766F: ??? (in /usr/lib64/libc-2.17.so)
==30315==by 0x6076EF4: pthread_join (in 
/usr/lib64/libpthread-2.17.so)

==30315==by 0x4ECA687: event_dispatch_epoll (event-epoll.c:762)
==30315==by 0x10E876: main (glusterfsd.c:2370)
==30315==  Address 0x6a2d3d0 is 0 bytes inside data symbol 
"noai6ai_cached"

===

It seems Massif crashes (?) because of an invalid memory access during the
glusterfs process cleanup stage.
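
Both backtraces point at __libc_freeres, which Valgrind itself invokes at
exit, so one workaround I may try is disabling that call with the standard
Valgrind option --run-libc-freeres=no:

===
valgrind --tool=massif --run-libc-freeres=no --trace-children=yes \
/usr/sbin/glusterfs -N --volfile-server=server.example.com \
--volfile-id=test /mnt/net/glusterfs/test
===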


Pranith? Nithya?

29.08.2016 13:14, Oleksandr Natalenko wrote:

===
valgrind --tool=massif --trace-children=yes /usr/sbin/glusterfs -N
--volfile-server=server.example.com --volfile-id=test
/mnt/net/glusterfs/test
===

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Profiling GlusterFS FUSE client with Valgrind's Massif tool

2016-08-29 Thread Oleksandr Natalenko

Hello.

While chasing huge memory consumption by the FUSE client [1], Pranith
suggested that I use the Massif tool to find the cause of the leak.


Unfortunately, it does not work properly for me, and I believe I am doing
something wrong.


Instead of generating a report after unmounting the volume or SIGTERMing the
glusterfs process, Valgrind generates 2 reports (for 2 PIDs) right after
launch and does not update them further, even on exit. I believe that is
because something is going on with forking, but I cannot figure out what is
going wrong.


The command I use to launch GlusterFS via Valgrind+Massif:

===
valgrind --tool=massif --trace-children=yes /usr/sbin/glusterfs -N 
--volfile-server=server.example.com --volfile-id=test 
/mnt/net/glusterfs/test

===
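
One thing I plan to try is pointing the per-PID output files somewhere easy
to inspect, since with --trace-children=yes each forked process gets its own
report (%p is a standard Valgrind format specifier), and then matching the
files against the PID of the long-running client:

===
valgrind --tool=massif --trace-children=yes \
--massif-out-file=/tmp/massif.out.%p /usr/sbin/glusterfs -N \
--volfile-server=server.example.com --volfile-id=test \
/mnt/net/glusterfs/test

ps -o pid,cmd -C glusterfs   # match the client PID to /tmp/massif.out.<pid>
===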

Any ideas or sample usecases for Massif+GlusterFS?

Thanks.

Regards,
  Oleksandr

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1369364
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 12:48:46AM -0400, Poornima Gurusiddaiah wrote:
> Hi, 
> 
> Updated inline. 
> 
> - Original Message -
...
> > > > 3) Md-cache perf improvements in smb:
> > > 
> > 
> > > > Feature owner: Poornima
> > > 
> > 
> 
> Feature undergoing review. It will be in tech-preview for this
> release. Main feature will be merged by 31st August 2016. 

Note that "Tech Preview" is something Red Hat products use. Features
that we include in community releases can be marked as 'stable and
production ready' (the default) or 'experimental'.

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 02:45:01AM -0400, Prashanth Pai wrote:
> 
> 
>  -Prashanth Pai
> 
> - Original Message -
> > From: "Soumya Koduri" 
> > To: "Pranith Kumar Karampuri" , "Rajesh Joseph" 
> > , "Manikandan Selvaganesh"
> > , "Csaba Henk" , "Niels de Vos" 
> > , "Jiffin Thottan"
> > , "Aravinda Vishwanathapura Krishna Murthy" 
> > , "Anoop Chirayath Manjiyil
> > Sajan" , "Ravishankar Narayanankutty" 
> > , "Kaushal Madappa"
> > , "Raghavendra Talur" , "Poornima 
> > Gurusiddaiah" ,
> > "Kaleb Keithley" , "Jose Rivera" , 
> > "Prashanth Pai" ,
> > "Samikshan Bairagya" , "Vijay Bellur" 
> > 
> > Cc: "Gluster Devel" 
> > Sent: Monday, 29 August, 2016 12:10:02 PM
> > Subject: Re: 3.9. feature freeze status check
> > 
> > 
> > 
> > On 08/26/2016 09:38 PM, Pranith Kumar Karampuri wrote:
> > > hi,
> > >   Now that we are almost near the feature freeze date (31st of Aug),
> > > want to get a sense of the status of the features.
> > >
> > > Please respond with:
> > > 1) Feature already merged
> > > 2) Undergoing review will make it by 31st Aug
> > > 3) Undergoing review, but may not make it by 31st Aug
> > > 4) Feature won't make it for 3.9.
> > >
> > > At the end of this mail I added the features that were not planned
> > > (i.e. not on the 3.9 roadmap page) but made it to the release, and
> > > those that were not planned but may make it to the release.
> > > If you added a feature on master that will be released as part of 3.9.0
> > > but forgot to add it to the roadmap page, please let me know and I will
> > > add it.
> > >
> > > Here are the features planned as per the roadmap:
> > > 1) Throttling
> > > Feature owner: Ravishankar
> > >
> > > 2) Trash improvements
> > > Feature owners: Anoop, Jiffin
> > >
> > > 3) Kerberos for Gluster protocols:
> > > Feature owners: Niels, Csaba
> > >
> > > 4) SELinux on gluster volumes:
> > > Feature owners: Niels, Manikandan
> > >
> > > 5) Native sub-directory mounts:
> > > Feature owners: Kaushal, Pranith
> > >
> > > 6) RichACL support for GlusterFS:
> > > Feature owners: Rajesh Joseph
> > >
> > > 7) Sharemodes/Share reservations:
> > > Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
> > > Joseph, Anoop C S
> > >
> > > 8) Integrate with external resource management software
> > > Feature owners: Kaleb Keithley, Jose Rivera
> > >
> > > 9) Python Wrappers for Gluster CLI Commands
> > > Feature owners: Aravinda VK
> > >
> > > 10) Package and ship libgfapi-python
> > > Feature owners: Prashant Pai
> 
This has been packaged on PyPI and is available for installation using pip[1].
But as per the discussion on this thread[2], the RPM packages won't be built as
part of the glusterfs release lifecycle and will continue to be external.

Also, external packages are part of the Gluster Community and can be
mentioned in the release notes. What is the status of the RPM packages
for Fedora, and possibly packages for other distributions?

Thanks,
Niels


> 
> [1]: https://pypi.python.org/pypi/gfapi
> [2]: http://nongnu.13855.n7.nabble.com/Packaging-libgfapi-python-td214308.html
> 
> > >
> > > 11) Management REST APIs
> > > Feature owners: Aravinda VK
> > >
> > > 12) Events APIs
> > > Feature owners: Aravinda VK
> > >
> > > 13) CLI to get state representation of a cluster from the local glusterd
> > > pov
> > > Feature owners: Samikshan Bairagya
> > >
> > > 14) Posix-locks Reclaim support
> > > Feature owners: Soumya Koduri
> > 
> > Sorry, this feature will not make it to 3.9. Hopefully it will get into
> > the next release.
> > 
> > >
> > > 15) Deprecate striped volumes
> > > Feature owners: Vijay Bellur, Niels de Vos
> > >
> > > 16) Improvements in Gluster NFS-Ganesha integration
> > > Feature owners: Jiffin Tony Thottan, Soumya Koduri
> > 
> > This one is already merged.
> > 
> > Thanks,
> > Soumya
> > 
> > >
> > > *The following need to be added to the roadmap:*
> > >
> > Features that made it to master already but were not planned:
> > 1) Multi threaded self-heal in EC
> > Feature owner: Pranith (Did this because Serkan asked for it. He has a 9PB
> > volume, and self-healing takes a long time :-/)
> >
> > 2) Lock revocation (Facebook patch)
> > Feature owner: Richard Wareing
> >
> > Features that look like they will make it to 3.9.0:
> > > 1) Hardware extension support for EC
> > > Feature owner: Xavi
> > >
> > > 2) Reset brick support for replica volumes:
> > > Feature owner: Anuradha
> > >
> > > 3) Md-cache perf improvements in smb:
> > > Feature owner: Poornima
> > >
> > > --
> > > Pranith
> > 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Samikshan Bairagya



On 08/26/2016 09:38 PM, Pranith Kumar Karampuri wrote:

hi,
  Now that we are almost near the feature freeze date (31st of Aug),
want to get a sense of the status of the features.

Please respond with:
1) Feature already merged
2) Undergoing review will make it by 31st Aug
3) Undergoing review, but may not make it by 31st Aug
4) Feature won't make it for 3.9.

At the end of this mail I added the features that were not planned (i.e. not
on the 3.9 roadmap page) but made it to the release, and those that were not
planned but may make it to the release.
If you added a feature on master that will be released as part of 3.9.0 but
forgot to add it to the roadmap page, please let me know and I will add it.

Here are the features planned as per the roadmap:
1) Throttling
Feature owner: Ravishankar

2) Trash improvements
Feature owners: Anoop, Jiffin

3) Kerberos for Gluster protocols:
Feature owners: Niels, Csaba

4) SELinux on gluster volumes:
Feature owners: Niels, Manikandan

5) Native sub-directory mounts:
Feature owners: Kaushal, Pranith

6) RichACL support for GlusterFS:
Feature owners: Rajesh Joseph

7) Sharemodes/Share reservations:
Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
Joseph, Anoop C S

8) Integrate with external resource management software
Feature owners: Kaleb Keithley, Jose Rivera

9) Python Wrappers for Gluster CLI Commands
Feature owners: Aravinda VK

10) Package and ship libgfapi-python
Feature owners: Prashant Pai

11) Management REST APIs
Feature owners: Aravinda VK

12) Events APIs
Feature owners: Aravinda VK

13) CLI to get state representation of a cluster from the local glusterd pov
Feature owners: Samikshan Bairagya



This one has been merged.

Thanks.


14) Posix-locks Reclaim support
Feature owners: Soumya Koduri

15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri

*The following need to be added to the roadmap:*

Features that made it to master already but were not planned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because Serkan asked for it. He has a 9PB
volume, and self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like they will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Atin Mukherjee
On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:

> hi,
>   Now that we are almost near the feature freeze date (31st of Aug),
> want to get a sense of the status of the features.
>
> Please respond with:
> 1) Feature already merged
> 2) Undergoing review will make it by 31st Aug
> 3) Undergoing review, but may not make it by 31st Aug
> 4) Feature won't make it for 3.9.
>
> At the end of this mail I added the features that were not planned (i.e. not
> on the 3.9 roadmap page) but made it to the release, and those that were not
> planned but may make it to the release.
> If you added a feature on master that will be released as part of 3.9.0
> but forgot to add it to the roadmap page, please let me know and I will add it.
>
> Here are the features planned as per the roadmap:
> 1) Throttling
> Feature owner: Ravishankar
>
> 2) Trash improvements
> Feature owners: Anoop, Jiffin
>
> 3) Kerberos for Gluster protocols:
> Feature owners: Niels, Csaba
>
> 4) SELinux on gluster volumes:
> Feature owners: Niels, Manikandan
>
> 5) Native sub-directory mounts:
> Feature owners: Kaushal, Pranith
>
> 6) RichACL support for GlusterFS:
> Feature owners: Rajesh Joseph
>
> 7) Sharemodes/Share reservations:
> Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
> Joseph, Anoop C S
>
> 8) Integrate with external resource management software
> Feature owners: Kaleb Keithley, Jose Rivera
>
> 9) Python Wrappers for Gluster CLI Commands
> Feature owners: Aravinda VK
>
> 10) Package and ship libgfapi-python
> Feature owners: Prashant Pai
>
> 11) Management REST APIs
> Feature owners: Aravinda VK
>
> 12) Events APIs
> Feature owners: Aravinda VK
>
> 13) CLI to get state representation of a cluster from the local glusterd
> pov
> Feature owners: Samikshan Bairagya
>

This is in mainline now.


>
> 14) Posix-locks Reclaim support
> Feature owners: Soumya Koduri
>
> 15) Deprecate striped volumes
> Feature owners: Vijay Bellur, Niels de Vos
>
> 16) Improvements in Gluster NFS-Ganesha integration
> Feature owners: Jiffin Tony Thottan, Soumya Koduri
>
> *The following need to be added to the roadmap:*
>
> Features that made it to master already but were not planned:
> 1) Multi threaded self-heal in EC
> Feature owner: Pranith (Did this because Serkan asked for it. He has a 9PB
> volume, and self-healing takes a long time :-/)
>
> 2) Lock revocation (Facebook patch)
> Feature owner: Richard Wareing
>
> Features that look like they will make it to 3.9.0:
> 1) Hardware extension support for EC
> Feature owner: Xavi
>
> 2) Reset brick support for replica volumes:
> Feature owner: Anuradha
>
> 3) Md-cache perf improvements in smb:
> Feature owner: Poornima
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 

--Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gerrit Access Control

2016-08-29 Thread Atin Mukherjee
On Mon, Aug 29, 2016 at 12:25 PM, Nigel Babu  wrote:

> Hello folks,
>
> We have not pruned our Gerrit maintainers list ever as far as I can see.
> We've
> only added people. For security reasons, I'd like to propose that we do the
> following:
>
> If you do not have a commit in the last 90 days, your membership in the
> gluster-maintainers team on Gerrit will be revoked. This means you won't
> have permission to merge patches. This does not mean you're no longer a
> maintainer. This is only a security measure. To gain access again, all
> you have to do is file a bug against gluster-infra and I'll grant you
> access immediately.
>
> When I remove someone's access, I'll send an individual email about it.
> Again, your membership in gluster-maintainers has no bearing on your
> maintainer status. This is only for security reasons.
>
> Thoughts on implementing this policy?
>

I like this idea, +1


>
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 

--Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gerrit Access Control

2016-08-29 Thread Nigel Babu
Hello folks,

We have not pruned our Gerrit maintainers list ever as far as I can see. We've
only added people. For security reasons, I'd like to propose that we do the
following:

If you do not have a commit in the last 90 days, your membership in the
gluster-maintainers team on Gerrit will be revoked. This means you won't have
permission to merge patches. This does not mean you're no longer a maintainer.
This is only a security measure. To gain access again, all you have to do is
file a bug against gluster-infra and I'll grant you access immediately.

When I remove someone's access, I'll send an individual email about it. Again,
your membership in gluster-maintainers has no bearing on your maintainer status.
This is only for security reasons.

Thoughts on implementing this policy?
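
For reference, this is roughly how I'd script the check against Gerrit's
query interface (the host, port, and exact query syntax are assumptions to
verify against our Gerrit version):

# Hedged sketch: does $USER have any change merged in the last 90 days?
ssh -p 29418 review.gluster.org gerrit query \
    "owner:$USER status:merged -age:90d" limit:1 --format=JSON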

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Prashanth Pai


 -Prashanth Pai

- Original Message -
> From: "Soumya Koduri" 
> To: "Pranith Kumar Karampuri" , "Rajesh Joseph" 
> , "Manikandan Selvaganesh"
> , "Csaba Henk" , "Niels de Vos" 
> , "Jiffin Thottan"
> , "Aravinda Vishwanathapura Krishna Murthy" 
> , "Anoop Chirayath Manjiyil
> Sajan" , "Ravishankar Narayanankutty" 
> , "Kaushal Madappa"
> , "Raghavendra Talur" , "Poornima 
> Gurusiddaiah" ,
> "Kaleb Keithley" , "Jose Rivera" , 
> "Prashanth Pai" ,
> "Samikshan Bairagya" , "Vijay Bellur" 
> 
> Cc: "Gluster Devel" 
> Sent: Monday, 29 August, 2016 12:10:02 PM
> Subject: Re: 3.9. feature freeze status check
> 
> 
> 
> On 08/26/2016 09:38 PM, Pranith Kumar Karampuri wrote:
> > hi,
> >   Now that we are almost near the feature freeze date (31st of Aug),
> > want to get a sense of the status of the features.
> >
> > Please respond with:
> > 1) Feature already merged
> > 2) Undergoing review will make it by 31st Aug
> > 3) Undergoing review, but may not make it by 31st Aug
> > 4) Feature won't make it for 3.9.
> >
> > At the end of this mail I added the features that were not planned (i.e.
> > not on the 3.9 roadmap page) but made it to the release, and those that
> > were not planned but may make it to the release.
> > If you added a feature on master that will be released as part of 3.9.0
> > but forgot to add it to the roadmap page, please let me know and I will
> > add it.
> >
> > Here are the features planned as per the roadmap:
> > 1) Throttling
> > Feature owner: Ravishankar
> >
> > 2) Trash improvements
> > Feature owners: Anoop, Jiffin
> >
> > 3) Kerberos for Gluster protocols:
> > Feature owners: Niels, Csaba
> >
> > 4) SELinux on gluster volumes:
> > Feature owners: Niels, Manikandan
> >
> > 5) Native sub-directory mounts:
> > Feature owners: Kaushal, Pranith
> >
> > 6) RichACL support for GlusterFS:
> > Feature owners: Rajesh Joseph
> >
> > 7) Sharemodes/Share reservations:
> > Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
> > Joseph, Anoop C S
> >
> > 8) Integrate with external resource management software
> > Feature owners: Kaleb Keithley, Jose Rivera
> >
> > 9) Python Wrappers for Gluster CLI Commands
> > Feature owners: Aravinda VK
> >
> > 10) Package and ship libgfapi-python
> > Feature owners: Prashant Pai

This has been packaged on PyPI and is available for installation using pip[1].
But as per the discussion on this thread[2], the RPM packages won't be built as
part of the glusterfs release lifecycle and will continue to be external.

[1]: https://pypi.python.org/pypi/gfapi
[2]: http://nongnu.13855.n7.nabble.com/Packaging-libgfapi-python-td214308.html

> >
> > 11) Management REST APIs
> > Feature owners: Aravinda VK
> >
> > 12) Events APIs
> > Feature owners: Aravinda VK
> >
> > 13) CLI to get state representation of a cluster from the local glusterd
> > pov
> > Feature owners: Samikshan Bairagya
> >
> > 14) Posix-locks Reclaim support
> > Feature owners: Soumya Koduri
> 
> Sorry, this feature will not make it to 3.9. Hopefully it will get into
> the next release.
> 
> >
> > 15) Deprecate striped volumes
> > Feature owners: Vijay Bellur, Niels de Vos
> >
> > 16) Improvements in Gluster NFS-Ganesha integration
> > Feature owners: Jiffin Tony Thottan, Soumya Koduri
> 
> This one is already merged.
> 
> Thanks,
> Soumya
> 
> >
> > *The following need to be added to the roadmap:*
> >
> > Features that made it to master already but were not planned:
> > 1) Multi threaded self-heal in EC
> > Feature owner: Pranith (Did this because Serkan asked for it. He has a 9PB
> > volume, and self-healing takes a long time :-/)
> >
> > 2) Lock revocation (Facebook patch)
> > Feature owner: Richard Wareing
> >
> > Features that look like they will make it to 3.9.0:
> > 1) Hardware extension support for EC
> > Feature owner: Xavi
> >
> > 2) Reset brick support for replica volumes:
> > Feature owner: Anuradha
> >
> > 3) Md-cache perf improvements in smb:
> > Feature owner: Poornima
> >
> > --
> > Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] CFP for Gluster Developer Summit

2016-08-29 Thread Raghavendra G
Though it's a bit late, here is one from me:

Topic: "DHT: current design, (dis)advantages, challenges - A perspective"

Agenda:

I'll try to address
* the whys and (dis)advantages of the current design. As noted in the title,
this is my own perspective, gathered while working on DHT. We don't have any
existing documentation for the motivations; the sources have been bugs (a huge
number of them :)), interactions with other people working on DHT, and code
reading.
* Ongoing work and a rough roadmap of what we'll be working on for at least
the next few months.
* Going by the objectives of this talk, it may well turn out to be a
discussion.

regards,

On Wed, Aug 24, 2016 at 8:27 PM, Arthy Loganathan wrote:

>
>
> On 08/24/2016 07:18 PM, Atin Mukherjee wrote:
>
>
>
> On Wed, Aug 24, 2016 at 5:43 PM, Arthy Loganathan <aloga...@redhat.com> wrote:
>
>> Hi,
>>
>> I would like to propose below topic as a lightening talk.
>>
>> Title: Data Logging to monitor Gluster Performance
>>
>> Theme: Process and Infrastructure
>>
>> To benchmark any software product, we often need to do performance
>> analysis of the system along with the product. I have written a tool,
>> "System Monitor", that periodically collects data such as CPU usage,
>> memory usage, and load average (with graphical representation) for any
>> process on a system. The collected data can help in analyzing system and
>> product performance.
>>
>
> A link to this project would definitely help here.
>
>
> Hi Atin,
>
> Here is the link to the project - https://github.com/aloganat/system_monitor
>
> Thanks & Regards,
> Arthy
>
>
>
>>
>> From this talk I would like to give an overview of this tool and explain
>> how it can be used to monitor Gluster performance.
>>
>> Agenda:
>>   - Overview of the tool and its usage
>>   - Collecting the data in an excel sheet at regular intervals of time
>>   - Plotting the graph with that data (in progress)
>>   - a short demo
>>
>> Thanks & Regards,
>>
>> Arthy
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
>
> --Atin
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Ravishankar N

On 08/26/2016 09:39 PM, Pranith Kumar Karampuri wrote:



On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri wrote:


hi,
  Now that we are almost near the feature freeze date (31st of
Aug), want to get a sense if any of the status of the features.


I meant "want to get a sense of the status of the features"


Please respond with:
1) Feature already merged
2) Undergoing review will make it by 31st Aug
3) Undergoing review, but may not make it by 31st Aug
4) Feature won't make it for 3.9.

At the end of this mail I added the features that were not planned
(i.e. not on the 3.9 roadmap page) but made it to the release, and
those that were not planned but may make it to the release.
If you added a feature on master that will be released as part of
3.9.0 but forgot to add it to the roadmap page, please let me know
and I will add it.

Here are the features planned as per the roadmap:
1) Throttling
Feature owner: Ravishankar



Sorry, this won't make it to 3.9. I'm working on the patch and hope to 
get it ready for the next release.

Thanks,
Ravi



2) Trash improvements
Feature owners: Anoop, Jiffin

3) Kerberos for Gluster protocols:
Feature owners: Niels, Csaba

4) SELinux on gluster volumes:
Feature owners: Niels, Manikandan

5) Native sub-directory mounts:
Feature owners: Kaushal, Pranith

6) RichACL support for GlusterFS:
Feature owners: Rajesh Joseph

7) Sharemodes/Share reservations:
Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri,
Rajesh Joseph, Anoop C S

8) Integrate with external resource management software
Feature owners: Kaleb Keithley, Jose Rivera

9) Python Wrappers for Gluster CLI Commands
Feature owners: Aravinda VK

10) Package and ship libgfapi-python
Feature owners: Prashant Pai

11) Management REST APIs
Feature owners: Aravinda VK

12) Events APIs
Feature owners: Aravinda VK

13) CLI to get state representation of a cluster from the local
glusterd pov
Feature owners: Samikshan Bairagya

14) Posix-locks Reclaim support
Feature owners: Soumya Koduri

15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri

*The following need to be added to the roadmap:*

Features that made it to master already but were not planned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because Serkan asked for it. He
has a 9PB volume, and self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like they will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima

-- 
Pranith





--
Pranith



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Soumya Koduri



On 08/26/2016 09:38 PM, Pranith Kumar Karampuri wrote:

hi,
  Now that we are almost near the feature freeze date (31st of Aug),
want to get a sense of the status of the features.

Please respond with:
1) Feature already merged
2) Undergoing review will make it by 31st Aug
3) Undergoing review, but may not make it by 31st Aug
4) Feature won't make it for 3.9.

At the end of this mail I added the features that were not planned (i.e. not
on the 3.9 roadmap page) but made it to the release, and those that were not
planned but may make it to the release.
If you added a feature on master that will be released as part of 3.9.0 but
forgot to add it to the roadmap page, please let me know and I will add it.

Here are the features planned as per the roadmap:
1) Throttling
Feature owner: Ravishankar

2) Trash improvements
Feature owners: Anoop, Jiffin

3) Kerberos for Gluster protocols:
Feature owners: Niels, Csaba

4) SELinux on gluster volumes:
Feature owners: Niels, Manikandan

5) Native sub-directory mounts:
Feature owners: Kaushal, Pranith

6) RichACL support for GlusterFS:
Feature owners: Rajesh Joseph

7) Sharemodes/Share reservations:
Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
Joseph, Anoop C S

8) Integrate with external resource management software
Feature owners: Kaleb Keithley, Jose Rivera

9) Python Wrappers for Gluster CLI Commands
Feature owners: Aravinda VK

10) Package and ship libgfapi-python
Feature owners: Prashant Pai

11) Management REST APIs
Feature owners: Aravinda VK

12) Events APIs
Feature owners: Aravinda VK

13) CLI to get state representation of a cluster from the local glusterd pov
Feature owners: Samikshan Bairagya

14) Posix-locks Reclaim support
Feature owners: Soumya Koduri


Sorry, this feature will not make it to 3.9. Hopefully it will get into the
next release.




15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri


This one is already merged.

Thanks,
Soumya



*The following need to be added to the roadmap:*

Features that made it to master already but were not planned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because Serkan asked for it. He has a 9PB
volume, and self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like they will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima

--
Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel