[Gluster-users] Sharding - what next?

2015-12-03 Thread Krutika Dhananjay
Hi, 

When we designed and wrote the sharding feature in GlusterFS, our focus was on 
single-writer-to-large-file use cases, chief among them the virtual 
machine image store use case. 
Sharding, for the uninitiated, is a feature that was introduced in the 
glusterfs-3.7.0 release with 'experimental' status. 
Here is some documentation that explains what it does at a high level: 
http://www.gluster.org/community/documentation/index.php/Features/sharding-xlator
 
https://gluster.readthedocs.org/en/release-3.7.0/Features/shard/ 
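
For anyone who wants to try it out, a minimal sketch of how sharding is enabled 
(the volume name is a placeholder and the shard block size shown is only an 
example, not a recommendation): 

  # enable sharding on an existing volume (volume name is hypothetical)
  gluster volume set <volname> features.shard on
  # optionally tune the size of the individual shard files
  gluster volume set <volname> features.shard-block-size 512MB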

We have now reached the stage where the feature is considered stable for the 
VM store use case, after several rounds of testing (thanks to Lindsay Mathieson, 
Paul Cuzner and Satheesaran Sundaramoorthi), bug fixing and reviews (thanks to 
Pranith Karampuri). Also in this regard, patches have been sent to make sharding 
work with geo-replication, thanks to Kotresh's efforts (testing is still in 
progress). 

We would love to hear what you think of the feature and where it could be 
improved. Specifically, we are seeking feedback on the following questions: 
a) your experience testing sharding with the VM store use case - any bugs you 
ran into, any performance issues, etc. 
b) other large-file use cases (apart from the VM store workload) you know of 
or use, where you think having sharding capability would be useful. 

Based on your feedback we will start work on making sharding work in other 
workloads and/or with other existing GlusterFS features. 

Thanks, 
Krutika 



[Gluster-users] after upgrade to 3.6.7 : Internal error xfs_attr3_leaf_write_verify

2015-12-03 Thread Dietmar Putz

Hello all,

On 1st December I upgraded two 6-node clusters from glusterfs 3.5.6 to 3.6.7.
All of them are identical in hardware, OS and patch level, currently running 
Ubuntu 14.04 LTS after a do-release-upgrade from 12.04 LTS (this was done 
before the gfs upgrade to 3.5.6, not directly before upgrading to 3.6.7).
Because of a geo-replication issue all of the nodes have rsync 3.1.1.3 
installed instead of the 3.1.0 that comes from the repositories; this is the 
only deviation from the Ubuntu repositories for 14.04 LTS.
Since the upgrade to gfs 3.6.7, glusterd on two nodes of the same cluster 
keeps going offline after getting an xfs_attr3_leaf_write_verify error for 
the underlying bricks, as shown below.
This happens about every 4-5 hours after the problem has been worked around 
by an umount / remount of the brick; it makes no difference whether xfs_check 
/ xfs_repair is run before the remount.
xfs_check / xfs_repair did not show any faults. The underlying hardware is a 
RAID 5 volume on an LSI 9271-8i; MegaCLI does not show any errors.
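
A rough sketch of that umount / remount cycle (the device name and brick path 
are taken from the dmesg and volume output below; restarting the offline brick 
with 'force' is an assumed step, not something stated above):

  umount /gluster-export
  xfs_repair /dev/sdc1                     # reported no faults
  mount /dev/sdc1 /gluster-export
  gluster volume start ger-ber-01 force    # assumed: restart the offline brick process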

The syslog does not show more than the dmesg output below.
Every time the same two nodes of the same cluster are affected.
As shown in dmesg and syslog, the system reports the 
xfs_attr3_leaf_write_verify error about 38 minutes before finally giving up, 
and for both events I cannot find corresponding entries in the gluster logs.
This is strange... the gluster setup has grown historically from 3.2.5 and 3.3 
to 3.4.6/7, which ran well for months; gfs 3.5.6 was running for about two 
weeks, and the upgrade to 3.6.7 was done because of a geo-replication log 
flood.
Even though I have no hint/evidence that this is caused by gfs 3.6.7, somehow 
I believe that is the case...
Has anybody experienced such an error, or does anyone have hints on getting 
out of this big problem...?
Unfortunately the affected cluster is the master of a geo-replication that has 
not been running well since the update from gfs 3.4.7... fortunately the two 
affected gluster nodes are not in the same sub-volume.


any help is appreciated...

best regards
dietmar




[ 09:32:29 ] - root@gluster-ger-ber-10  /var/log $gluster volume info

Volume Name: ger-ber-01
Type: Distributed-Replicate
Volume ID: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster-ger-ber-11-int:/gluster-export
Brick2: gluster-ger-ber-12-int:/gluster-export
Brick3: gluster-ger-ber-09-int:/gluster-export
Brick4: gluster-ger-ber-10-int:/gluster-export
Brick5: gluster-ger-ber-07-int:/gluster-export
Brick6: gluster-ger-ber-08-int:/gluster-export
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
cluster.min-free-disk: 200GB
geo-replication.indexing: on
auth.allow: 10.0.1.*,188.138.82.*,188.138.123.*,82.193.249.198,82.193.249.200,31.7.178.137,31.7.178.135,31.7.180.109,31.7.180.98,82.199.147.*,104.155.22.202,104.155.30.201,104.155.5.117,104.155.11.253,104.155.15.34,104.155.25.145,146.148.120.255,31.7.180.148
nfs.disable: off
performance.cache-refresh-timeout: 2
performance.io-thread-count: 32
performance.cache-size: 1024MB
performance.read-ahead: on
performance.cache-min-file-size: 0
network.ping-timeout: 10
[ 09:32:52 ] - root@gluster-ger-ber-10  /var/log $




[ 19:10:55 ] - root@gluster-ger-ber-10  /var/log $gluster volume status
Status of volume: ger-ber-01
Gluster process                                  Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster-ger-ber-11-int:/gluster-export     49152   Y       15994
Brick gluster-ger-ber-12-int:/gluster-export     N/A     N       N/A
Brick gluster-ger-ber-09-int:/gluster-export     49152   Y       10965
Brick gluster-ger-ber-10-int:/gluster-export     N/A     N       N/A
Brick gluster-ger-ber-07-int:/gluster-export     49152   Y       18542
Brick gluster-ger-ber-08-int:/gluster-export     49152   Y       20275
NFS Server on localhost                          2049    Y       13658
Self-heal Daemon on localhost                    N/A     Y       13666
NFS Server on gluster-ger-ber-09-int             2049    Y       13503
Self-heal Daemon on gluster-ger-ber-09-int       N/A     Y       13511
NFS Server on gluster-ger-ber-07-int             2049    Y       21526
Self-heal Daemon on gluster-ger-ber-07-int       N/A     Y       21534
NFS Server on gluster-ger-ber-08-int             2049    Y       24004
Self-heal Daemon on gluster-ger-ber-08-int       N/A     Y       24011
NFS Server on gluster-ger-ber-11-int             2049    Y       18944
Self-heal Daemon on gluster-ger-ber-11-int       N/A     Y       18952
NFS Server on gluster-ger-ber-12-int             2049    Y       19138
Self-heal Daemon on gluster-ger-ber-12-int       N/A     Y       19146

Task Status of Volume ger-ber-01
------------------------------------------------------------------------------
There are no active volume tasks

- root@gluster-ger-ber-10  /var/log $

- root@gluster-ger-ber-10  /var/log $dmesg -T
...
[Wed Dec  2 12:43:47 2015] XFS (sdc1): xfs_log_force: error 5 returned.
[Wed Dec  2 12:43:48 2015] XFS (sdc1): xfs_log_force: error 5 returned.
[Wed Dec  2 12:45:58 2015] XFS (sdc1): Mounting Filesystem
[Wed Dec  2 

Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-12-03 Thread Jiffin Tony Thottan

comments inline.

On 03/12/15 01:08, Surya K Ghatty wrote:


Hi Soumya, Kaleb, all:

Thanks for the response!


Quick follow-up to this question - We tried running ganesha and 
gluster on two separate machines and the configuration seems to be 
working without issues.


The follow-up question I have is this: what changes do I need to make to put 
Ganesha in active-active HA mode, where the backend gluster and ganesha will 
be on different nodes? I am using the instructions here for putting Ganesha in 
HA mode: http://www.slideshare.net/SoumyaKoduri/high-49117846. This 
presentation refers to commands like gluster cluster.enable-shared-storage to 
enable HA.


1. Here is the config I am hoping to achieve:
glusterA and glusterB on individual bare metals - both in Trusted 
pool, with volume gvol0 up and running.




Ganesha 1 and 2 on machines ganesha1 and ganesha2, and my gluster storage 
will be on a third machine gluster1 (with a peer on another machine gluster2).


Ganesha node1: on a VM ganeshaA.
Ganesha node2: on another vm GaneshaB.

I would like to know what it takes to put ganeshaA and GaneshaB in 
Active Active HA mode. Is it technically possible?




It is technically possible, but difficult: you must manually follow the steps 
that are performed internally by "gluster nfs-ganesha enable".

(Kaleb will have a clearer idea about it.)


a. How do commands like cluster.enable-shared-storage work in this case?

You should manually configure shared storage (an export which both GaneshaA 
and GaneshaB can access).
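
For comparison, when the ganesha nodes are inside the trusted pool the CLI 
creates and mounts this shared storage for you; outside the pool you have to 
provide the equivalent by hand. A rough sketch (the volume name and mount 
point follow the usual convention but are assumptions here):

  # inside the trusted pool, the CLI way:
  gluster volume set all cluster.enable-shared-storage enable

  # manual equivalent on each external ganesha node (assumes a small
  # replicated volume named gluster_shared_storage exists on the pool):
  mkdir -p /var/run/gluster/shared_storage
  mount -t glusterfs glusterA:/gluster_shared_storage /var/run/gluster/shared_storage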


b. where does this command need to be run? on the ganesha node, or on 
the gluster nodes?


As I mentioned before, you cannot do this with the help of the gluster CLI if 
the ganesha cluster is outside the trusted pool.


I may not have understood your requirement correctly; if it falls under either 
of the following, I have answered to the best of my knowledge.


1.) "ganesha should run on nodes in which gluster volume(/bricks) is 
created"

i.  created trust pool using glusterA, glusterB, GaneshaA, GaneshaB
ii. create volume using glusterA and glusterB
iii. add GaneshaA and GaneshaB on server list in ganesha-ha.conf file
iv then follow remaining the steps for exporting volume via nfs-ganesha
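
A minimal ganesha-ha.conf sketch for step iii (the cluster name and the virtual 
IP addresses are placeholders, not values taken from this thread):

  HA_NAME="ganesha-ha-cluster"
  # one node that hosts the shared storage volume
  HA_VOL_SERVER="glusterA"
  # the nodes that will run nfs-ganesha
  HA_CLUSTER_NODES="GaneshaA,GaneshaB"
  # one virtual IP per ganesha node (placeholder addresses)
  VIP_GaneshaA="192.0.2.10"
  VIP_GaneshaB="192.0.2.11"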

2.) "ganesha cluster(vms) should not be part of gluster trusted pool"
(hacky way)
 i.) created trusted pool using glusterA and glusterB.
ii.) create and start volume gvol0 using it
iii.) created trusted pool using GaneshaA and GaneshaB
iv.) before enabling nfs-ganesha option, add EXPORT{} for gvol0 in 
/etc/ganesha/ganesha.conf

in both GaneshaA and GaneshaB

Note : The value for hostname in EXPORT{ FSAL {} } should be glusterA or 
glusterB.
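
A minimal sketch of such an EXPORT block (the Export_Id, paths and access 
settings are illustrative choices, not values from this thread):

  EXPORT {
      Export_Id = 2;
      Path = "/gvol0";
      Pseudo = "/gvol0";
      Access_Type = RW;
      Squash = No_root_squash;
      SecType = "sys";
      FSAL {
          Name = GLUSTER;
          # must point at a node of the gluster trusted pool, per the note above
          Hostname = "glusterA";
          Volume = "gvol0";
      }
  }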




2. Also, is it possible to have multiple ganesha servers point to the 
same gluster volume in the back end? say, in the configuration #1, I 
have another ganesha server GaneshaC that is not clustered with 
ganeshaA or ganeshaB. Can it export the volume gvol0 that ganeshaA and 
ganeshaB are also exporting?




Yes, it is possible, but you may need to start GaneshaC manually (running two 
different ganesha clusters in a trusted pool via the CLI is not supported).


thank you!




Regards,
Jiffin


Surya.

Regards,

Surya Ghatty

"This too shall pass"

Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services 
Development | tel: (507) 316-0559 | gha...@us.ibm.com





From: Soumya Koduri 
To: Surya K Ghatty/Rochester/IBM@IBMUS, gluster-users@gluster.org
Date: 11/18/2015 05:08 AM
Subject: Re: [Gluster-users] Configuring Ganesha and gluster on 
separate nodes?








On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
> Hi:
>
> I am trying to understand if it is technically feasible to have gluster
> nodes on one machine, and export a volume from one of these nodes using
> an nfs-ganesha server installed on a totally different machine? I tried
> the below and showmount -e does not show my volume exported. Any
> suggestions will be appreciated.
>
> 1. Here is my configuration:
>
> Gluster nodes: glusterA and glusterB on individual bare metals - both in
> Trusted pool, with volume gvol0 up and running.
> Ganesha node: on bare metal ganeshaA.
>
> 2. my ganesha.conf looks like this with IP address of glusterA in 
the FSAL.

>
> FSAL {
> Name = GLUSTER;
>
> # IP of one of the nodes in the trusted pool
> *hostname = "WW.ZZ.XX.YY" --> IP address of GlusterA.*
>
> # Volume name. Eg: "test_volume"
> volume = "gvol0";
> }
>
> 3. I disabled nfs on gvol0. As you can see, *nfs.disable is set to on.*
>
> [root@glusterA ~]# gluster vol info
>
> Volume Name: gvol0
> Type: Distribute
> Volume ID: 

Re: [Gluster-users] Changes in SELinux handling in 3.6+

2015-12-03 Thread Manikandan Selvaganesh
Hi Charl,

Sorry for the very late response, and thanks for describing the issue so 
clearly. As you have mentioned, in gluster-3.6+ versions, even though the 
selinux option is there, you get the error "Invalid option: context" when you 
try to set a context while mounting; this regression was unintentional. We 
have filed a bug against mainline[1] (for which the patch/fix is merged in 
master) and have backported the same to 3.7[2] and 3.6[3] as well. We are 
planning to get it fixed with the next minor updates in the coming releases. 
With the fix, you will be able to set the SELinux context while mounting.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1287763

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1287877

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1287878

Thank you :-)

--
Regards,
Manikandan Selvaganesh.

> Hi all
> 
> I run a small HPC with a single storage node (Scientific Linux 6, using 
> Gluster 3.5.2 RPMs). SELinux is set to Enforcing. Home directories are shared 
> to a handful of compute nodes where SELinux is also set to Enforcing.
> 
> The system is quite happy provided you specify the correct SELinux context 
> when mounting /home (system_u:object_r:user_home_t:s0). With 3.5 this is done 
> with
> 
> $ mount -t glusterfs storage:/home /home -o 
> context="system_u:object_r:user_home_t:s0"
> 
> 
> I'm going to be adding a second storage node and will be setting up 
> replication between the two nodes. While I'm doing that, I might as well 
> upgrade to 3.6+.
> 
> During testing I found that version 3.6.1 of mount.glusterfs does not support 
> the 'context' mount option. Is the removal of this functionality intentional? 
> There's unfortunately very little documentation available on SELinux support 
> in Gluster. Version 3.6.1 does have the 'selinux' mount option, but it 
> doesn't seem to do anything.
> 
> It should also be noted that a 3.5.3 client mounting a 3.6.1 server works as 
> expected, while a 3.6.1 client never has the correct SELinux tags. The issue 
> seems to be limited to the mount.glusterfs utility.
> 
> 
> Below I'll paste the output of my testing. 'storage0' runs 3.5.3 and works as 
> expected, 'storage1' runs 3.6.1 and doesn't honour SELinux tags.
> 
> Any help will be appreciated.
> 
> ciao
> Charl
> 
> 
> === Gluster 3.5.3 START ===
> 
> [root@storage0 /]$ yum install glusterfs-{server,api,libs}-3.5.3 xfsprogs
> [root@storage0 /]$ mkfs.xfs -i size=512 /dev/sdb
> [root@storage0 /]$ mkdir /brick1
> [root@storage0 /]$ mount /dev/sdb /brick1
> [root@storage0 /]$ mkdir /brick1/home
> [root@storage0 /]$ ls -lsaZ /home
> total 8
> drwxr-xr-x. root root system_u:object_r:home_root_t:s0 .
> dr-xr-xr-x. root root system_u:object_r:root_t:s0  ..
> 
> [root@storage0 /]$ chcon system_u:object_r:home_root_t:s0 /brick1/home
> [root@storage0 /]$ ls -lsaZ /brick1/home
> total 0
> drwxr-xr-x. root root system_u:object_r:home_root_t:s0 .
> drwxr-xr-x. root root system_u:object_r:file_t:s0  ..
> 
> [root@storage0 /]$ service glusterd start
> Starting glusterd: [  OK  ]
> 
> [root@storage0 /]$ gluster volume create home storage0:/brick1/home
> volume create: home: success: please start the volume to access data
> 
> [root@storage0 /]$ gluster volume start home
> volume start: home: success
> 
> [root@storage0 /]$ mount -t glusterfs storage0:/home home
> [root@storage0 /]$ ls -lsaZ /home
> total 4
> drwxr-xr-x. root root system_u:object_r:fusefs_t:s0.
> dr-xr-xr-x. root root system_u:object_r:root_t:s0  ..
> 
> [testuser@launch ~]$ ssh testuser@storage0
> Password:
> Could not chdir to home directory /home/testuser: No such file or directory
> [testuser@storage0 /]$
> 
> 
> [root@storage0 /]$ umount home
> [root@storage0 /]$ mount -t glusterfs storage0:/home home -o 
> context="system_u:object_r:user_home_t:s0"
> 
> [testuser@launch ~]$ ssh testuser@storage0
> Password:
> Creating home directory for testuser.
> Last login: Tue Jan  6 10:40:29 2015 from 192.168.2.3
> [testuser@storage0 ~]$
> 
> 
> [root@storage0 /]$ ls -lsaZ /home
> total 4
> drwxr-xr-x. root root  system_u:object_r:user_home_t:s0 .
> dr-xr-xr-x. root root  system_u:object_r:root_t:s0  ..
> drwxr-xr-x. testuser users system_u:object_r:user_home_t:s0 testuser
> 
> [root@storage0 /]$ rpm -qa | grep gluster
> glusterfs-libs-3.5.3-1.el6.x86_64
> glusterfs-api-3.5.3-1.el6.x86_64
> glusterfs-cli-3.5.3-1.el6.x86_64
> glusterfs-server-3.5.3-1.el6.x86_64
> glusterfs-3.5.3-1.el6.x86_64
> glusterfs-fuse-3.5.3-1.el6.x86_64
> 
> === Gluster 3.5.3 END ===
> 
> === Gluster 3.6.1 START ===
> 
> [root@storage1 /]$ yum install glusterfs-{server,api,libs}-3.6.1 xfsprogs
> [root@storage1 /]$ mkfs.xfs -i size=512 /dev/sdb
> [root@storage1 /]$ mkdir /brick1
> [root@storage1 /]$ mount /dev/sdb /brick1
> [root@storage1 /]$ mkdir /brick1/home
> [root@storage1 /]$ ls -lsaZ /home
> total 8
> drwxr-xr-x. root root system_u:object_r:home_root_t:s0 .
> dr-xr-xr-x. root root 

Re: [Gluster-users] after upgrade to 3.6.7 : Internal error xfs_attr3_leaf_write_verify

2015-12-03 Thread Vijay Bellur
Looks like an issue with xfs. Adding Brian to check if it is a familiar problem.

Regards,
Vijay

- Original Message -
> From: "Dietmar Putz" 
> To: gluster-users@gluster.org
> Sent: Thursday, December 3, 2015 6:06:11 AM
> Subject: [Gluster-users] after upgrade to 3.6.7 : Internal error  
> xfs_attr3_leaf_write_verify

[Gluster-users] [Gluster-devel] Meeting minutes of Gluster community meeting 2015-12-02

2015-12-03 Thread Vijay Bellur

Hi All,

Meeting logs from this week's community meeting are available at the 
locations mentioned below. The meeting minutes have also been added to 
the end of this mail.


Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-02/gluster_community_weekly.2015-12-02-12.01.html


Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-02/gluster_community_weekly.2015-12-02-12.01.txt


Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-02/gluster_community_weekly.2015-12-02-12.01.log.html


Calendar invite has been attached for next week's meeting.

Cheers,
Vijay

Meeting summary

Roll Call (hagarth, 12:02:06)
AIs from last week (hagarth, 12:03:18)
ACTION: ndevos to send out a reminder to the maintainers about 
more actively enforcing backports of bugfixes  (hagarth, 12:04:07)
ACTION: raghu to call for volunteers and help from maintainers 
for doing backports listed by rwareing to 3.6.8 (hagarth, 12:05:31)
ACTION: rafi1 to setup a doodle poll for bug triage meeting 
(hagarth, 12:07:35)
ACTION: rastar and msvbhat to publish a test exit criterion for 
major/minor releases on gluster.org (hagarth, 12:10:28)
ACTION: kshlm & csim to set up faux/pseudo user email for 
gerrit, bugzilla, github (hagarth, 12:12:46)
ACTION: Need to decide if fixing BSD testing for release-3.6 is 
worth it. (hagarth, 12:13:33)
ACTION: amye to get on top of discussion on long-term releases 
(hagarth, 12:15:20)
ACTION: hagarth_ to start a thread on review backlog  (hagarth, 
12:15:45)


Gluster 3.7 (hagarth, 12:17:42)
http://review.gluster.org/#/q/status:open+branch:+release-3.7 
(hagarth, 12:18:10)


Gluster 3.6 (hagarth, 12:22:22)
ACTION: raghu to announce 3.6.7 GA (hagarth, 12:24:48)

Gluster 3.5 (hagarth, 12:24:56)

http://www.gluster.org/pipermail/gluster-devel/2015-December/047260.html 
(raghu, 12:26:17)

3.5.7 expected around 10th of December (hagarth, 12:27:38)

Gluster 3.8 (hagarth, 12:28:10)
ACTION: atinm, kshlm to review IPv6 patchset (hagarth, 12:32:00)

Gluster 4.0 (hagarth, 12:33:02)
https://www.gluster.org/community/roadmap/4.0/ (atinm, 12:34:28)

Open Floor (hagarth, 12:40:54)
ACTION: hagarth to post Gluster Monthly News this week 
(hagarth, 12:43:37)




Meeting ended at 12:59:18 UTC (full logs).

Action items

ndevos to send out a reminder to the maintainers about more 
actively enforcing backports of bugfixes
raghu to call for volunteers and help from maintainers for doing 
backports listed by rwareing to 3.6.8

rafi1 to setup a doodle poll for bug triage meeting
rastar and msvbhat to publish a test exit criterion for major/minor 
releases on gluster.org
kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla, 
 github

Need to decide if fixing BSD testing for release-3.6 is worth it.
amye to get on top of discussion on long-term releases
hagarth_ to start a thread on review backlog
raghu to announce 3.6.7 GA
atinm, kshlm to review IPv6 patchset
hagarth to post Gluster Monthly News this week



Action items, by person

atinm
atinm, kshlm to review IPv6 patchset
hagarth
hagarth_ to start a thread on review backlog
hagarth to post Gluster Monthly News this week
msvbhat
rastar and msvbhat to publish a test exit criterion for 
major/minor releases on gluster.org

ndevos
ndevos to send out a reminder to the maintainers about more 
actively enforcing backports of bugfixes

raghu
raghu to call for volunteers and help from maintainers for 
doing backports listed by rwareing to 3.6.8

raghu to announce 3.6.7 GA
UNASSIGNED
rafi1 to setup a doodle poll for bug triage meeting
kshlm & csim to set up faux/pseudo user email for gerrit, 
bugzilla,  github

Need to decide if fixing BSD testing for release-3.6 is worth it.
amye to get on top of disucssion on long-term releases



People present (lines said)

hagarth (117)
ndevos (40)
atinm (13)
jdarcy (10)
JustinClift1 (9)
msvbhat (8)
kkeithley (7)
raghu (5)
jiffin (4)
zodbot (3)
hgowtham (2)
anoopcs (1)
Manikandan (1)
Humble (1)
poornimag (1)
lpabon (1)
rjoseph (1)




BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//www.marudot.com//iCal Event Maker
X-WR-CALNAME:Gluster Community Meeting
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20151203T202032Z
UID:20151203t202032z-1929851...@marudot.com
DTSTART;TZID="Etc/UTC":20151209T12
DTEND;TZID="Etc/UTC":20151209T13
SUMMARY:Gluster Community Meeting
DESCRIPTION:This is the weekly Gluster community meeting. \n\nThe agenda is available at https://public.pad.fsfe.org/p/gluster-community-meetings
LOCATION:#gluster-meeting in irc.freenode.net
END:VEVENT
END:VCALENDAR