Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-26 Thread Milind Changire
On Fri, Apr 27, 2018 at 10:56 AM, Thing  wrote:

> Hi,
>
> I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like
> to set these up in a RAID-10-style layout, which will give me 2TB usable.
> So mirrored and concatenated?
>
> The command I am running is as per the documents, but I get a warning, and
> the documents do not say how to proceed.
>
> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
> glusterp4:/bricks/brick1/gv0
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See:
> http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
> Do you still want to continue?
>  (y/n) n
>
>
This is not an error. It is a recommendation and an explicit cautionary
step to remind users of the problems with using replica 2.
It is advisable to use replica 3 (or an arbiter) instead of replica 2 to
avoid split-brain conditions.

If you still insist on using replica 2, you can go ahead and answer 'y'
to the "Do you still want to continue?" question.
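For four 1 TB bricks, a distributed-replicated arbiter layout keeps roughly
the same 2 TB usable while avoiding the replica 2 split-brain risk, since
arbiter bricks hold only metadata. A minimal sketch, assuming small arbiter
bricks are carved out at the (hypothetical) path /bricks/arbiter/gv0 on
glusterp3 and glusterp1:

gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/arbiter/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 glusterp1:/bricks/arbiter/gv0

Bricks are grouped in threes: two data bricks plus one arbiter per replica
set, with each arbiter placed on a host outside its data pair.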


> Usage:
> volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
> <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force]
>
> [root@glustep1 ~]#
>
> thanks
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Milind
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] How to set up a 4 way gluster file system

2018-04-26 Thread Thing
Hi,

I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to
set these up in a RAID-10-style layout, which will give me 2TB usable. So
mirrored and concatenated?

The command I am running is as per the documents, but I get a warning, and
the documents do not say how to proceed.

gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
Do you still want to continue?
 (y/n) n

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
<COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
[transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force]

[root@glustep1 ~]#

thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Reconstructing files from shards

2018-04-26 Thread Krutika Dhananjay
The short answer is: no, there is currently no script that can piece the
shards together into a single file.

Long answer:
IMO the safest way to convert from sharded to single files at the moment
_is_ to copy the data out into a new volume.
Picking up the shards from the individual bricks directly and joining them,
although fast, is a strict no-no for many reasons. For example, on a
replicated volume the good copy needs to be carefully selected and must
remain a good copy for the duration of the copying process. There could
also be consistency issues with file attributes changing while they are
being copied. None of this can be guaranteed unless you're open to taking
the volume down.
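As a rough illustration of the copy-out route (a sketch only; the volume
names and mount points are made up, and --sparse=always is there to
preserve holes in sparse files):

# both volumes mounted on the same client (names are made up)
mount -t glusterfs server1:/sharded-vol /mnt/old
mount -t glusterfs server1:/new-vol /mnt/new
# --sparse=always re-creates holes instead of writing out zeros
cp -a --sparse=always /mnt/old/. /mnt/new/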

The other option is to have the gluster client (perhaps the shard
translator itself) do the conversion in the background within the gluster
translator stack, which is safer, but it would require shard to lock the
file until the copying is complete, and until then no IO could happen to
the file.
(I haven't found the time to work on this, as there exists a workaround and
I've been busy with other tasks. If anyone wants to volunteer to get this
done, I'll be happy to help).
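(On the trusted.glusterfs.shard.file-size attribute mentioned in your mail
below: that value is a packed binary structure rather than text, which is
why a base64 decode chokes on it. A sketch of how to dump it readably,
assuming the attr tools are installed and substituting your own brick path:

getfattr -n trusted.glusterfs.shard.file-size -e hex /bricks/brick1/vol/file

The size fields then have to be unpacked from the hex blob by hand.)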

But anyway, why is copying the data into a new unsharded volume disruptive for you?

-Krutika


On Sat, Apr 21, 2018 at 1:14 AM, Jamie Lawrence 
wrote:

> Hello,
>
> So I have a volume on a gluster install (3.12.5) on which sharding was
> enabled at some point recently. (Don't know how it happened, it may have
> been an accidental run of an old script.) So it has been happily sharding
> behind our backs and it shouldn't have.
>
> I'd like to turn sharding off and reverse the files back to normal.  Some
> of these are sparse files, so I need to account for holes. There are more
> than enough that I need to write a tool to do it.
>
> I saw notes ca. 3.7 saying the only way to do it was to read everything off
> on the client side, blow away the volume, and start over. This would be
> extremely disruptive for us, and the language I've seen reading tickets and
> old messages to this list makes me think that isn't needed anymore, but
> confirmation of that would be good.
>

> The only discussions I can find are these videos[1]:
> http://opensource-storage.blogspot.com/2016/07/de-mystifying-gluster-shards.html
> and some hints[2] that are old enough that I don't trust them without
> confirmation that nothing's changed. The videos don't acknowledge the
> existence of file holes. Also, the hint in [2] mentions using
> trusted.glusterfs.shard.file-size to get the size of a partly filled hole;
> that value looks like base64, but when I attempt to decode it, base64
> complains about invalid input.
>
> In short, I can't find sufficient information to reconstruct these. Has
> anyone written a current, step-by-step guide on reconstructing sharded
> files? Or has someone has written a tool so I don't have to?
>
> Thanks,
>
> -j
>
>
> [1] Why one would choose to annoy the crap out of their fellow gluster
> users by using video to convey about 80 bytes of ASCII-encoded information,
> I have no idea.
> [2] http://lists.gluster.org/pipermail/gluster-devel/2017-March/052212.html
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Single brick expansion

2018-04-26 Thread Gandalf Corvotempesta
Any updates about this feature?
It was planned for v4 but seems to be postponed...
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Problem adding replicated bricks on FreeBSD

2018-04-26 Thread Kaushal M
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger 
wrote:

> Hi Folks,

> I'm trying to debug an issue that I've found while attempting to qualify
> GlusterFS for potential distributed storage projects on the FreeBSD-11.1
> server platform - using the existing package of GlusterFS v3.11.1_4

> The main issue I've encountered is that I cannot add new bricks while
> setting/increasing the replica count.

> If I create a replicated volume "poc" on two hosts, say s1:/gluster/1/poc
> and s2:/gluster/1/poc, the volume is created properly and shows replicated
> status, files are written to both volumes.

> If I create a single volume: s1:/gluster/1/poc as a single / distributed
> brick, and then try to run

> gluster volume add-brick poc replica 2 s2:/gluster/1/poc

> it will always fail (sometimes after a pause, sometimes not.)  The only
> error I'm seeing on the server hosting the new brick, aside from the
> generic "Unable to add bricks" message, is like so:

> I [MSGID: 106578]
> [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management:
> replica-count is set 2
> I [MSGID: 106578]
> [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management:
> type is set 2, need to change it
> E [MSGID: 106054]
> [glusterd-utils.c:12974:glusterd_handle_replicate_brick_ops] 0-management:
> Failed to set extended attribute trusted.add-brick : Operation not
> supported [Operation not supported]

The log here indicates that the filesystem on the new brick being added
doesn't support setting xattrs.
Maybe check the new brick again?
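A quick smoke test from the shell may help (a sketch; on FreeBSD the native
tools are setextattr/getextattr/rmextattr, and the brick path is taken from
your mail):

# write, read and remove a throwaway extended attribute as root
setextattr system test-xattr somevalue /gluster/1/poc
getextattr system test-xattr /gluster/1/poc
rmextattr system test-xattr /gluster/1/poc

If the setextattr step fails with "Operation not supported", the brick
filesystem, not gluster, is what needs fixing.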

> E [MSGID: 106074] [glusterd-brick-ops.c:2565:glusterd_op_add_brick]
> 0-glusterd: Unable to add bricks
> E [MSGID: 106123] [glusterd-mgmt.c:311:gd_mgmt_v3_commit_fn] 0-management:
> Add-brick commit failed.

> I was initially using ZFS and noted that ZFS on FreeBSD does not support
> xattr, so I reverted to using UFS as the storage type for the brick, and
> still encounter this behavior.

> I also recompiled the port (again, GlusterFS v3.11.1) with the patch from
> https://bugzilla.redhat.com/show_bug.cgi?id=1484246 as this deals
> specifically with xattr handling in FreeBSD.

> To recap - I'm able to create any type of volume (2 or 3-way replicated or
> distributed), but I'm unable to add replicated bricks to a volume.

> I was, however, able to add a second distributed brick ( gluster volume
> add-brick poc s2:/gluster/1/poc ) - so the issue seems specific to adding
> and/or changing the replica count while adding a new brick.

> Please let me know if there are any other issues in addition to bug
> #1452961 I should be aware of, or additional log or debug info I can
> provide.

> Best Regards,
> Mark
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Turn off replication

2018-04-26 Thread Jose Sanchez
Looking at the logs, it seems that it is trying to add the brick using the
same port that was already assigned to gluster01ib:


Any ideas?

Jose



[2018-04-25 22:08:55.169302] I [MSGID: 106482] 
[glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] 
(-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) 
[0x7f5464b9b045] 
-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) 
[0x7f5464c33d85] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f54704cf1e5] 
) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh 
--volname=scratch --version=1 --volume-op=add-brick 
--gd-workdir=/var/lib/glusterd
[2018-04-25 22:08:55.309534] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick1/scratch on port 49152
[2018-04-25 22:08:55.309659] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick1/scratch.rdma on port 49153
[2018-04-25 22:08:55.310231] E [MSGID: 106005] 
[glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start 
brick gluster02ib:/gdata/brick1/scratch
[2018-04-25 22:08:55.310275] E [MSGID: 106074] 
[glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add 
bricks
[2018-04-25 22:08:55.310304] E [MSGID: 106123] 
[glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit 
failed.
[2018-04-25 22:08:55.310316] E [MSGID: 106123] 
[glusterd-mgmt.c:1427:glusterd_mgmt_v3_commit] 0-management: Commit failed for 
operation Add brick on local node
[2018-04-25 22:08:55.310330] E [MSGID: 106123] 
[glusterd-mgmt.c:2018:glusterd_mgmt_v3_initiate_all_phases] 0-management: 
Commit Op Failed
[2018-04-25 22:09:11.678141] E [MSGID: 106452] 
[glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick: 
gluster02ib:/gdata/brick1/scratch not available. Brick may be containing or be 
contained by an existing brick
[2018-04-25 22:09:11.678184] W [MSGID: 106122] 
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
prevalidation failed.
[2018-04-25 22:09:11.678200] E [MSGID: 106122] 
[glusterd-mgmt-handler.c:337:glusterd_handle_pre_validate_fn] 0-management: Pre 
Validation failed on operation Add brick
[root@gluster02 glusterfs]# gluster volume status scratch
Status of volume: scratch
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   1819 
Brick gluster01ib:/gdata/brick2/scratch 49154 49155  Y   1827 
Brick gluster02ib:/gdata/brick1/scratch N/A   N/AN   N/A  
 
Task Status of Volume scratch
--
There are no active volume tasks
 
[root@gluster02 glusterfs]# 
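If the port map is indeed stale, one thing I can still try (a sketch, not
yet verified here) is restarting glusterd on gluster02ib so the brick port
registry is rebuilt, and then retrying the add:

systemctl restart glusterd   # on gluster02ib; rebuilds the pmap registry
gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch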



> On Apr 25, 2018, at 3:23 PM, Jose Sanchez  wrote:
> 
> Hello Karthik
> 
> 
> I'm having trouble adding the two bricks back online.  Any help is appreciated.
>  
> 
> thanks 
> 
> 
> When I try the add-brick command, this is what I get:
> 
> [root@gluster01 ~]# gluster volume add-brick scratch 
> gluster02ib:/gdata/brick2/scratch/
> volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: 
> gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or 
> be contained by an existing brick
> 
> I have run the following commands and removed the .glusterfs hidden
> directories:
> 
> [root@gluster02 ~]# setfattr -x trusted.glusterfs.volume-id 
> /gdata/brick2/scratch/
> setfattr: /gdata/brick2/scratch/: No such attribute
> [root@gluster02 ~]# setfattr -x trusted.gfid /gdata/brick2/scratch/
> setfattr: /gdata/brick2/scratch/: No such attribute
> [root@gluster02 ~]# 
> 
> 
> This is what I get when I run status and info:
> 
> 
> [root@gluster01 ~]# gluster volume info scratch
>  
> Volume Name: scratch
> Type: Distribute
> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster01ib:/gdata/brick2/scratch
> Brick3: gluster02ib:/gdata/brick1/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> [root@gluster01 ~]# 
> 
> 
> [root@gluster02 ~]# gluster volume status scratch
> Status of volume: scratch
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gluster01ib:/gdata/brick1/scratch 49156 49157  Y   1819 
> Brick gluster01ib:/gdata/brick2/scratch 49158 49159  Y   1827 
> Brick gluster02ib:/gdata/brick1/scratch N/A   N/AN   N/A  
> Brick 

[Gluster-users] Problem adding replicated bricks on FreeBSD

2018-04-26 Thread Mark Staudinger

Hi Folks,

I'm trying to debug an issue that I've found while attempting to qualify  
GlusterFS for potential distributed storage projects on the FreeBSD-11.1  
server platform - using the existing package of GlusterFS v3.11.1_4


The main issue I've encountered is that I cannot add new bricks while  
setting/increasing the replica count.


If I create a replicated volume "poc" on two hosts, say s1:/gluster/1/poc  
and s2:/gluster/1/poc, the volume is created properly and shows replicated  
status, files are written to both volumes.


If I create a single volume: s1:/gluster/1/poc as a single / distributed  
brick, and then try to run


gluster volume add-brick poc replica 2 s2:/gluster/1/poc

it will always fail (sometimes after a pause, sometimes not.)  The only  
error I'm seeing on the server hosting the new brick, aside from the  
generic "Unable to add bricks" message, is like so:


I [MSGID: 106578]  
[glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management:  
replica-count is set 2
I [MSGID: 106578]  
[glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management:  
type is set 2, need to change it
E [MSGID: 106054]  
[glusterd-utils.c:12974:glusterd_handle_replicate_brick_ops] 0-management:  
Failed to set extended attribute trusted.add-brick : Operation not  
supported [Operation not supported]
E [MSGID: 106074] [glusterd-brick-ops.c:2565:glusterd_op_add_brick]  
0-glusterd: Unable to add bricks
E [MSGID: 106123] [glusterd-mgmt.c:311:gd_mgmt_v3_commit_fn] 0-management:  
Add-brick commit failed.


I was initially using ZFS and noted that ZFS on FreeBSD does not support  
xattr, so I reverted to using UFS as the storage type for the brick, and  
still encounter this behavior.


I also recompiled the port (again, GlusterFS v3.11.1) with the patch from  
https://bugzilla.redhat.com/show_bug.cgi?id=1484246 as this deals  
specifically with xattr handling in FreeBSD.


To recap - I'm able to create any type of volume (2 or 3-way replicated or  
distributed), but I'm unable to add replicated bricks to a volume.


I was, however, able to add a second distributed brick ( gluster volume  
add-brick poc s2:/gluster/1/poc ) - so the issue seems specific to adding  
and/or changing the replica count while adding a new brick.


Please let me know if there are any other issues in addition to bug  
#1452961 I should be aware of, or additional log or debug info I can  
provide.


Best Regards,
Mark
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Turn off replication

2018-04-26 Thread Jose Sanchez
Hello Karthik


I'm having trouble adding the two bricks back online.  Any help is appreciated.

thanks 


When I try the add-brick command, this is what I get:

[root@gluster01 ~]# gluster volume add-brick scratch 
gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: 
gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be 
contained by an existing brick

I have run the following commands and removed the .glusterfs hidden directories:

[root@gluster02 ~]# setfattr -x trusted.glusterfs.volume-id 
/gdata/brick2/scratch/
setfattr: /gdata/brick2/scratch/: No such attribute
[root@gluster02 ~]# setfattr -x trusted.gfid /gdata/brick2/scratch/
setfattr: /gdata/brick2/scratch/: No such attribute
[root@gluster02 ~]# 
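(Listing what is actually set on the brick root shows why the removal
reports "No such attribute"; for example, using hex output since the values
are binary:

getfattr -d -m . -e hex /gdata/brick2/scratch/

would dump any trusted.* xattrs that still remain on the directory.)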


This is what I get when I run status and info:


[root@gluster01 ~]# gluster volume info scratch
 
Volume Name: scratch
Type: Distribute
Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster01ib:/gdata/brick2/scratch
Brick3: gluster02ib:/gdata/brick1/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@gluster01 ~]# 


[root@gluster02 ~]# gluster volume status scratch
Status of volume: scratch
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster01ib:/gdata/brick1/scratch 49156 49157  Y   1819 
Brick gluster01ib:/gdata/brick2/scratch 49158 49159  Y   1827 
Brick gluster02ib:/gdata/brick1/scratch N/A   N/AN   N/A  
Brick gluster02ib:/gdata/brick2/scratch N/A   N/AN   N/A  
 
Task Status of Volume scratch
--
There are no active volume tasks
 
[root@gluster02 ~]# 


These are the log files from Gluster:

[2018-04-25 20:56:54.390662] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick1/scratch on port 49152
[2018-04-25 20:56:54.390798] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick1/scratch.rdma on port 49153
[2018-04-25 20:56:54.391401] E [MSGID: 106005] 
[glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start 
brick gluster02ib:/gdata/brick1/scratch
[2018-04-25 20:56:54.391457] E [MSGID: 106074] 
[glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add 
bricks
[2018-04-25 20:56:54.391476] E [MSGID: 106123] 
[glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit 
failed.
[2018-04-25 20:56:54.391490] E [MSGID: 106123] 
[glusterd-mgmt-handler.c:603:glusterd_handle_commit_fn] 0-management: commit 
failed on operation Add brick
[2018-04-25 20:58:55.332262] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume scratch
[2018-04-25 21:02:07.464357] E [MSGID: 106452] 
[glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick: 
gluster02ib:/gdata/brick1/scratch not available. Brick may be containing or be 
contained by an existing brick
[2018-04-25 21:02:07.464395] W [MSGID: 106122] 
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
prevalidation failed.
[2018-04-25 21:02:07.464414] E [MSGID: 106122] 
[glusterd-mgmt-handler.c:337:glusterd_handle_pre_validate_fn] 0-management: Pre 
Validation failed on operation Add brick
[2018-04-25 21:04:56.198662] E [MSGID: 106452] 
[glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick: 
gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be 
contained by an existing brick
[2018-04-25 21:04:56.198700] W [MSGID: 106122] 
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
prevalidation failed.
[2018-04-25 21:04:56.198716] E [MSGID: 106122] 
[glusterd-mgmt-handler.c:337:glusterd_handle_pre_validate_fn] 0-management: Pre 
Validation failed on operation Add brick
[2018-04-25 21:07:11.084205] I [MSGID: 106482] 
[glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
add brick req
[2018-04-25 21:07:11.087682] E [MSGID: 106452] 
[glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick: 
gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be 
contained by an existing brick
[2018-04-25 21:07:11.087716] W [MSGID: 106122] 
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
prevalidation failed.
[2018-04-25 21:07:11.087729] E [MSGID: 106122] 
[glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Add brick on local node
[2018-04-25 21:07:11.087741] E [MSGID: 106122] 

[Gluster-users] Project Technical Leadership Council

2018-04-26 Thread Amye Scavarda
April 25, 2018

In order to continue the momentum that Gluster has seen over the last few
years, we've formed a project technical leadership council to replace our
current single project lead. The Technical Leadership Council will provide
broad oversight and direction for the project as a whole. As Gluster has
grown, it is no longer feasible to have one person be responsible for
direction and leadership. The Project Technical Leadership Council will
meet on a regular basis to discuss both technical issues and project
issues, such as outreach, advocacy and other open source community issues.
The council is responsible for making larger technical decisions for the
project. This includes accepting or removing subprojects, setting the
general direction of the project, and resolving conflicts as
they arise. We want to invite more contributors to participate
in strategic and technical ways.

The Project Technical Leadership Council is:
- Vijay Bellur
- Amar Tumballi
- Shyam Ranganathan
- Jeff Darcy
- John Strunk
- Amye Scavarda, Gluster Community Lead

This council is chaired by the Gluster Community Lead, Amye Scavarda, in an
administrative capacity.
To reach the Project Technical Leadership Council, please email
t...@gluster.org

This council will begin by meeting quarterly and will be consulted on an
as-needed basis. Council members are reviewed on a yearly basis, in April.
Individuals wishing to be considered for the Project Technical Leadership
Council should contact the Gluster Community Lead, Amye Scavarda.
___
Announce mailing list
annou...@gluster.org
http://lists.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users