Re: [Gluster-users] 3 Node Replicated Gluster set up as a ring - a good idea?

2015-02-15 Thread Ml Ml
 Will I then get a loop? Do I need SPF or something alike?

Sorry, I meant STP (Spanning Tree Protocol).
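
For reference, STP is normally just enabled per bridge; a minimal sketch of a per-node Linux bridge with STP turned on, assuming bridge-utils and illustrative interface names (not taken from the original post):

brctl addbr br0
brctl addif br0 eth1        # first ring-facing NIC
brctl addif br0 eth2        # second ring-facing NIC
brctl stp br0 on            # let the bridges block the redundant path
ip link set br0 up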

Cheers,
Mario
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] O_DIRECT (I think) again

2015-02-15 Thread Artem Kuzmitskiy
Hi,
We're using liboindirect and it works. Regarding it not being maintained:
maintenance isn't really required, because it is a very simple and short library.

2015-02-13 18:55 GMT+03:00 Anand Avati av...@gluster.org:

 O_DIRECT support in fuse has been there for quite some time now, surely well
 before 3.4.

 On Fri, Feb 13, 2015, 02:37 Pedro Serotto pedro.sero...@yahoo.es wrote:

 Dear All,

 I am currently using the following software stack:

 debian wheezy with kernel 3.2.0-4-amd64, glusterfs 3.6.2, openstack Juno,
 libvirt 1.2.9.

 If I try to attach a block storage to a running vm, Openstack shows the
 following error:
 DeviceIsBusy: The supplied device (vdc) is busy.

 If I try to attach a block storage to a running vm, Libvirt shows the
 following error:
  qemuMonitorTextAddDrive:2621 : operation failed: open disk image file
 failed

 Looking up this issue on the web, I found out that Libvirt tries to
 open the block device with the O_DIRECT flag; that flag is only
 supported by fuse from kernel 3.4 onwards.
 Therefore, I tried to apply some options (
 http://www.gluster.org/documentation/use_cases/Virt-store-usecase/) to
 Gluster, but the problem has not been solved.
 I also found  https://github.com/avati/liboindirect but it is old and
 not maintained.
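
For reference, the tuning from the linked Virt-store use-case page is usually applied roughly as below; this is a sketch only, with an illustrative volume name, and the option names should be checked against `gluster volume set help` on the installed release:

gluster volume set vmstore group virt                 # applies the shipped virt option group, if present
# Or the individual options most relevant to O_DIRECT-style access:
gluster volume set vmstore network.remote-dio enable
gluster volume set vmstore performance.quick-read off
gluster volume set vmstore performance.read-ahead off
gluster volume set vmstore performance.io-cache off
gluster volume set vmstore performance.stat-prefetch off
gluster volume set vmstore cluster.eager-lock enable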

 Has anybody found themselves in the same situation? If yes, could you
 please show me how to solve it while keeping the same versions of my
 software stack?


 Thanks & Regards

 Pedro


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users




-- 
Best regards,
Artem Kuzmitskiy
Severnoe DB JSC
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster replicate-brick issues (Distrubuted-Replica)

2015-02-15 Thread Thomas Holkenbrink
That makes sense, but now that the brick is missing and the data is not
replicated, how do I get the brick back into the trusted pool?

 Original message 
From: Subrata Ghosh
Date: 02/14/2015 9:55 PM (GMT-08:00)
To: Thomas Holkenbrink , 'gluster-users@gluster.org'
Subject: Re: [Gluster-users] Gluster replicate-brick issues 
(Distrubuted-Replica)


Please try to use the commit force option and see whether it works; as gluster
suggests, the other replace-brick commands have been deprecated.

[root@Server1 ~]# gluster volume replace-brick $volname $from $to start
All replace-brick commands except commit force are deprecated. Do you want to 
continue? (y/n) y
volume replace-brick: success: replace-brick started successfully
ID: 0062d555-e7eb-4ebe-a264-7e0baf6e7546


You could try the following options, which we used to resolve our replace-brick
scenario; it looks close to your case. gluster heal info has some issues, for
which I requested clarification a few days back.



# gluster volume replace-brick $vol_name $old_brick $new_brick commit force

# gluster volume heal $vol_name full

# gluster volume heal $vol_name info  -- this shows that the Number of files=1,
even though the file is already healed.


Regards,
Subrata
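
A sketch of that sequence applied to the volume from this thread, assuming Server3 is already in the trusted pool and the target brick directory exists on it (illustrative, not a tested recipe):

gluster peer status                       # Server3 should show up as a connected peer
gluster volume replace-brick Storage1 Server1:/exp/br02/brick2 Server3:/exp/br02/brick2 commit force
gluster volume heal Storage1 full
gluster volume heal Storage1 info         # watch the pending-heal count drain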

On 02/14/2015 02:28 PM, Thomas Holkenbrink wrote:
We have tried to migrate a Brick from one server to another using the following 
commands.   But the data is NOT being replicated… and the BRICK is not showing 
up anymore.
Gluster still appears to be working but the Bricks are not balanced and I need 
to add the other Brick for Server3, which I don't want to do until after
Server1:Brick2 gets replicated.

This is the command to create the Original Volume:
[root@Server1 ~]# gluster volume create Storage1 replica 2 transport tcp 
Server1:/exp/br01/brick1 Server2:/exp/br01/brick1 Server1:/exp/br02/brick2 
Server2:/exp/br02/brick2


This is the current configuration BEFORE the migration. Server3 has been peer
probed successfully, but that has been it.
[root@Server1 ~]# gluster --version
glusterfs 3.6.2 built on Jan 22 2015 12:58:11

[root@Server1 ~]# gluster volume status
Status of volume: Storage1
Gluster process                 Port    Online  Pid
--
Brick Server1:/exp/br01/brick1  49152   Y   2167
Brick Server2:/exp/br01/brick1  49152   Y   2192
Brick Server1:/exp/br02/brick2  49153   Y   2172   --- this is the one 
that goes missing
Brick Server2:/exp/br02/brick2  49153   Y   2193
NFS Server on localhost 2049Y   2181
Self-heal Daemon on localhost   N/A Y   2186
NFS Server on Server2   2049Y   2205
Self-heal Daemon on Server2 N/A Y   2210
NFS Server on Server3   2049Y   6015
Self-heal Daemon on Server3 N/A Y   6016

Task Status of Volume Storage1
--
There are no active volume tasks
[root@Server1 ~]# gluster volume info

Volume Name: Storage1
Type: Distributed-Replicate
Volume ID: 9616ce42-48bd-4fe3-883f-decd6c4fcd00
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Server1:/exp/br01/brick1
Brick2: Server2:/exp/br01/brick1
Brick3: Server1:/exp/br02/brick2
Brick4: Server2:/exp/br02/brick2
Options Reconfigured:
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
performance.cache-size: 1024MB
performance.cache-max-file-size: 2MB
performance.cache-refresh-timeout: 1
performance.stat-prefetch: off
performance.read-ahead: on
performance.quick-read: off
performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.write-behind: on
performance.io-thread-count: 32
performance.io-cache: on
network.ping-timeout: 2
nfs.addr-namelookup: off
performance.strict-write-ordering: on
[root@Server1 ~]#



So we start the migration of the Brick to the new server using the
replace-brick command
[root@Server1 ~]# volname=Storage1

[root@Server1 ~]# from=Server1:/exp/br02/brick2

[root@Server1 ~]# to=Server3:/exp/br02/brick2

[root@Server1 ~]# gluster volume replace-brick $volname $from $to start
All replace-brick commands except commit force are deprecated. Do you want to 
continue? (y/n) y
volume replace-brick: success: replace-brick started successfully
ID: 0062d555-e7eb-4ebe-a264-7e0baf6e7546


[root@Server1 ~]# gluster volume replace-brick $volname $from $to status
All replace-brick commands except commit force are deprecated. Do you want to 
continue? (y/n) y
volume replace-brick: success: Number of files migrated = 281   Migration 
complete

At this point everything seems to be in order with no outstanding issues.

[root@Server1 ~]# gluster volume status
Status of volume: Storage1
Gluster process                 Port    Online  Pid

Re: [Gluster-users] O_DIRECT (I think) again

2015-02-15 Thread Joe Julian
Yeah, hard to maintain something that's complete and has no bugs.

On February 15, 2015 8:20:36 AM PST, Artem Kuzmitskiy 
artem.kuzmits...@gmail.com wrote:
Hi,
We're using liboindirect and it works. Regarding it not being maintained:
maintenance isn't really required, because it is a very simple and short
library.

2015-02-13 18:55 GMT+03:00 Anand Avati av...@gluster.org:

 O_DIRECT support in fuse has been for quite some time now, surely
well
 before 3.4

 On Fri, Feb 13, 2015, 02:37 Pedro Serotto pedro.sero...@yahoo.es
wrote:

 Dear All,

 I am actually using the following software stack:

 debian wheezy with kernel 3.2.0-4-amd64, glusterfs 3.6.2, openstack
Juno,
 libvirt 1.2.9.

 If I try to attach a block storage to a running vm, Openstack shows
the
 following error:
 DeviceIsBusy: The supplied device (vdc) is busy.

 If I try to attach a block storage to a running vm, Libvirt shows
the
 following error:
  qemuMonitorTextAddDrive:2621 : operation failed: open disk image
file
 failed

 Looking up for this issue on the web, I found out that Libvirt tries
to
 open the block device by using  O_DIRECT flag on; This last one is
 supported only by fuse for kernel 3.4.
 Therefore, I tried to apply some options (
 http://www.gluster.org/documentation/use_cases/Virt-store-usecase/)
to
 Gluster, but the problem has not been solved.
 I also found  https://github.com/avati/liboindirect but it is old
and
 not mantained.

 Does somebody found himself in the same situation? If yes, could you
 please show me how to solve it by mainteining the same version of my
 software stack.


 ThanksRegards

 Pedro


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users




-- 
Best regards,
Artem Kuzmitskiy
Severnoe DB JSC




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3 Node Replicated Gluster set up as a ring - a good idea?

2015-02-15 Thread Joe Julian
I think switches are much faster at their job than the kernel and avoiding 
their use may be a mistake.

With regard to gluster, latency affects performance so if it's performance 
you're looking for, lowest latency switches would be wiser. 

If it's just network design you're asking for, this might not be the most 
appropriate forum. 

On February 14, 2015 2:19:43 AM PST, Ml Ml mliebher...@googlemail.com wrote:
Hello List,

I would like to build a 3-node cluster. I was thinking of a setup like this:

http://oi58.tinypic.com/2rfgghi.jpg

My question:

Can I set up my network in such a way that it will still work if one
node fails?

Just thinking about it: I want bonding, and then I might even need a bridge
to bring the two sides of the nodes together?

Will I then get a loop? Do I need SPF or something alike?

Could I also use TCP multipath? (I have no experience with TCP
multipath so far.)

What do you think?


Thanks,
Mario

P.S. I want to avoid switches here.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster replicate-brick issues (Distrubuted-Replica)

2015-02-15 Thread Joe Julian
Of those missing files, are they maybe dht link files? Mode 1000, size 0.
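
For anyone checking, a sketch of how DHT link files are usually spotted directly on a brick (paths are illustrative; run on the brick filesystem, not on the client mount):

# Link files are zero-byte, mode ---------T (sticky bit only), and carry a
# trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the data.
find /exp/br02/brick2 -type f -perm 1000 -size 0c
getfattr -d -m . -e hex /exp/br02/brick2/path/to/suspect-file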

On February 14, 2015 12:58:12 AM PST, Thomas Holkenbrink 
thomas.holkenbr...@fibercloud.com wrote:
We have tried to migrate a Brick from one server to another using the
following commands.   But the data is NOT being replicated... and the
BRICK is not showing up anymore.
Gluster still appears to be working but the Bricks are not balanced and
I need to add the other Brick for Server3 that I don't want to do until
after Server1:Brick2 gets replicated.

This is the command to create the Original Volume:
[root@Server1 ~]# gluster volume create Storage1 replica 2 transport
tcp Server1:/exp/br01/brick1 Server2:/exp/br01/brick1
Server1:/exp/br02/brick2 Server2:/exp/br02/brick2


This is the Current configuration BEFORE the migration.. Server3 has
been Peer Probed successfully but that has been it
[root@Server1 ~]# gluster --version
glusterfs 3.6.2 built on Jan 22 2015 12:58:11

[root@Server1 ~]# gluster volume status
Status of volume: Storage1
Gluster process                 Port    Online  Pid
--
Brick Server1:/exp/br01/brick1  49152   Y   2167
Brick Server2:/exp/br01/brick1  49152   Y   2192
Brick Server1:/exp/br02/brick2  49153   Y   2172   --- this is the
one that goes missing
Brick Server2:/exp/br02/brick2  49153   Y   2193
NFS Server on localhost 2049Y   2181
Self-heal Daemon on localhost   N/A Y   2186
NFS Server on Server2   2049Y   2205
Self-heal Daemon on Server2 N/A Y   2210
NFS Server on Server3   2049Y   6015
Self-heal Daemon on Server3 N/A Y   6016

Task Status of Volume Storage1
--
There are no active volume tasks
[root@Server1 ~]# gluster volume info

Volume Name: Storage1
Type: Distributed-Replicate
Volume ID: 9616ce42-48bd-4fe3-883f-decd6c4fcd00
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Server1:/exp/br01/brick1
Brick2: Server2:/exp/br01/brick1
Brick3: Server1:/exp/br02/brick2
Brick4: Server2:/exp/br02/brick2
Options Reconfigured:
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
performance.cache-size: 1024MB
performance.cache-max-file-size: 2MB
performance.cache-refresh-timeout: 1
performance.stat-prefetch: off
performance.read-ahead: on
performance.quick-read: off
performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.write-behind: on
performance.io-thread-count: 32
performance.io-cache: on
network.ping-timeout: 2
nfs.addr-namelookup: off
performance.strict-write-ordering: on
[root@Server1 ~]#



So we start the Migration of the Brick to the new server using the
replace Brick command
[root@Server1 ~]# volname=Storage1

[root@Server1 ~]# from=Server1:/exp/br02/brick2

[root@Server1 ~]# to=Server3:/exp/br02/brick2

[root@Server1 ~]# gluster volume replace-brick $volname $from $to start
All replace-brick commands except commit force are deprecated. Do you
want to continue? (y/n) y
volume replace-brick: success: replace-brick started successfully
ID: 0062d555-e7eb-4ebe-a264-7e0baf6e7546


[root@Server1 ~]# gluster volume replace-brick $volname $from $to
status
All replace-brick commands except commit force are deprecated. Do you
want to continue? (y/n) y
volume replace-brick: success: Number of files migrated = 281  
Migration complete

At this point everything seems to be in order with no outstanding
issues.

[root@Server1 ~]# gluster volume status
Status of volume: Storage1
Gluster process                 Port    Online  Pid
--
Brick Server1:/exp/br01/brick1  49152   Y   2167
Brick Server2:/exp/br01/brick1  49152   Y   2192
Brick Server1:/exp/br02/brick2  49153   Y   27557
Brick Server2:/exp/br02/brick2  49153   Y   2193
NFS Server on localhost 2049Y   27562
Self-heal Daemon on localhost   N/A Y   2186
NFS Server on Server2   2049Y   2205
Self-heal Daemon on Server2 N/A Y   2210
NFS Server on Server3   2049Y   6015
Self-heal Daemon on Server3 N/A Y   6016

Task Status of Volume Storage1
--
Task : Replace brick
ID   : 0062d555-e7eb-4ebe-a264-7e0baf6e7546
Source Brick : Server1:/exp/br02/brick2
Destination Brick: Server3:/exp/br02/brick2
Status   : completed

The volume reports that the replace-brick command completed, so the next step
is to commit the change

[root@Server1 ~]# gluster volume replace-brick $volname $from $to
commit
All replace-brick commands except commit force are deprecated. Do you

[Gluster-users] basic architecture

2015-02-15 Thread Ed Greenberg

Something I don't understand.

If I have two servers running gluster code, each with an attached brick,
is there a master-slave relationship or are they peers?


Supposing I have my brick mounted as the document root on each of the two
servers, and am running an app such as Wordpress or Magento behind a
load balancer:


* If one goes down, and the other one continues, what happens when I 
bring the first one back?  Does the brick rebuild itself on the downed 
server?


* What if I want to bring a third one into the mix? It comes up with an 
empty, or obsolete, brick.  How does the third brick get in sync?


Thanks,

Ed Greenberg

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Migrating from files to libgfapi

2015-02-15 Thread Miloš Kozák

Hello,

I have been looking for a while, but I cannot find the proper
procedure. I have some VMs stored in a shared folder exported over
NFS. Additionally, I am switching our storage to GlusterFS, and I
plan to use native access through libgfapi in libvirt. Is there a way
to import such raw files into glusterfs so that it is possible to
access them via libgfapi?
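
A sketch of one way to do this, assuming a volume named vmstore on host gfs1 and an existing raw image under an NFS mount; names are illustrative, and qemu needs to be built with gluster support for the second and third steps:

# 1) Copy the raw file in through a temporary FUSE mount of the volume.
mount -t glusterfs gfs1:/vmstore /mnt/vmstore
cp /srv/nfs/vm1.img /mnt/vmstore/vm1.img

# 2) Or let qemu-img write straight over libgfapi.
qemu-img convert -f raw -O raw /srv/nfs/vm1.img gluster://gfs1/vmstore/vm1.img

# 3) The libvirt <disk> element then references the image over gfapi, e.g.:
#   <disk type='network' device='disk'>
#     <driver name='qemu' type='raw' cache='none'/>
#     <source protocol='gluster' name='vmstore/vm1.img'>
#       <host name='gfs1' port='24007'/>
#     </source>
#     <target dev='vda' bus='virtio'/>
#   </disk>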


Thank you,
Milos
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] basic architecture

2015-02-15 Thread Atin Mukherjee


On 02/16/2015 04:20 AM, Ed Greenberg wrote:
 Something I don't understand.
 
 If I have two servers running gluster code, each with an attached brick,
 is there a master slave relationship or are they peers?
There is no master-slave relationship here. Each brick can be considered
an individual storage entity.
 
 Supposing I have my brick mounted as the document root on each of the two
 servers, and am running an app such as Wordpress or Magento behind a
 load balancer:
 
 * If one goes down, and the other one continues, what happens when I
 bring the first one back?  Does the brick rebuild itself on the downed
 server?
If you have set up a replicated volume, then once the brick comes back,
self-heal will kick in and bring it back in sync with its replica pair.
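
A sketch of how that heal can be watched and, if needed, nudged along (volume name is illustrative):

gluster volume heal myvol info      # entries still pending heal
gluster volume heal myvol           # heal only the pending entries
gluster volume heal myvol full      # force a full sweep of the volume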
 
 * What if I want to bring a third one into the mix? It comes up with an
 empty, or obsolete, brick.  How does the third brick get in sync?
I believe you are talking about the add-brick command here. In that case, at
the time of provisioning the new brick is ideally empty (we recommend that it
be empty), and then once you kick off a rebalance, this brick will also get
load-balanced along with the other bricks in the cluster.
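
A sketch of both add-brick cases, with illustrative server and brick names:

# Raise the replica count; the new brick is then populated by self-heal.
gluster peer probe server3
gluster volume add-brick myvol replica 3 server3:/export/brick1
gluster volume heal myvol full

# Or add another replica pair to a distributed-replicated volume; data is
# spread onto it by rebalance.
# gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
# gluster volume rebalance myvol start
# gluster volume rebalance myvol status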
 
 Thanks,
 
 Ed Greenberg
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Changelog dir in gluster 3.6

2015-02-15 Thread Félix de Lelelis
Hi,

I don't understand whether the changelog is needed for replica and geo-replica
setups; the filesystem is filling up with a lot of those files in
/.glusterfs/changelogs. Is there any way to reduce or do away with that directory?
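
If geo-replication is not in use, the changelog can normally be switched off; a sketch with an illustrative volume name, worth verifying on a test volume first:

gluster volume set myvol changelog.changelog off
# Already-written files under .glusterfs/changelogs can then be cleaned up
# manually on each brick once nothing consumes them.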


Thanks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Warning: W [socket.c:611:__socket_rwv] 0-management: readv on

2015-02-15 Thread Atin Mukherjee


On 02/16/2015 12:37 PM, Félix de Lelelis wrote:
 Hi,
 
 Last week we upgraded our cluster to version 3.6. I noticed the following
 error in the log:
 
 W [socket.c:611:__socket_rwv] 0-management: readv on
 /var/run/f3fcde54ca5d30115274155a37baa079.socket failed (Invalid argument)
 
 Is it due to the nfs daemon?
It could be. Check whether the nfs daemon is running via the volume status
command. If it is not, then look at nfs.log; most probably the kernel nfs
module is not disabled.
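
A sketch of the checks described above, assuming an illustrative volume name and a stock log location:

gluster volume status myvol nfs            # is the gluster NFS server up?
less /var/log/glusterfs/nfs.log
# Gluster's NFS server needs rpcbind but clashes with the kernel NFS server:
service nfs stop && chkconfig nfs off      # or: systemctl stop nfs-server; systemctl disable nfs-server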

~Atin
 
 Thanks.
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster performance on the small files

2015-02-15 Thread Punit Dambiwal
Hi,

My VM disks are based on RAW...

On Sat, Feb 14, 2015 at 4:08 PM, Samuli Heinonen samp...@neutraali.net
wrote:

 Hi!

 What image type are you using to store the virtual machines? For example,
 using sparse QCOW2 images is much slower than preallocated RAW images.
 Performance with QCOW2 should get better after the image file has grown
 bigger and it is no longer necessary to grow the sparse image.
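
A sketch of creating preallocated images along those lines (sizes, paths and names are illustrative):

# Raw: create and preallocate the full size up front.
qemu-img create -f raw /mnt/vmstore/vm1.raw 40G
fallocate -l 40G /mnt/vmstore/vm1.raw

# QCOW2: at least preallocate the metadata so growth is cheaper.
qemu-img create -f qcow2 -o preallocation=metadata /mnt/vmstore/vm1.qcow2 40G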

 Best regards,
 Samuli Heinonen


 On 13.2.2015, at 8.58, Punit Dambiwal hypu...@gmail.com wrote:

 Hi,

 I have seen that gluster performance is dead slow on small files, even though
 I am using SSDs; the performance is very poor. I even get better performance
 from my SAN with normal SATA disks.

 I am using distributed-replicated glusterfs with replica count = 2, and I have
 all SSD disks on the bricks.

 root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test oflag=dsync

 4096+0 records in

 4096+0 records out

 268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s


 root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test conv=fdatasync

 4096+0 records in

 4096+0 records out

 268435456 bytes (268 MB) copied, 1.80093 s, 149 MB/s


 Thanks,

 Punit
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Warning: W [socket.c:611:__socket_rwv] 0-management: readv on

2015-02-15 Thread Félix de Lelelis
Hi,

Last week we upgraded our cluster to version 3.6. I noticed the following
error in the log:

W [socket.c:611:__socket_rwv] 0-management: readv on
/var/run/f3fcde54ca5d30115274155a37baa079.socket failed (Invalid argument)

Is it due to the nfs daemon?

Thanks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume heal info giving wrong report

2015-02-15 Thread Atin Mukherjee
AFR team (Pranith/Ravi cced), FYA..

~Atin

On 02/13/2015 07:48 PM, Subrata Ghosh wrote:
 Hi All,
 
 Can anyone clarify the issue we are facing with the incorrect heal report
 mentioned below? We are using gluster 3.3.2.
 
 *Issue:*
 
 *Bug 1039544* https://bugzilla.redhat.com/show_bug.cgi?id=1039544
 -[FEAT] gluster volume heal info should list the entries that actually
 required to be healed.
 
 *Steps to recreate*
 
 If we are continuously writing to a file in a volume and, at the same time,
 do the following:
 
 # gluster volume replace-brick $vol_name $old_brick $new_brick commit
 force
 
 # gluster volume heal $vol_name full
 
 # gluster volume heal $vol_name info  -- this shows that the Number of
 files=1, even though the file is already healed.
 
 *Related issues:*
 
 http://comments.gmane.org/gmane.comp.file-systems.gluster.user/14188
 
 *Fix:*
 
 http://review.gluster.org/#/c/7480/
 
 *Query:*
 
 Is the fix that we are trying to back-port to gluster 3.2 the correct
 code change?
 
 Regards,
 
 Subrata
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] read only feature only works after the volume is restarted?

2015-02-15 Thread Atin Mukherjee


On 02/13/2015 02:05 PM, Feng Wang wrote:
 Hi all,
 If we set the read-only feature using the following command in the cli on a
 volume in service, it will not work until the volume is restarted.
That's the current expected behaviour. http://review.gluster.org/#/c/8571/
should address it; however, it hasn't been merged yet.
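
A sketch of the two approaches discussed in this thread, with illustrative names and assuming the option is spelled features.read-only on this release:

gluster volume set vol0 features.read-only on     # on 3.6 this only takes effect after a restart
gluster volume stop vol0
gluster volume start vol0

# Client-side alternative: a read-only mount, leaving the volume itself writable.
mount -t glusterfs -o ro server1:/vol0 /mnt/vol0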

 gluster volume set vol-name features.readonly on
 It means that the service must be stopped temporarily. Does this make sense?
 An alternative method is re-mounting the glusterfs client with the -o ro option,
 so both approaches require the service to be interrupted. Could we have a way to
 make the volume read-only, after flushing the in-flight requests, without
 stopping the service?
  Best Regards,
 Fang Huang
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users