Re: [Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case

2017-10-31 Thread Paul Cuzner
Just wanted to pick up on the EC for vm storage domains option..



> > * Erasure coded volumes with sharding - seen as a good fit for VM disk
> > storage
>
> I am working on this with a customer; we have been able to do 400-500 MB/sec
> writes!  Normally things max out at ~150-250 MB/sec.  The trick is to use
> multiple files, create the lvm stack and use native LVM striping.  We have
> found that 4-6 files seems to give the best perf on our setup.  I don't
> think we are using sharding on the EC vols, just multiple files and LVM
> striping.  Sharding may be able to avoid the LVM striping, but I bet
> dollars to doughnuts you won't see this level of perf :)  I am working on a
> blog post for RHHI and RHEV + RHS performance where I am able to in some
> cases get 2x+ the performance out of VMs / VM storage.  I'd be happy to
> share my data / findings.
>
>
The main reason IIRC for sharding was to break down the vdisk image file
into smaller chunks to improve self heal efficiency. With EC the vdisk
image is already split, so do we really need sharding as well - especially
given Ben's findings?
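For intuition on why spreading I/O across several image files helps, here is a toy sketch (my own illustration, not Ben's actual setup) of the round-robin chunk mapping that LVM striping applies across its backing devices:

```python
def stripe_map(offset, stripe_size, n_devices):
    """Map a logical byte offset to (device index, offset within device)
    under round-robin striping, the same idea LVM striping uses."""
    chunk = offset // stripe_size        # which stripe chunk the offset falls in
    dev = chunk % n_devices              # chunks rotate across devices
    stripe_row = chunk // n_devices      # full chunks already placed on this device
    return dev, stripe_row * stripe_size + offset % stripe_size
```

Because consecutive chunks land on different devices (here, different image files on different EC subvolumes), sequential I/O is spread across them and can proceed in parallel, which is consistent with the throughput Ben reports.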
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] CFP for Gluster Developer Summit

2016-08-31 Thread Paul Cuzner
Sounds great!

I had to knit together different CLI commands in the past for 'gstatus' to
provide a view of the cluster - so this is cool.

Would it be possible to add an example of the output to the RFE BZ 1353156?
Paul C

On Wed, Aug 31, 2016 at 6:13 PM, Samikshan Bairagya 
wrote:

> Hi all,
>
> I'd like to propose the following talk for Gluster Developer Summit 2016.
>
> Title: How an external application looking to integrate with Gluster can
> use the CLI to get the state of a cluster
>
> Theme: Experience (Developers integrating Gluster with other ecosystems)
>
> Gluster 3.9 will have a new CLI that can be used to get the local state
> representation of a cluster. This can be used by external applications
> (like storage managers) to get a representation of the entire state of a
> cluster. I plan to talk about this during the summit and will cover the
> following:
>
> - Introduction
> - List of data points covered in the state representation
> - How to consume this CLI
> - Discussion on what other data points might need to be added later on.
> - Demo (External application representing the state of a cluster using
> data obtained from the CLI)
>
> Thanks and Regards,
>
> Samikshan
>
> On 08/13/2016 01:18 AM, Vijay Bellur wrote:
>
>> Hey All,
>>
>> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
>> looking to have talks and discussions related to the following themes in
>> the summit:
>>
>> 1. Gluster.Next - focusing on features shaping the future of Gluster
>>
>> 2. Experience - Description of real world experience and feedback from:
>>a> Devops and Users deploying Gluster in production
>>b> Developers integrating Gluster with other ecosystems
>>
>> 3. Use cases  - focusing on key use cases that drive Gluster.today and
>> Gluster.Next
>>
>> 4. Stability & Performance - focusing on current improvements to reduce
>> our technical debt backlog
>>
>> 5. Process & infrastructure  - focusing on improving current workflow,
>> infrastructure to make life easier for all of us!
>>
>> If you have a talk/discussion proposal that can be part of these themes,
>> please send out your proposal(s) by replying to this thread. Please
>> clearly mention the theme for which your proposal is relevant when you
>> do so. We will be ending the CFP by 12 midnight PDT on August 31st, 2016.
>>
>> If you have other topics that do not fit in the themes listed, please
>> feel free to propose and we might be able to accommodate some of them as
>> lightning talks or something similar.
>>
>> Please do reach out to me or Amye if you have any questions.
>>
>> Thanks!
>> Vijay
>>
>> [1] https://www.gluster.org/events/summit2016/
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] [Gluster-devel] Non Shared Persistent Gluster Storage with Kubernetes

2016-07-05 Thread Paul Cuzner
Just to pick up on how the block device is defined. I think sharding is the
best option - it's already the 'standard' for virtual disks, and the image
files for iSCSI are no different in my mind. They have pretty much the same
requirements around sizing, fault tolerance and recovery.

Let's keep it simple :)


On Wed, Jul 6, 2016 at 6:54 AM, Shyam  wrote:

> On 07/01/2016 01:45 AM, B.K.Raghuram wrote:
>
>> I have not gone through this implementation nor the new iscsi
>> implementation being worked on for 3.9 but I thought I'd share the
>> design behind a distributed iscsi implementation that we'd worked on
>> some time back based on the istgt code with a libgfapi hook.
>>
>> The implementation used the idea of using one file to represent one
>> block (of a chosen size) thus allowing us to use gluster as the backend
>> to store these files while presenting a single block device of possibly
>> infinite size. We used a fixed file naming convention based on the block
>> number which allows the system to determine which file(s) needs to be
>> operated on for the requested byte offset. This gave us the advantage of
>> automatically accessing all of gluster's file based functionality
>> underneath to provide a fully distributed iscsi implementation.
>>
>> Would this be similar to the new iscsi implementation thats being worked
>> on for 3.9?
>>
>
> 
>
> Ultimately the idea would be to use sharding, as a part of the gluster
> volume graph, to distribute the blocks (or rather shard the blocks), rather
> than having the disk image on one distribute subvolume and hence scale disk
> sizes to the size of the cluster. Further, sharding should work well here,
> as this is a single client access case (or are we past that hurdle
> already?).
>
> What this achieves is similar to the iSCSI implementation that you talk
> about, but gluster doing the block splitting and hence distribution, rather
> than the iSCSI implementation (istgt) doing the same.
>
> < I did a cursory check on the blog post, but did not find a shard
> reference, so maybe others could pitch in here, if they know about the
> direction>
>
> Further, in your original proposal, how do you maintain device properties,
> such as size of the device and used/free blocks? I ask about used and free,
> as that is an overhead to compute, if each block is maintained as a
> separate file by itself, or difficult to achieve consistency of the size
> and block update (as they are separate operations). Just curious.
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
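The one-file-per-block convention Raghuram describes can be sketched roughly as follows; the 4 MiB block size and the block-NNNNNNNN file naming are illustrative assumptions, not the actual istgt-based implementation:

```python
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB per backing file (illustrative choice)

def block_files_for_io(offset, length):
    """Return (file name, in-file offset, byte count) tuples covering a
    byte range of the virtual block device. The fixed naming convention
    lets the target compute which files to touch for any requested offset."""
    out = []
    pos = offset
    end = offset + length
    while pos < end:
        blk = pos // BLOCK_SIZE              # which backing file
        in_off = pos % BLOCK_SIZE            # where inside that file
        n = min(BLOCK_SIZE - in_off, end - pos)
        out.append((f"block-{blk:08d}", in_off, n))
        pos += n
    return out
```

An I/O that straddles a block boundary simply resolves to two files, which is what lets gluster's normal file distribution spread the device across the cluster.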

Re: [Gluster-users] How to identify a files shard?

2016-04-24 Thread Paul Cuzner
Just wondering how shards can silently be different across bricks in a
replica? Lindsay caught this issue due to her due diligence taking on 'new'
tech - and resolved the inconsistency, but tbh this shouldn't be an admin's
job :(



On Sun, Apr 24, 2016 at 7:06 PM, Krutika Dhananjay 
wrote:

> OK. Under normal circumstances it should have been possible to heal a
> single file by issuing a lookup on it (==> stat on the file from the
> mountpoint). But with shards this won't work. We take care not to expose
> /.shard on the mountpoint, and as a result any attempt to issue lookup on a
> shard from the mountpoint will be met with an 'operation not permitted'
> error.
>
> -Krutika
>
> On Sun, Apr 24, 2016 at 11:42 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 24/04/2016 2:56 PM, Krutika Dhananjay wrote:
>>
>>> Nope, it's not necessary for them to all have the xattr.
>>>
>>
>> Thats good :)
>>
>>
>>> Do you see anything at least in .glusterfs/indices/dirty on all bricks?
>>>
>>
>> I checked, dirty dir empty on all bricks
>>
>> I used diff3 to compare the checksums of the shards and it revealed that
>> seven of the shards were the same on two bricks (vna & vng) and one of the
>> shards was the same on two other bricks (vna & vnb). Fortunately none were
>> different on all 3 bricks :)
>>
>> Using the checksum as a quorum I deleted all the singleton shards (7 on
>> vnb, 1 on vng), touched the file owner and issule a "heal full". All 8
>> shards were restored with matching checksums for the other two bricks. A
>> rechack of the entire set of shards for the vm showed all 3 copies as
>> identical and the VM itself is functioning normally.
>>
>> Its one way to manually heal up shard mismatches which gluster hasn't
>> detected, if somewhat tedious. Its a method which lends itself to
>> automation though.
>>
>>
>> Cheers,
>>
>>
>> --
>> Lindsay Mathieson
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
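Lindsay's manual procedure above - treat the checksum that the majority of bricks agree on as authoritative and delete the odd one out - does lend itself to automation. A minimal sketch of the quorum step (my own illustration, not a gluster tool):

```python
from collections import Counter

def shard_quorum(checksums):
    """Given {brick: checksum} for one shard, return (majority_checksum,
    [bricks whose copy should be deleted and healed]), or (None, []) when
    no strict majority exists and a human must decide."""
    counts = Counter(checksums.values())
    best, n = counts.most_common(1)[0]
    if n <= len(checksums) // 2:          # no strict majority: don't guess
        return None, []
    return best, [b for b, c in checksums.items() if c != best]
```

With a replica 3 volume this reproduces Lindsay's decision: two matching copies win, the singleton is deleted, and a "heal full" restores it.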

Re: [Gluster-users] Does gluster have a "scrub"?

2016-04-13 Thread Paul Cuzner
Hi Lindsay,

As I understand it, the current logic of bitd/scrubd does not address the
problem you asked about
"a process where all replicas are compared for inconsistencies."

bitd/scrubd operate independently within each node, signing each file and
validating the checksum - which is part of the answer. It basically
protects the files on a brick from silent data corruption - it does not
ensure that the replicas are consistent with each other.

I believe that replica compare and automatic healing are features being
worked on for a future release - not sure of timing, but maybe one of the
devs can chime in.

PC



On Thu, Apr 14, 2016 at 5:54 AM, Kaleb S. KEITHLEY 
wrote:

> On 04/13/2016 01:41 PM, Alan Millar wrote:
> >> Please refer to Bitrot Feature:
> >
> >>
> http://www.gluster.org/community/documentation/index.php/Features/BitRot
> >
> >> I suppose it is already quite mature because it was already listed as a
> >
> >> feature on RedHat Gluster Storage 3.2 Administration Guide.
> >
> > If you are only looking for *detection*, it sounds like it is mature and
> working.  If you want automatic *correction* on a replicated volume (like
> you find in ZFS or BTRFS), that doesn't appear to exist yet.
> >
> >
> > I can't find an RHGS 3.2 admin guide, but I can find an RHGS 3.1 guide at
> >
>
> RHGS 3.2 hasn't been released yet.  RHGS 3.1.2 is the current version.
>
> --
>
> Kaleb
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] Blog on Hyperconverged Infrastructure using oVirt and Gluster

2016-01-13 Thread Paul Cuzner
Copying the VMs across the storage domains can be done with the storage
migrate feature. I did see a few problems in the past with migrating
running VMs in this way, but powered-off VMs were fine.

It's not the fastest process, though!

On Wed, Jan 13, 2016 at 4:28 AM, Krutika Dhananjay 
wrote:

> Hi Nicolas,
>
> Only files created _after_ features.shard is enabled will be 'sharded'.
> You could create a volume with sharding enabled and copy the images from
> your original volume into this new volume.
> If you don't like the feature and want to revert to the old state for some
> reason, then you would need to create a new volume without sharding and
> copy all your images from the sharded volume into this new volume.
> The different pieces of the file will get naturally stitched together once
> they land in the new volume.
>
> HTH,
> Krutika
>
>
> --
>
> *From: *"Nicolas Ecarnot" 
> *To: *gluster-users@gluster.org
> *Sent: *Tuesday, January 12, 2016 7:35:16 PM
> *Subject: *Re: [Gluster-users] Blog on Hyperconverged Infrastructure
> using oVirt and Gluster
>
>
> Le 12/01/2016 12:40, Ramesh Nachimuthu a écrit :
> > Hi Folks,
> >
> >Have you ever wondered about Hyperconverged Ovirt and Gluster Setup.
> > Here is an answer[1]. I wrote a blog explaining how to setup oVirt in a
> > hyper-converged mode with Gluster.
> >
> > [1]
> > http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html
> >
> > Regards,
> > Ramesh
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>
> Hi Ramesh,
>
> Thank you for this very detailed and nice post.
> We're using a similar setup here in one of our DC, except the sharding
> we weren't aware of.
> Is it possible to activate the sharding afterward, or is it something to
> be done before filling the volume?
> Is it revertible?
>
> --
> Nicolas ECARNOT
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] Test results and Performance Tuning efforts ...

2015-10-12 Thread Paul Cuzner
I don't think so. The workaround affects how glusterd works; the
performance benefit from epoll is in the glusterfsd daemons - AFAIK.

Perhaps one of the devs can chime in to confirm the impact.


On Tue, Oct 13, 2015 at 2:53 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

>
> On 13 October 2015 at 11:51, Paul Cuzner  wrote:
>
>> add
>> # for epoll issue glusterd crash fix
>> option ping-timeout 0
>> option event-threads  1
>>
>> to your glusterd.vol files (/etc/glusterfs/glusterd.vol)
>
>
>
>
>
> Thanks, yah I saw that.
>
> Won't that remove the performance benefits though?
>
> --
> Lindsay
>

Re: [Gluster-users] Test results and Performance Tuning efforts ...

2015-10-12 Thread Paul Cuzner
Hi,

*IF* you're seeing crashes in glusterd, Atin sent out a workaround that
needs to be applied to 3.7.x to avoid the issue (introduced with epoll):

add
# for epoll issue glusterd crash fix
option ping-timeout 0
option event-threads  1

to your glusterd.vol files (/etc/glusterfs/glusterd.vol)




On Tue, Oct 13, 2015 at 2:09 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

>
> On 13 October 2015 at 07:54, Ben Turner  wrote:
>
>> Random IO has vastly improved with MT epoll introduced in 3.7, try a
>> test on 3.7 with server and client event threads set to 4.  If you want to
>> confirm this before you upgrade run top -H during your testing and look for
>> a hot thread(single thread / CPU pegged at 100%).  If you see this during
>> your runs on 3.6 then the MT epoll implementation in 3.7 will def help
>> you out.
>>
>
>
> Not 100%, but it gets up to 90%
>
> Oddly, the feature list seems to show it as being done in 3.6 (
> http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf#multi-thread-epoll
> )
>
> Seems to be causing 3.7.4 to crash regularly at the moment too :(
>
>
> --
> Lindsay
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] How to clear volume options

2015-10-10 Thread Paul Cuzner
yep, try: gluster volume reset <VOLNAME> <OPTION>

Paul C

On Sun, Oct 11, 2015 at 11:30 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> Once set, is there any way to "unset" a volume option, so that it returns
> to its default value.
>
> --
> Lindsay
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] Keeping it Simple replication for HA

2015-09-14 Thread Paul Cuzner
Have you considered a disperse volume? We'd normally advocate 6 servers
for a +2 redundancy factor (4 data + 2 redundancy bricks), though.

Paul C
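To spell out the arithmetic behind both points - the CLI's complaint (quoted below) that the brick count must be a multiple of the replica count, and the 6-server / +2 disperse suggestion - here is a small sketch. The bricks > 2 x redundancy sanity check is my assumption of what the CLI enforces, not a quote from the docs:

```python
def replica_layout_ok(bricks, replica):
    """The volume-create rule quoted below: brick count must be a
    multiple of the replica count."""
    return bricks % replica == 0

def disperse_layout(bricks, redundancy):
    """Capacity / fault-tolerance arithmetic for a disperse volume:
    n = k + r bricks tolerates r failures and exposes k bricks of capacity."""
    if bricks <= 2 * redundancy:          # assumed CLI sanity check
        raise ValueError("need bricks > 2 * redundancy")
    data = bricks - redundancy
    return {"data_bricks": data, "tolerates_failures": redundancy,
            "usable_fraction": data / bricks}
```

So a 6-brick disperse volume with redundancy 2 survives any 2 brick failures while keeping two thirds of the raw capacity usable, which is close to what Aaron is asking for.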

On Tue, Sep 15, 2015 at 5:47 AM,  wrote:

> Gluster users,
>
> I am looking to implement GlusterFS on my network for large, expandable,
> and redundant storage.
>
> I have 5 servers with 1 brick each.  All I want is a simple replication
> that requires at least 3 of the 5 bricks have a copy of the data so I can
> lose any 2 bricks without data loss.  I have tried replica 3 with 5 bricks
> but it seems to complain that my # of bricks must be a multiple of of my
> replica count.
>
> Is there a simple replication method like replica=3 with 5 bricks with
> glusterfs?  Is there a better or different technology I can use for this?
>
> Thanks,
> Aaron
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>

Re: [Gluster-users] [Gluster-devel] [RFC] GlusterFS Operations Guide

2014-06-03 Thread Paul Cuzner
This is a really good initiative Lala. 

Anything that helps Operations folks always gets my vote :) 

I've added a few items to the etherpad. 

Cheers, 

PC 

- Original Message -

> From: "Lalatendu Mohanty" 
> To: gluster-users@gluster.org, gluster-de...@gluster.org
> Sent: Friday, 30 May, 2014 11:33:35 PM
> Subject: Re: [Gluster-users] [Gluster-devel] [RFC] GlusterFS Operations Guide

> On 05/30/2014 04:50 PM, Lalatendu Mohanty wrote:

> > I think it is time to create an operations/ops guide for GlusterFS.
> > Operations guide should address issues which administrators face while
> > running/maintaining GlusterFS storage nodes. Openstack project has an
> > operations guide [2] which tries to address similar issues and it is pretty
> > cool.
> 

> > IMO these are typical/example questions which an operations guide
> > should try to address.
> 

> > * Maintenance, Failures, and Debugging
> >   * What are the steps for planned maintenance of a GlusterFS node?
> >   * Steps for replacing a failed node?
> >   * Steps to decommission a brick?
> > * Logging and Monitoring
> >   * Where are the log files?
> >   * How to find out if self-heal is working properly?
> >   * Which log files to monitor for detecting failures?

> > The operations guide needs a good amount of work, hence we all need to
> > come together for this. You can contribute in either of the following ways:
> >
> > * Let others know about the questions you want answered in the
> > operations guide. (I have set up an etherpad for this [1])
> 
> > * Answer the questions/issues raised by others.
> 

> > Comments, suggestions?
> 
> > Should this be part of gluster code base i.e. /doc or somewhere else?
> 

> > [1] http://titanpad.com/op-guide
> 
> > [2] http://docs.openstack.org/ops/oreilly-openstack-ops-guide.pdf
> 

> > Thanks,
> 
> > Lala
> 
> > #lalatenduM on freenode
> 

> > ___
> 
> > Gluster-devel mailing list gluster-de...@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 

> Resending after fixing a few typos.

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] User-serviceable snapshots design

2014-05-05 Thread Paul Cuzner
Just one question about how you might filter the snapshot view from a
user's perspective.

In the "considerations" section, it states - "We plan to introduce a 
configurable option to limit the number of snapshots visible under the USS 
feature." 
Would it not be possible to use the metadata from the snapshots to form a
tree hierarchy when the number of snapshots present exceeds a given threshold,
effectively organising the snaps by time? I think this would work better from
an end-user workflow perspective.

i.e. 
.snaps
\/ Today
+-- snap01_20140503_0800
+-- snap02_20140503_1400
> Last 7 days
> 7-21 days
> 21-60 days
> 60-180 days
> 180+ days
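For what it's worth, the age buckets above could be computed purely from each snapshot's timestamp; a rough sketch:

```python
from datetime import date

def age_bucket(snap_date, today):
    """Assign a snapshot to one of the age ranges proposed above,
    based only on how many days old it is."""
    days = (today - snap_date).days
    for limit, label in [(1, "Today"), (7, "Last 7 days"), (21, "7-21 days"),
                         (60, "21-60 days"), (180, "60-180 days")]:
        if days < limit:
            return label
    return "180+ days"
```

Since snapshot names already embed a timestamp, building the tree would need no extra metadata beyond parsing the name.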

> From: "Anand Subramanian" 
> To: gluster-de...@nongnu.org, "gluster-users" 
> Cc: "Anand Avati" 
> Sent: Saturday, 3 May, 2014 2:35:26 AM
> Subject: [Gluster-users] User-serviceable snapshots design

> Attached is a basic write-up of the user-serviceable snapshot feature
> design (Avati's). Please take a look and let us know if you have
> questions of any sort...

> We have a basic implementation up now; reviews and upstream commit
> should follow very soon over the next week.

> Cheers,
> Anand

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Survey of GlusterFS implementation details?

2014-04-09 Thread Paul Cuzner
I have a questionnaire (google form) that could be used for this. 

Right now, the form asks high level things like; 
- glusterfs version 
- node count 
- capacity 
- protocols in use 
- workload type 

It would be simple to extend to cover the additional items that Alan and
Carlos have mentioned.

Let me know if you guys are happy with this approach. 

Personally, I'd like to see this become an annual data point to track how we 
use gluster over time. 

- Original Message -

> From: "John Mark Walker" 
> To: "Vijay Bellur" 
> Cc: "Alan Orth" , "gluster-users"
> , "Paul Cuzner" 
> Sent: Thursday, 10 April, 2014 6:04:03 AM
> Subject: Re: [Gluster-users] Survey of GlusterFS implementation details?

> Hi Alan,

> It would be a great idea to have a form online somewhere that people could
> submit their use cases and deployment details. We used to have something
> like that, way back when.

> What other things, besides what you've listed, would be useful to know? What
> should we offer in return to those who fill out the form?

> -JM

> - Original Message -
> > On 04/09/2014 12:21 PM, Alan Orth wrote:
> > > Hi, all.
> > >
> > > As I was tinkering with my GlusterFS setup, I was just thinking about
> > > how useful it would be to have some sort of stats on peoples' GlusterFS
> > > implementations. Things like:
> > >
> > > - Ethernet or Infiniband?
> > > - TCP or RDMA?
> > > - Fiber or copper?
> > > - RAID or JBOD?
> > > - Hardware RAID or software (md?) RAID
> > > - XFS or ext4 or ZFS or btrfs?
> > > - RHEL or Ubuntu?
> > >
> > > This could lower the barrier of entry to people who are new to GlusterFS
> > > and need help making decisions. Also, the data would just be
> > > reallly interesting!
> > >
> > > It could be as simple as a Survey Monkey, unless someone has a better
> > > idea for something more formal / professional...
> > >
> >
> > Sounds like a very good idea.
> >
> > JM and Paul have dabbled with a similar idea and might have a ready
> > questionnaire as well. Seems like a good thing to collaborate on and
> > have it rolling.
> >
> > Thanks,
> > Vijay
> >
> >

Re: [Gluster-users] Gluster-Deploy

2014-04-08 Thread Paul Cuzner
Hi Jimmy, 

gluster-deploy is useful for performing first-time setup once the gluster
nodes are built.

If you need a tool that will build the environment from the ground up, James' 
puppet-gluster module is probably the better option. 

My tool needs the gluster rpms to be in place already. 

Cheers, 

Paul C 

- Original Message -

> From: "Jimmy Lu" 
> To: gluster-users@gluster.org
> Sent: Wednesday, 9 April, 2014 5:05:38 AM
> Subject: [Gluster-users] Gluster-Deploy

> Hello Gluster Guru,

> I'd like to deploy about 5 nodes of gluster using gluster-deploy. Would someone
> please point me to the link where I can download? I do not see it in the
> repo. I am using rhel on my 5 nodes.

> Thanks in advance!

> -Jimmy

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] HealthChecks for Gluster

2014-03-09 Thread Paul Cuzner

Hi all, 

I found the gluster CLI's lack of a health check command to be pretty
annoying, and I'm sure many of you have found the same thing!

I'm equally sure that a great many of you have put scripts together to bridge 
this functionality gap - but, why have one tool when you can have more :) 

I've written a python tool called gstatus that's available on the forge (
https://forge.gluster.org/gstatus ). It performs a number of health checks on a
cluster, providing a point-in-time view of what's going on. 

At the moment it's dependent on glusterfs 3.4 (since I'm using the xml output
from various gluster commands), but it shows things like 

[root@glfs35-1 gstatus]# gstatus -a

Status: HEALTHY                 Capacity: 80.00 GiB (raw bricks)
Glusterfs: 3.5.0beta3                     60.00 GiB (usable)

Nodes     : 4/4                 Volumes:  2 Up
Self Heal : 4/4                           0 Up (Degraded)
Bricks    : 8/8                           0 Up (Partial)
                                          0 Down

Volume Information
  myvol   UP - 4/4 bricks up - Distributed-Replicate
          Capacity: (0% used) 77.00 MiB/20.00 GiB (used/total)
          Self Heal: 4/4  All files in sync
          Protocols: glusterfs:on  NFS:off  SMB:off

  dist    UP - 4/4 bricks up - Distribute
          Capacity: (0% used) 129.00 MiB/40.00 GiB (used/total)
          Self Heal: N/A
          Protocols: glusterfs:on  NFS:on  SMB:on

Status Messages
  - Cluster is HEALTHY, all checks successful

You install it with " python setup.py install " (you'll need python-setuptools 
rpm). However, if you're just curious, you can take a look at the examples 
directory in the download archive to see how the tool reports on various error 
scenarios (nodes down, bricks down etc).

It's at version 0.45, so it's still early days and subject to the limited 
scenarios I can throw at it - so there will be bugs! 

Anyway, if you have the time give it a go and see if it helps you. And if it 
misses the mark in your environment, let me know what else it should do and 
why. 

Cheers, 

Paul C 
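The per-volume rollup in the sample output (UP / UP(Degraded) / UP(Partial) / Down) boils down to logic along these lines; this is an illustration of the idea, not gstatus's actual code:

```python
def volume_state(bricks_up, bricks_total, replicated):
    """Classify one volume the way the summary output does:
    UP            - all bricks reachable
    UP(Degraded)  - a replica is missing, but all data still reachable
    UP(Partial)   - bricks down on a non-replicated layout, data unreachable
    DOWN          - nothing reachable."""
    if bricks_up == bricks_total:
        return "UP"
    if bricks_up == 0:
        return "DOWN"
    return "UP(Degraded)" if replicated else "UP(Partial)"
```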





Re: [Gluster-users] mounting a gluster volume

2014-02-25 Thread Paul Cuzner
That'd work! 

- Original Message -

> From: "Xavier Hernandez" 
> To: "Paul Cuzner" , "Joe Julian" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, 25 February, 2014 10:20:42 PM
> Subject: Re: [Gluster-users] mounting a gluster volume

> Hi Paul,

> if peers are identified by host names instead of IP addresses, the vol files
> will contain names that can be translated to the public addresses of the
> brick nodes. This can be done using custom hosts files, different DNS
> configurations or a mix.

> For example you could have a hosts file in the brick nodes that resolve all
> host names to the IP addresses of the internal network. On the public
> network you can have a DNS configured to resolve the same host names to the
> public addresses of the same nodes.

> Xavi

> El 25/02/14 10:06, Paul Cuzner ha escrit:

> > Hi Joe,
> 

> > Bernhard has his gluster environment on a private network, so won't that
> > mean
> > that the vol file that would be sent to a gluster client will contain the
> > address on the private network - which won't be reachable from his client
> > side network?
> 

> > PC
> 

> > - Original Message -
> 

> > > From: "Joe Julian" 
> > 
> 
> > > To: "Bernhard Glomm"  ,
> > > gluster-users@gluster.org
> > 
> 
> > > Sent: Tuesday, 25 February, 2014 12:00:12 PM
> > 
> 
> > > Subject: Re: [Gluster-users] mounting a gluster volume
> > 
> 

> > > GlusterFS listens on all addresses so it's simply a matter of having your
> > > hostnames resolve to the IP you want any particular node to resolve to.
> > 
> 

> > > On February 24, 2014 7:36:17 AM PST, Bernhard Glomm
> > >  wrote:
> > 
> 
> > > > Hi all
> > > 
> > 
> 

> > > > I have a replica 2 glustervolume.
> > > 
> > 
> 
> > > > between hostA and hostB
> > > 
> > 
> 
> > > > both hosts are connected through a private network/crosslink
> > > 
> > 
> 
> > > > which has addresses in a distinguished network.
> > > 
> > 
> 
> > > > The server have another set of interfaces facing the client side -
> > > 
> > 
> 
> > > > (on a different network address range)
> > > 
> > 
> 
> > > > Is there a way that a client can mount a glustervolume
> > > 
> > 
> 
> > > > without enabling ipforward on the server?
> > > 
> > 
> 

> > > > TIA
> > > 
> > 
> 

> > > > Bernhard
> > > 
> > 
> 

> > > > Gluster-users mailing list Gluster-users@gluster.org
> > > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> > > 
> > 
> 

> > > --
> > 
> 
> > > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > 
> 
> > > ___
> > 
> 
> > > Gluster-users mailing list
> > 
> 
> > > Gluster-users@gluster.org
> > 
> 
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> > 
> 

> > ___
> 
> > Gluster-users mailing list Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

Re: [Gluster-users] mounting a gluster volume

2014-02-25 Thread Paul Cuzner
Hi Joe, 

Bernhard has his gluster environment on a private network, so won't that mean 
that the vol file that would be sent to a gluster client will contain the 
address on the private network - which won't be reachable from his client side 
network? 

PC 

- Original Message -

> From: "Joe Julian" 
> To: "Bernhard Glomm" , gluster-users@gluster.org
> Sent: Tuesday, 25 February, 2014 12:00:12 PM
> Subject: Re: [Gluster-users] mounting a gluster volume

> GlusterFS listens on all addresses so it's simply a matter of having your
> hostnames resolve to the IP you want any particular node to resolve to.

> On February 24, 2014 7:36:17 AM PST, Bernhard Glomm
>  wrote:
> > Hi all
> 

> > I have a replica 2 glustervolume.
> 
> > between hostA and hostB
> 
> > both hosts are connected through a private network/crosslink
> 
> > which has addresses in a distinguished network.
> 
> > The server have another set of interfaces facing the client side -
> 
> > (on a different network address range)
> 
> > Is there a way that a client can mount a glustervolume
> 
> > without enabling ipforward on the server?
> 

> > TIA
> 

> > Bernhard
> 

> > Gluster-users mailing list
> 
> > Gluster-users@gluster.org
> 
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] mounting a gluster volume

2014-02-24 Thread Paul Cuzner
having a front end network and a back end network is a common approach to 
handling "legacy" protocols like SMB and NFS. 

Your front end network provides one entry point, and the backend network would 
support the node inter-connects and gluster client connections. 

As it stands today, I'm not aware of a simple way to make native gluster mounts 
go through the front-end network without the ipforward approach. 

Perhaps others on this list can enlighten us both! 
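One common workaround (not a gluster feature as such) is split-horizon name
resolution: since the native client resolves the brick hostnames itself, you can
point those names at the front-end addresses on the client side, while the
servers keep resolving them to the crosslink. A hypothetical /etc/hosts fragment
on the client - all names and addresses below are made up:

```
# Client-side /etc/hosts (hypothetical): the hostnames the bricks were
# defined with, mapped to the servers' front-end (client-facing) addresses.
# On the servers themselves, hostA/hostB continue to resolve to the
# private crosslink addresses.
192.0.2.11   hostA
192.0.2.12   hostB
```

This only helps if the volume was created using hostnames rather than the
private IP addresses directly.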

- Original Message -

> From: "Bernhard Glomm" 
> To: gluster-users@gluster.org
> Sent: Tuesday, 25 February, 2014 4:36:17 AM
> Subject: [Gluster-users] mounting a gluster volume

> Hi all

> I have a replica 2 glustervolume.
> between hostA and hostB
> both hosts are connected through a private network/crosslink
> which has addresses in a distinguished network.
> The server have another set of interfaces facing the client side -
> (on a different network address range)
> Is there a way that a client can mount a glustervolume
> without enabling ipforward on the server?

> TIA

> Bernhard

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is replica data usable by clients?

2014-02-16 Thread Paul Cuzner
Hi Elias, 

Online replicas will be used to satisfy client requests. You'll typically see 
a delay when accessing files while the cluster determines that the node/brick 
is dead, and then access resumes using the remaining brick(s). 

Gluster tracks updates made while a brick is offline, so that when that brick 
returns, self-heal can get to work and catch up with the changes made on the 
surviving brick(s). 

Does that make sense? 

Cheers, 

PC 

- Original Message -

> From: "Elías David" 
> To: "gluster-users" 
> Sent: Sunday, 16 February, 2014 12:26:09 PM
> Subject: [Gluster-users] Is replica data usable by clients?

> Suppose you have a vol made up of 4 bricks in a replica 2 set, so brick 2 is
> a replica of brick 1 and brick 4 is a replica of brick 3.

> Now if brick 1 is unavailable for whatever reason, are clients able to use
> the data directly stored on the replica? Or the data in a replica brick is
> just used to heal/rebalance/recreate bricks?
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question on replicated volumes with bricks on the same server

2014-02-12 Thread Paul Cuzner
Hi Antonio, 

There's also Fred's project on the gluster forge that shows you the brick 
relationships too: 

https://forge.gluster.org/lsgvt 

To give you an idea, here's an example 

[root@rhs1-1 bin]# lsgvt.py 
Topology for volume myvol: 
Distribute set 
│ 
├ Replica set 0 
│ │ 
│ ├ Brick 0: rhs1-1:/gluster/brick1 
│ │ 
│ └ Brick 1: rhs1-3:/gluster/brick1 
│ 
└ Replica set 1 
│ 
├ Brick 0: rhs1-2:/gluster/brick1 
│ 
└ Brick 1: rhs1-4:/gluster/brick1 

Cheers, 
Paul C 

- Original Message -

> From: "Vijay Bellur" 
> To: "Antonio Messina" ,
> gluster-users@gluster.org
> Sent: Wednesday, 12 February, 2014 11:21:50 PM
> Subject: Re: [Gluster-users] Question on replicated volumes with bricks on
> the same server

> On 02/11/2014 03:21 PM, Antonio Messina wrote:
> > Hi all,
> >
> > I would like to know how gluster distribute the data when two bricks
> > of the same volumes are on the same server. Specifically, I would like
> > to know if there is any way to spread the replicas on different nodes
> > whenever possible, in order not to lose any data if the node goes
> > down.
> >
> > I did a simple test and it seems that the way replicas are spread over
> > the bricks is related to the way the volume is created, that is if I
> > create a volume with:
> >
> > gluster volume create vol1 replica 2\
> > gluster-data001:/srv/gluster/vol1.1 \
> > gluster-data001:/srv/gluster/vol1.2 \
> > gluster-data002:/srv/gluster/vol1.1 \
> > gluster-data002:/srv/gluster/vol1.2
> >
> > replicas of a file will be stored on the two bricks of the same
> > server, while if I create the volume with
> >
> > gluster volume create vol1 replica 2\
> > gluster-data001:/srv/gluster/vol1.1 \
> > gluster-data002:/srv/gluster/vol1.1 \
> > gluster-data001:/srv/gluster/vol1.2 \
> > gluster-data002:/srv/gluster/vol1.2
> >
> > replicas will be saved on two bricks of different servers.
> >
> > So, my guess is that if I create a "replica N" replicated+distributed
> > volumes using the bricks:
> >
> > gluster-1:/srv/gluster
> > ...
> > gluster-[N*M]:/srv/gluster
> >
> > gluster internally creates a distributed volumes made of the following
> > replicated "volumes":
> >
> > replicated volume 1: gluster-[1..N]:/srv/gluster
> > replicated volume 2: gluster-[N+1..2N]:/srv/gluster
> > ...
> > replicated volume M: gluster-[N*(M-1)+1..N*M]:/srv/gluster
> >
> > Is that correct or there is a more complex algorithm involved?
> >

> The interpretation is correct. The way replica sets are chosen is
> related to the order in which bricks are defined at the time of volume
> creation.

> -Vijay

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
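Vijay's rule can be stated mechanically: with replica count N, consecutive
bricks in the order given at volume-creation time are grouped N at a time into
replica sets. A small bash sketch makes the grouping explicit (the brick names
are illustrative, taken from the example above):

```shell
# With replica count n, consecutive bricks in creation order form one
# replica set: bricks 1..n, then n+1..2n, and so on.
bricks=(gluster-1:/srv/gluster gluster-2:/srv/gluster \
        gluster-3:/srv/gluster gluster-4:/srv/gluster)
n=2
for ((i = 0; i < ${#bricks[@]}; i += n)); do
  echo "replica set $((i / n)): ${bricks[@]:i:n}"
done
# prints:
# replica set 0: gluster-1:/srv/gluster gluster-2:/srv/gluster
# replica set 1: gluster-3:/srv/gluster gluster-4:/srv/gluster
```

This is why listing both bricks of a server adjacently puts the replicas on the
same host, while interleaving servers spreads them across hosts.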

Re: [Gluster-users] nfs

2014-02-10 Thread Paul Cuzner
Hi John, 

I think on gluster 3.3 and 3.4, nfs is enabled by default. 

nfs is actually a translator, so it's in the 'stack' already - this is why you 
don't need the nfs-kernel-server package. 

When mounting from a client, you just need to ensure the mount options are 
right, i.e. use the following: -o proto=tcp,vers=3 (for Linux) 

The other consideration is that by default the volume will expose 64-bit inodes 
- if you have 32-bit apps or a 32-bit OS, you'll need to tweak the gluster volume 
with "vol set  nfs.enable-ino32 on" 

I haven't got a 3.3 system handy, but on a 3.4 system if you run the following 
gluster vol set help | grep "^Option: nfs." 

you'll get a view of all the nfs tweaks that can be made with the translator. 

Cheers, 

Paul C 
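As a concrete transcript of the points above - the volume name "myvol", host
"server1" and mount point are all hypothetical, and these commands need a live
gluster cluster:

```
# On any node in the trusted pool: ensure the gluster NFS translator is on,
# and expose 32-bit inode numbers if 32-bit apps or OSes will be clients.
gluster volume set myvol nfs.disable off
gluster volume set myvol nfs.enable-ino32 on

# On a Linux client: gluster's built-in NFS server speaks NFSv3 over TCP,
# so both options must be given explicitly.
mount -t nfs -o proto=tcp,vers=3 server1:/myvol /mnt/myvol
```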

- Original Message -

> From: "John G. Heim" 
> To: "gluster-users" 
> Sent: Friday, 7 February, 2014 9:52:44 AM
> Subject: [Gluster-users] nfs

> Maybe this is a dumb question but do I have to set up an nfs server on
> one of the server peers in my gluster volume in order to connect to the
> volume with nfs? I did a port scan on a couple of the peers in my
> cluster and port 2049 was closed. I'm thinking maybe you have to
> configure an nfs server on one of the peers and it can read/write to the
> gluster volume like it would any disk. But then what do these commands do:

> gluster volume set  nfs.disable off
> gluster volume set  nfs.disable on

> The documentation on the gluster.org web site seems to imply that you
> don't need an nfs server. It specifically says you need the nfs-common
> package on your servers. That would imply you don't need the
> nfs-kernel-server package, right? See:
> http://gluster.org/community/documentation/index.php/Gluster_3.2:_Using_NFS_to_Mount_Volumes
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster-deploy tool updated

2013-11-06 Thread Paul Cuzner

Yep that's the location! https://forge.gluster.org/gluster-deploy/

Can't believe I sent the update to the list and then didn't include the URL!

Doh!



- Original Message -
> From: "Vijay Bellur" 
> To: "Anand Avati" , "Paul Cuzner" 
> Cc: "Gluster-users@gluster.org" 
> Sent: Sunday, 3 November, 2013 3:43:58 AM
> Subject: Re: [Gluster-users] gluster-deploy tool updated
> 
> On 11/02/2013 01:28 AM, Anand Avati wrote:
> > Sounds good! URL please ? :-)
> 
> Seems to be this one:
> 
> https://forge.gluster.org/gluster-deploy/
> 
> Paul: Please let us know if you need to share the review.gluster.org
> gerrit infrastructure for managing patches to this project.
> 
> Cheers,
> Vijay
> 
> >
> >
> > On Fri, Nov 1, 2013 at 12:54 PM, Paul Cuzner wrote:
> >
> >
> > Hi,
> >
> > Just to let you know that I've updated the deploy tool (aka setup
> > wizard), to include the creation/tuning of the 1st volume.
> >
> > Here's the changelog info;
> >
> > - Added optparse module for command line arguments. Added -n to
> > bypass accesskey checking
> > - Added password check code to RequestHandler class, and updated js
> > to use xml and ajax request
> > - Added globals module to share config across modules
> > - http server default 'run' method overridden to enable it to be
> > stopped (e.g. when error met)
> > - added ability to create a volume after bricks are defined, and
> > apply use case tuning/settings
> > - some minor UI fixes
> > - added initial error page
> > - Added help page showing mount option syntax for smb,nfs and native
> > client
> > - css split to place theme type elements in the same file, so people
> > can play with skinning the interface
> >
> > If you're interested there is a screenshots directory in the archive,
> > so you can have a look through that to get a feel for the 'workflow'.
> >
> > There's still a heap of changes needed for real world deployments -
> > but it makes my life easier when testing things out ;o)
> >
> > Cheers,
> >
> > Paul C
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> 
> 


[Gluster-users] gluster-deploy tool updated

2013-11-01 Thread Paul Cuzner

Hi,

Just to let you know that I've updated the deploy tool (aka setup wizard), to 
include the creation/tuning of the 1st volume.

Here's the changelog info;

- Added optparse module for command line arguments. Added -n to bypass 
accesskey checking
- Added password check code to RequestHandler class, and updated js to use xml 
and ajax request
- Added globals module to share config across modules
- http server default 'run' method overridden to enable it to be stopped (e.g. 
when error met)
- added ability to create a volume after bricks are defined, and apply use case 
tuning/settings
- some minor UI fixes 
- added initial error page
- Added help page showing mount option syntax for smb,nfs and native client
- css split to place theme type elements in the same file, so people can play 
with skinning the interface

If you're interested there is a screenshots directory in the archive, so you can 
have a look through that to get a feel for the 'workflow'.

There's still a heap of changes needed for real world deployments - but it 
makes my life easier when testing things out ;o)

Cheers,

Paul C



Re: [Gluster-users] A "Wizard" for Initial Gluster Configuration

2013-10-13 Thread Paul Cuzner

Hi,

Another good idea ;o)

I'll update the projects wiki to capture these ideas on the forge.

Cheers,

Paul C



- Original Message -
> From: "Jay Vyas" 
> To: "Nux!" 
> Cc: "Paul Cuzner" , "Gluster-users@gluster.org" 
> 
> Sent: Saturday, 12 October, 2013 7:18:49 AM
> Subject: Re: [Gluster-users] A "Wizard" for Initial Gluster Configuration
> 
> yeah this is a nice tool. i think developers could use it too...
> You know what: I've seen a lot of incantations of this script:
> 
> https://forge.gluster.org/gluster-deploy/gluster-deploy/blobs/master/scripts/findDevs.py
> 
> I think it would be ideal to take your utilities and put them in a separate
> project, thats maintained by the community -
> 
> gluster-dev-utils or something.
> 
> I like the idea of lower-level bash/python APIs for managing gluster
> setups, LVM, etc. that hide the implementation.
> 
> 
> 
> 
> On Fri, Oct 11, 2013 at 2:14 PM, Nux!  wrote:
> 
> > On 10.10.2013 01:08, Paul Cuzner wrote:
> >
> >> Hi,
> >>
> >>
> >> I'm writing a tool to simplify the initial configuration of a
> >> cluster, and it's now in a state that I find useful.
> >>
> >> Obviously the code is on the forge and can be found at
> >> https://forge.gluster.org/gluster-deploy
> >>
> >> If you're interested in what it does, but don't have the time to look
> >> at the code I've uploaded a video to youtube
> >>
> >> http://www.youtube.com/watch?v=UxyPLnlCdhA
> >>
> >
> > Impressive video, it looks very nice and I'm sure there are many "IT
> > managers" out there who would love this.
> > Good job!
> >
> >
> >
> >> Feedback / ideas / code contributions - all welcome ;o)
> >>
> >
> > Please add at least some basic volume management support (create/delete,
> > acl). :)
> >
> > Regards,
> > Lucian
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >
> 
> 
> 
> --
> Jay Vyas
> http://jayunit100.blogspot.com
> 


Re: [Gluster-users] A "Wizard" for Initial Gluster Configuration

2013-10-10 Thread Paul Cuzner

Hi James,

You make some good points ;o)

At this stage, my plan is to add "create volume" functionality to complete the 
initial setup tasks and then look at next steps.

Looking at adopting puppet sounds like an interesting direction.

Cheers,

Paul C




- Original Message -
> From: "James" 
> To: "Paul Cuzner" 
> Cc: gluster-users@gluster.org
> Sent: Friday, 11 October, 2013 4:04:32 AM
> Subject: Re: [Gluster-users] A "Wizard" for Initial Gluster Configuration
> 
> On Wed, Oct 9, 2013 at 8:08 PM, Paul Cuzner  wrote:
> >
> > Hi,
> Hey there,
> 
> >
> > I'm writing a tool to simplify the initial configuration of a cluster, and
> > it's now in a state that I find useful.
> Cool...
> 
> >
> > Obviously the code is on the forge and can be found at
> > https://forge.gluster.org/gluster-deploy
> >
> > If you're interested in what it does, but don't have the time to look at the
> > code I've uploaded a video to youtube
> >
> > http://www.youtube.com/watch?v=UxyPLnlCdhA
> I had a quick watch of this...
> >
> > Feedback / ideas / code contributions - all welcome ;o)
> I'm biased because I'm the puppet-gluster [1] author, and I think
> puppet is the right tool for this type of thing. Having said that, the
> advantages of my puppet tool include:
> 
> 1) recovering from error conditions and continuing.
> 2) ability to change the configuration after initial setup.
> 3) ability to define the entire configuration up front.
> 4) no need to exchange ssh keys.
> 5) puppet-gluster installs packages and starts glusterd too. Also
> manages firewall...
> 6) Future puppet-gluster features which I haven't released yet.
> 
> However your tool is quite pretty and offers a UI which puppet doesn't
> provide. If you could be persuaded, one idea comes to mind: modify
> your tool to run a light puppet server. Each client could have ssh
> execute a puppet client.
> 
> Ultimately you would get the same effect with a temporary puppet
> server. A permanent one would be preferable, but it would replace the
> need to write the logic in python. You'd benefit from the extra
> features that you can configure with puppet-gluster.
> 
> Either way, happy hacking!
> >
> > Cheers,
> >
> > Paul C
> 
> Cheers,
> James
> [1] https://github.com/purpleidea/puppet-gluster
> 


[Gluster-users] A "Wizard" for Initial Gluster Configuration

2013-10-10 Thread Paul Cuzner

Hi,

I'm writing a tool to simplify the initial configuration of a cluster, and it's 
now in a state that I find useful.

Obviously the code is on the forge and can be found at 
https://forge.gluster.org/gluster-deploy

If you're interested in what it does, but don't have the time to look at the code 
I've uploaded a video to youtube

http://www.youtube.com/watch?v=UxyPLnlCdhA

Feedback / ideas / code contributions - all welcome ;o)

Cheers,

Paul C
