These are a few ideas I had about how to implement a MESI-like protocol
on gluster. It's more a bunch of ideas than a structured proposal,
however I hope it's clear enough to show the basic concepts as I see them.
Each inode will have two separate access levels: one for the metadata and one
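To make that a little more concrete, here is a minimal Python sketch of what
two MESI-style access levels per inode could look like, assuming the second
level is for the file data; all names (AccessState, CachedInode,
request_data_write) are made up for illustration and don't correspond to any
existing xlator code.

    from enum import Enum

    class AccessState(Enum):
        MODIFIED  = "M"   # local copy changed, not yet visible to other clients
        EXCLUSIVE = "E"   # only this client caches the inode, still clean
        SHARED    = "S"   # several clients may hold a read-only copy
        INVALID   = "I"   # local copy may be stale, must be re-fetched

    class CachedInode:
        def __init__(self):
            # separate levels for metadata and (assumed) data, as proposed above
            self.metadata_state = AccessState.INVALID
            self.data_state = AccessState.INVALID

        def request_data_write(self, peers):
            # before writing, every other client's cached copy is invalidated
            for peer in peers:
                peer.invalidate_data()
            self.data_state = AccessState.MODIFIED

        def invalidate_data(self):
            self.data_state = AccessState.INVALID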
On Tue, Feb 04, 2014 at 10:07:22AM +0100, Xavier Hernandez wrote:
Hi,
currently, inodelk() and entrylk() are being used to make sure that
changes happen synchronously on all bricks, avoiding data/metadata
corruption when multiple clients modify the same inode concurrently.
So far so good,
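For contrast, the current approach described above roughly amounts to "lock
the inode, apply the change on every brick, unlock". A small Python sketch of
that pattern, using made-up helper names rather than the real inodelk() /
entrylk() call signatures used inside the translators:

    def synchronous_setattr(inode, bricks, new_attrs):
        # take the inode lock on all bricks so no other client can interleave
        for brick in bricks:
            brick.inode_lock(inode)          # stands in for inodelk()
        try:
            # apply the metadata change everywhere while the lock is held
            for brick in bricks:
                brick.setattr(inode, new_attrs)
        finally:
            # always release, even if one of the bricks failed
            for brick in bricks:
                brick.inode_unlock(inode)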
Hi Niels,
On 10/02/14 11:05, Niels de Vos wrote:
On Tue, Feb 04, 2014 at 10:07:22AM +0100, Xavier Hernandez wrote:
Hi,
currently, inodelk() and entrylk() are being used to make sure that
changes happen synchronously on all bricks, avoiding data/metadata
corruption when multiple clients
On Sun, 9 Feb 2014 15:30:56 -0500 (EST)
Paul Cuzner pcuz...@redhat.com wrote:
snip
At the moment testing consists of VMs on my laptop - so who knows what bugs
you may find :)
Any way if it's of interest give it a go.
Interesting. I'll try to test this out today/tomorrow. :)
Regards and
On Thu, 6 Feb 2014 02:08:56 -0500 (EST)
Prashanth Pai p...@redhat.com wrote:
This looks really cool! :)
Is StatsD integration good in this use case? Just a thought.
Don't think so. StatsD seems to be counter-based (not really
flexible), whereas I'm wanting GlusterFlow to be useful for
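For reference, "counter-based" here refers to the StatsD wire format: a metric
name, a value and a type packed into a small UDP datagram. A minimal Python
sketch (the metric name is made up):

    import socket

    def statsd_incr(metric, host="127.0.0.1", port=8125):
        # StatsD counters are plain text datagrams: "<name>:<value>|c"
        payload = "{}:1|c".format(metric).encode("ascii")
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload, (host, port))
        sock.close()

    # e.g. count one write fop; fine for rates, but it can't carry the
    # per-request detail (client, file, latency) a tool like GlusterFlow is after
    statsd_incr("gluster.fop.write")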
On 02/10/2014 02:00 AM, Paul Cuzner wrote:
Hi,
I've started a new project on the forge, called gstatus. The wiki page is
https://forge.gluster.org/gstatus/pages/Home
The idea is to provide admins with a single command to assess the state
of the components of a cluster - nodes, bricks and volume
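For anyone curious how a tool like that can gather its data, one easy source
is the CLI's XML output. A small Python sketch, not necessarily how gstatus
itself is implemented, that summarises volumes from 'gluster volume info
--xml' (element names may vary slightly between gluster versions):

    import subprocess
    import xml.etree.ElementTree as ET

    def volume_summary():
        # the gluster CLI can emit XML, which is easier to parse than the
        # human-readable output
        out = subprocess.check_output(["gluster", "volume", "info", "--xml"])
        root = ET.fromstring(out)
        for vol in root.iter("volume"):
            name = vol.findtext("name")
            status = vol.findtext("statusStr")
            bricks = vol.findtext("brickCount")
            print("{}: {} ({} bricks)".format(name, status, bricks))

    if __name__ == "__main__":
        volume_summary()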
On Mon, 18 Nov 2013 10:08:47 -0500 (EST)
John Walker jowal...@redhat.com wrote:
Awesome!
I think Justin will also be interested, but he won't be back until Dec 1.
Yep, definitely interested. Has there been progress since this was
first raised? :)
Found a showstopper bug in the current 3.5
It's been a while since I did some gluster replication testing, so I
spun up a quick cluster *cough, plug* using puppet-gluster+vagrant (of
course) and here are my results.
* Setup is a 2x2 distributed-replicated cluster
* Hosts are named: annex{1..4}
* Volume name is 'puppet'
* Client VMs mount
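For anyone wanting to reproduce a similar layout by hand, a 2x2
distributed-replicated volume like the one above can be created with the
gluster CLI. A small Python sketch; the brick path /export/brick1 is only an
assumption (puppet-gluster will pick its own paths):

    import subprocess

    HOSTS = ["annex1", "annex2", "annex3", "annex4"]
    BRICK_PATH = "/export/brick1"   # assumed path; adjust to your layout

    # "replica 2" with four bricks gives a 2x2 distributed-replicated volume:
    # (annex1, annex2) form one replica pair, (annex3, annex4) the other
    bricks = ["{}:{}".format(h, BRICK_PATH) for h in HOSTS]
    cmd = ["gluster", "volume", "create", "puppet", "replica", "2"] + bricks
    subprocess.check_call(cmd)
    subprocess.check_call(["gluster", "volume", "start", "puppet"])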
James,
Could you provide the logs of the mount process where you see the 42s hang?
My initial guess, seeing 42s, is that the client translator's ping timeout
is in play.
I would encourage you to report a bug and attach relevant logs.
If the observed issue turns out to be an
The 42 second hang is most likely the ping timeout of the client translator.
What most likely happened was that the brick on annex3 was being used
for the read when you pulled its plug. Because the plug was pulled, the
connection between the client and annex3 wasn't gracefully terminated
and the
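For what it's worth, that 42 second value is the default of the volume option
network.ping-timeout, so it can be lowered to make failure detection faster
during testing (very low values are generally not recommended on loaded
clusters). A short Python sketch; the 10 second value is just an example:

    import subprocess

    # network.ping-timeout defaults to 42 seconds; once it expires the client
    # translator disconnects from the brick and fails the operations in flight
    subprocess.check_call(
        ["gluster", "volume", "set", "puppet", "network.ping-timeout", "10"]
    )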