From: raghavendra...@gmail.com [raghavendra...@gmail.com] on behalf of
Raghavendra G [raghaven...@gluster.com]
Sent: Tuesday, January 26, 2016 9:49 PM
To: Raghavendra Gowdappa
Cc: Richard Wareing; Gluster Devel
Subject: Re: [Gluster-devel] Feature: Automagic lock-revocation for
features/locks xlator
frame;
unsigned int entries_healed;
unsigned int entries_processed;
unsigned int already_healed;
Richard
From: Ravishankar N [ravishan...@redhat.com]
Sent: Sunday, February 07, 2016 11:15 PM
To: Shreyas Siravara
> If there is one bucket per client and one thread per bucket, it would be
> difficult to scale as the number of clients increase. How can we do this
> better?
On this note... consider that tens of thousands of clients are not unrealistic
in production :). Using a thread per bucket would also be
's almost certainly a bug or a situation
not fully understood by developers.
Richard
From: Venky Shankar [yknev.shan...@gmail.com]
Sent: Sunday, January 24, 2016 9:36 PM
To: Pranith Kumar Karampuri
Cc: Richard Wareing; Gluster Devel
Subject: Re: [Gluster-devel] Feature: Automagic lock-revocation for
features/locks xlator
On 01/25/2016 02:17 AM, Richard Wareing wrote:
> Hello all,
> Just gave a talk at SCaLE 14x today and I mentioned our new locks revocation
> feature which has had a significant impact on our GFS cluster reliability. As
> such I wanted to share the patch with the community, so here's the bugzilla
> report.
Here are my tips:
1. General C tricks
- learn to use vim or emacs & read their manuals; customize to suit your style
- use vim w/ pathogen plugins for auto formatting (don't use tabs!) & syntax
  highlighting
- use ctags to jump around functions
- use ASAN & valgrind to check for memory leaks and heap corruption
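For the ASAN/valgrind tip above, a minimal sketch of typical invocations (the flag values are common defaults, not project-mandated; `xlator.c` and `glusterfsd` stand in for real build targets):

```shell
# Hypothetical sketch: typical ASAN debug-build flags and a valgrind run.
ASAN_CFLAGS="-g -O1 -fsanitize=address -fno-omit-frame-pointer"
echo "cc $ASAN_CFLAGS -c xlator.c"
# For a binary built WITHOUT ASAN, valgrind catches leaks at runtime instead:
echo "valgrind --leak-check=full --track-origins=yes ./glusterfsd -N"
```

ASAN and valgrind are mutually exclusive on the same binary; pick one per build.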
Hello all,
Just gave a talk at SCaLE 14x today and I mentioned our new locks revocation
feature which has had a significant impact on our GFS cluster reliability. As
such I wanted to share the patch with the community, so here's the bugzilla
report:
https://bugzilla.redhat.com/show_bug.cgi?id
"323.51",
"gluster.nfsd.inter.fop.access.latency_min_usec": "144.00",
"gluster.nfsd.inter.fop.access.latency_max_usec": "6639.00",
"gluster.nfsd.inter.fop.create.per_sec": "0.00",
*SNIP*
}
There are also aggregate counters which track from process birth
Ok, one more patch and then I'm taking a break from porting for a bit :). This
one we've been using for a few years (in some form or another) and it really
helped folks embrace GlusterFS; at our scale workflows cannot be
interrupted for nearly any reason... using heuristics to resolve split-brain
Hello again,
Following up on the FOP statistics dump feature, here's our FOP sampling patch
as well. This feature allows you to sample a 1:N ratio of FOPs, such that they
can be later analyzed to track down mis-behaving clients, calculate P99/P95 FOP
service times, audit traffic and probably o
Greetings,
Just a heads up, there's a very nasty deadlock bug w/ multi-core e-poll in
v3.6.x (latest branch) which appears to be due to DHT doing naughty things
(like racing, among other things). So to fix these issues we had to pull in
these (trivial cherry-pick) commits from master:
565ef0d82
Hey all,
I just uploaded a clean patch for our FOP statistics dump feature @
https://bugzilla.redhat.com/show_bug.cgi?id=1261700 .
Patches cleanly to v3.6.x/v3.7.x release branches, also includes io-stats
support for intel arch atomic operations (ifdef'd for portability) such that
you can coll
of the
higher-level features, rich libraries, mature tool chains, massive developer
pool, support both Thrift (_very_ mature) and Protocol Buffers.
Richard
From: Kaushal M [kshlms...@gmail.com]
Sent: Monday, September 07, 2015 5:20 AM
To: Richard Wareing
Hey Atin (and the wider community),
This looks interesting, though I have a couple of questions:
1. Language choice - Why the divergence from Python (which I'm no fan of) which
is already heavily used in GlusterFS? It seems a bit strange to me to
introduce yet another language into the GlusterFS
Hey Nithin,
We have IPv6 going as well (v3.4.x & v3.6.x), so I might be able to help out
here and perhaps combine our efforts. We did something similar here; however,
we also tackled the NFS side of the house, which required a bunch of changes
due to how port registration w/ portmapper changed
Hey all,
I'm Craig's manager for the duration of his internship @ FB, so I thought I'd
better chime in here :). As Craig mentioned, our project plan is to implement
C-based CLI utilities similar to what we have for NFS CLI utilities (we'll be
open sourcing this in the coming days so you can se