Also, I've repeated the same "find" test again, but with the glusterfs process
launched under valgrind. Here is the valgrind output:
https://gist.github.com/097afb01ebb2c5e9e78d
On Sunday, 24 January 2016 09:33:00 EET Mathieu Chateau wrote:
> Thanks for all your tests and time, it looks promising.
Here are my tips:
1. General C tricks
- learn to use vim or emacs & read their manuals; customize to suit your style
- use vim w/ pathogen plugins for auto formatting (don't use tabs!) & syntax
- use ctags to jump between functions
- use ASAN & valgrind to check for memory leaks and heap corruption (a minimal example follows below)
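To make the last tip concrete, here is a small self-contained example (my own illustration, not code from any of the messages in this thread) of the kind of leak both tools will report:

/* leak_demo.c - deliberately leaks a heap allocation.
 * Build and run one of:
 *   gcc -g -fsanitize=address leak_demo.c && ./a.out          (ASAN reports a "direct leak")
 *   gcc -g leak_demo.c && valgrind --leak-check=full ./a.out  (valgrind reports "definitely lost")
 */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);        /* allocated ...        */
    if (buf == NULL)
        return 1;
    strcpy(buf, "never freed");    /* ... used ...         */
    return 0;                      /* ... but never freed  */
}

A statedump or RSS growth (as in the "find" test earlier in the thread) tells you *that* memory grows; ASAN/valgrind tell you *where* the unfreed allocations were made.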
On 01/25/2016 02:17 AM, Richard Wareing wrote:
Hello all,
Just gave a talk at SCaLE 14x today and I mentioned our new locks
revocation feature, which has had a significant impact on our GFS
cluster reliability. As such, I wanted to share the patch with the
community, so here's the bugzilla
Yup, per-domain would be useful; the patch itself currently honors domains as
well, so locks in different domains will not be touched during revocation.
In our case we actually prefer to pull the plug on SHD/DHT domains to ensure
clients do not hang; this is important for DHT self-heals.
BTW, am I the only one who notices that
max_size=4294965480
is almost 2^32? Could that be an integer overflow?
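For what it's worth, 2^32 is 4294967296, so the value above is exactly 2^32 - 1816, which is what you get when a small negative 32-bit result is interpreted as unsigned. A hypothetical illustration (the variable names are mine, not taken from the GlusterFS sources):

/* overflow_demo.c - a small negative result stored in an unsigned
 * 32-bit field wraps to just below 2^32. */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t limit = 8184;
    uint32_t used  = 10000;
    uint32_t max_size = limit - used;            /* 8184 - 10000, wraps modulo 2^32 */

    printf("max_size=%" PRIu32 "\n", max_size);  /* prints 4294965480, i.e. 2^32 - 1816 */
    return 0;
}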
On Sunday, 24 January 2016 13:23:55 EET Oleksandr Natalenko wrote:
> The leak definitely remains. I did "find /mnt/volume -type d" over GlusterFS
> volume, with mentioned patches applied and
The leak definitely remains. I did "find /mnt/volume -type d" over the GlusterFS
volume, with the mentioned patches applied and without the "kernel notifier loop
terminated" message, but the "glusterfs" process had consumed ~4 GiB of RAM after
"find" finished.
Here is statedump:
On Mon, Jan 25, 2016 at 11:06:26AM +0530, Ravishankar N wrote:
> Hi,
>
> We are planning to introduce a throttling xlator on the server (brick)
> process to regulate FOPS. The main motivation is to solve complaints about
> AFR selfheal taking too much of CPU resources. (due to too many fops for
>
Hi,
We are planning to introduce a throttling xlator on the server (brick)
process to regulate FOPs. The main motivation is to address complaints about
AFR self-heal taking up too many CPU resources (due to too many FOPs for entry
self-heal, rchecksums for data self-heal, etc.).
The throttling is
On 01/25/2016 12:56 PM, Venky Shankar wrote:
Also, it would be beneficial to have the core TBF implementation as part of
libglusterfs so as to be consumable by the server-side xlator component to
throttle dispatched FOPs, and for daemons to throttle anything that's outside
the "brick" boundary (such
3.5.7 also hangs; only the flush op hung. Yes, with performance.client-io-threads
turned off there is no hang.
The hang does not depend on the client kernel version.
Here is one client statedump of the flush op; does anything look abnormal?
[global.callpool.stack.12]
uid=0
gid=0
pid=14432
unique=16336007098
lk-owner=77cb199aa36f3641
Hi all,
below is the current list of bugs that have an incorrect status. Until
we have the tools that automatically update the status of bugs,
developers are expected to update their bugs when they post patches, and
when all patches have been merged. The release engineer that handles the
minor
On Sun, Jan 24, 2016 at 05:43:39PM +0100, Niels de Vos wrote:
> Hi all,
>
> below is the current list of bugs that have an incorrect status. Until
> we have the tools that automatically update the status of bugs,
> developers are expected to update their bugs when they post patches, and
> when
Hello all,
Just gave a talk at SCaLE 14x today and I mentioned our new locks revocation
feature, which has had a significant impact on our GFS cluster reliability. As
such, I wanted to share the patch with the community, so here's the bugzilla
report: