I have straced Elasticsearch for a couple of minutes:

strace -fp PID -o file.txt

Of the 40k+ events recorded, 2.2k+ resulted in errors like this:
https://gist.github.com/vanga/55ca296f737b3c1fb9a2
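To see how many of the captured syscalls failed, and which calls dominate, the capture can be filtered like this. This is a sketch: the sample lines written to file.txt below are hypothetical stand-ins (in a real run file.txt comes from the strace command above, and the paths will differ).

```shell
# Hypothetical stand-in for the real strace capture; the paths and
# PIDs here are made up for illustration.
cat > file.txt <<'EOF'
2188  open("/etc/pki/nssdb/cert9.db", O_RDONLY) = -1 ENOENT (No such file or directory)
2188  stat("/usr/share/elasticsearch/data", {st_mode=S_IFDIR, ...}) = 0
2188  access("/etc/pki/nssdb/key4.db", F_OK) = -1 ENOENT (No such file or directory)
EOF

# Total syscalls that returned an error (strace renders failures as "= -1 E..."):
grep -c ' = -1 E' file.txt    # prints 2 for the sample above

# Which syscalls fail most often:
grep ' = -1 E' file.txt | awk '{print $2}' | cut -d'(' -f1 \
  | sort | uniq -c | sort -rn
```

The per-syscall breakdown is what makes repeated failing lookups on nonexistent paths stand out, since every such lookup leaves a negative dentry behind.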
I think this is the reason for the dentry bloating, though I am not sure if there is
Actually, the problem has appeared again. Memory consumption was stable for a couple of days, then it started increasing; the env variable was apparently only set for that particular session or something. I had to set it again by adding it to /etc/environment, but this doesn't have any effect anymore.. there ma
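/etc/environment is read by PAM at login, so a daemon started at boot may never inherit a variable set there. A more reliable approach (a sketch, assuming the node runs Elasticsearch under systemd with a unit named `elasticsearch`) is to set it in the service's own environment via a drop-in:

```ini
# /etc/systemd/system/elasticsearch.service.d/nss-cache.conf
# (create with `systemctl edit elasticsearch`, then restart the service)
[Service]
Environment=NSS_SDB_USE_CACHE=YES
```

Whether the running process actually inherited it can be confirmed with `tr '\0' '\n' < /proc/PID/environ | grep NSS_SDB_USE_CACHE`.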
So the bloating of the dentry cache is because of this:
https://bugzilla.redhat.com/show_bug.cgi?id=1044666
My NSS version is 3.18 (Arch Linux, kernel version 3.14.21).
Setting NSS_SDB_USE_CACHE=YES has stopped the bloating. I have set this on one of the three nodes, and there the dentry size hasn't changed a bit.
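A quick way to watch whether the dentry cache keeps growing on each node (a sketch; plain Linux proc files, no extra tooling assumed):

```shell
# Kernel dentry counters: first field is total dentries, second is
# unused (reclaimable) dentries.
cat /proc/sys/fs/dentry-state

# Slab / reclaimable-slab totals; the dentry cache is counted under
# SReclaimable, so flat numbers here mean the bloating has stopped.
grep -E '^(Slab|SReclaimable):' /proc/meminfo

# As root, dentries and inodes can be reclaimed on demand to reset the
# baseline (harmless, but subsequent lookups repopulate cold caches):
# echo 2 > /proc/sys/vm/drop_caches
```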
Thanks Jörg,
Yes, it is unusual to have such a dentry cache; there is definitely something fishy going on. Stopping ES clears it up, so I believe it is something related to ES.
On Thu, May 7, 2015 at 8:16 PM, joergpra...@gmail.com wrote:
On my systems, dentry use is ~18 MB while ES 1.5.2 is under heavy load (RHEL 6.6, Java 8u45, on-premise server).
I think you should double-check whether the effect you see is caused by ES or by your JVM/Arch Linux/EC2/whatever.
Jörg
On Mon, May 4, 2015 at 12:47 PM, Pradeep Reddy <pradeepreddy.manu.ii
Hi Mark,
Thanks.
I understand that caching makes ES perform better, and it's normal. What I don't understand is the unusual size of the dentry objects (dentry size increasing by 200+ MB per day) for the data size I have. There isn't this behaviour on the ELK ES where I have many times the data.
When the underlying Lucene engine interacts with a segment, the OS will leverage free system RAM to keep that segment in memory. However, Elasticsearch/Lucene has no way to control OS-level caches.
What exactly is the problem here? This caching is what helps provide
performance for ES.
The ES version was actually 1.5.0; I have upgraded to 1.5.2, so restarting ES cleared up the dentry cache.
I believe the dentry cache is something that is handled by Linux, but it seems like ES/Lucene has a role to play in how the dentry cache is handled. If that is the case, ES/Lucene should be able to con
ES version 1.5.2
Arch Linux on Amazon EC2
Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is continuously increasing (~225 MB per day).
Total number of documents is around 800k (~500 MB).
cat /proc/meminfo shows:

Slab:         3424728 kB
SReclaimable: 3407256 kB
curl -XGET 'http:/
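With SReclaimable nearly equal to Slab (~3.3 GB), almost all of the slab is reclaimable cache. To confirm dentries are the consumer without needing root for /proc/slabinfo, a rough estimate can be made from the kernel's dentry counters (a sketch; the ~192 bytes per dentry object is an assumption, typical for x86_64, and the exact size is in /proc/slabinfo):

```shell
# dentry-state fields: nr_dentry, nr_unused, age_limit, ... — we only
# need the first two.
read nr_dentry nr_unused _ < /proc/sys/fs/dentry-state
echo "total dentries: $nr_dentry, unused (reclaimable): $nr_unused"

# Rough upper bound on dentry cache size, assuming ~192 bytes/object:
echo "approx $(( nr_dentry * 192 / 1024 / 1024 )) MB in dentry objects"
```

If that estimate is in the same ballpark as SReclaimable, the dentry cache is indeed what is eating the memory.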