Thanks Jörg. Yes, it is unusual to have such a large dentry cache; there is definitely something fishy going on. Stopping ES clears it up, so I believe it is related to ES.
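For anyone following along: the reclaimable slab can be inspected by any user via /proc/meminfo, and the dentry/inode caches can be dropped without restarting ES. A minimal sketch, where the here-doc just replays the figures quoted below in this thread; on a live box, drop the here-doc and point awk at /proc/meminfo itself:

```shell
# Reclaimable slab is visible to any user in /proc/meminfo. The here-doc
# replays the values from this thread for illustration.
awk '/^(Slab|SReclaimable):/ { printf "%s %.1f GB\n", $1, $2/1024/1024 }' <<'EOF'
Slab:            3424728 kB
SReclaimable:    3407256 kB
EOF

# To free dentry/inode caches without restarting ES (root required; the
# kernel will simply rebuild them, so expect a brief cold-cache penalty):
#   sync && echo 2 > /proc/sys/vm/drop_caches
# A gentler, persistent alternative is to make the kernel reclaim dentries
# more eagerly than the default of 100:
#   sysctl vm.vfs_cache_pressure=200
```

Note that SReclaimable is, by definition, memory the kernel will free on its own under memory pressure, so a large dentry cache is not a leak in the usual sense, even if the growth itself is suspicious.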
On Thu, May 7, 2015 at 8:16 PM, joergpra...@gmail.com <joergpra...@gmail.com> wrote:

> On my systems, dentry use is ~18 MB while ES 1.5.2 is under heavy duty
> (RHEL 6.6, Java 8u45, on-premise server).
>
> I think you should double-check whether the effect you see is caused by ES
> or by your JVM/Arch Linux/EC2/whatever.
>
> Jörg
>
> On Mon, May 4, 2015 at 12:47 PM, Pradeep Reddy
> <pradeepreddy.manu.iit...@gmail.com> wrote:
>
>> ES version 1.5.2
>> Arch Linux on Amazon EC2
>> Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is
>> continuously increasing (225 MB per day).
>> Total number of documents is around 800k, 500 MB.
>>
>> cat /proc/meminfo shows:
>>
>>     Slab:          3424728 kB
>>     SReclaimable:  3407256 kB
>>
>> curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'
>>
>>     "heap_used_in_bytes" : 5788779888,
>>     "heap_used_percent" : 67,
>>     "heap_committed_in_bytes" : 8555069440,
>>
>> slabtop
>>
>>     OBJS     ACTIVE    USE  OBJ SIZE  SLABS   OBJ/SLAB  CACHE SIZE  NAME
>>     17750313 17750313  100%    0.19K  845253        21    3381012K  dentry
>>
>> So I think the continuous increase in memory usage is because of the slab
>> usage; if I restart ES, the slab memory is freed. I see that ES still has
>> some free heap available, but the Elastic documentation says:
>>
>>> Lucene is designed to leverage the underlying OS for caching in-memory
>>> data structures. Lucene segments are stored in individual files. Because
>>> segments are immutable, these files never change. This makes them very
>>> cache friendly, and the underlying OS will happily keep hot segments
>>> resident in memory for faster access.
>>
>> My question is: should I add more nodes or increase the RAM of each node
>> to let Lucene use as much memory as it wants? How significant will the
>> performance difference be if I upgrade the ES machines to have more RAM?
>>
>> Or can I make some optimizations that decrease the slab usage, or clean
>> the slab memory partially?
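As a sanity check on the numbers in the quoted slabtop output: 17,750,313 dentry objects at the reported 0.19K each should account for essentially all of the SReclaimable figure, which would confirm the growth really is dentries rather than page cache. A quick back-of-the-envelope check:

```shell
# Cross-check using the figures from the slabtop output above
# (0.19K is slabtop's rounded per-object size):
awk 'BEGIN {
  objs = 17750313          # OBJS column
  obj_kb = 0.19            # OBJ SIZE column, in KB
  printf "dentry cache ~= %.1f GB\n", objs * obj_kb / 1024 / 1024
}'
```

That lands at roughly the same ~3.2 GB as both the slabtop CACHE SIZE column (3381012K) and SReclaimable (3407256 kB), so the dentry cache alone explains the slab growth.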
--
Please update your bookmarks! We moved to https://discuss.elastic.co/
---
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CADX9mKM9fY6CZg8u%3DNNUFHwABZyvdZ%2Bhn40pLG_Y9gRmeOyp%2BQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.