The ES version was actually 1.5.0; I have since upgraded to 1.5.2. Restarting 
ES cleared up the dentry cache.
I believe the dentry cache is managed by Linux, but it seems ES/Lucene plays 
a role in how it grows. If that is the case, ES/Lucene should be able to 
limit how much dentry cache accumulates.
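The link between ES/Lucene and the dentry cache is indirect: every segment file the node opens or stats leaves a dentry behind, and the kernel happily keeps them around as reclaimable slab. A rough way to see how many files the node is touching is the nodes stats API (this sketch assumes ES is running on localhost:9200 and falls back to a message if it is not; the `/proc/self/fd` count is just a local analogue of the same idea):

```shell
#!/bin/sh
# Open file descriptors held by the ES process; each open or recently
# stat'ed file pins a dentry. (Assumes ES on localhost:9200.)
curl -s 'http://localhost:9200/_nodes/stats/process?pretty' \
  | grep open_file_descriptors || echo 'ES not reachable'

# Local analogue without ES: count this shell's own open descriptors.
ls /proc/self/fd | wc -l
```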

The dentry cache is continuously increasing. Is this unavoidable given that 
the data grows every day (though not significantly)? I have another ELK 
stack with many millions of documents but fewer search requests, and that 
cluster does not have this problem.
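Since almost all of that slab is SReclaimable, the kernel can hand it back under memory pressure, and it can also be released by hand without restarting ES. A minimal sketch assuming a stock Linux kernel (the destructive commands need root and are shown commented out; the vfs_cache_pressure value of 200 is only illustrative, the kernel default is 100):

```shell
#!/bin/sh
# Inspect how much of the slab is reclaimable (dentry/inode) cache.
grep -E '^(Slab|SReclaimable):' /proc/meminfo

# Drop only dentries and inodes (value 2), leaving the page cache --
# which Lucene depends on for hot segments -- untouched. Needs root.
# sync
# echo 2 > /proc/sys/vm/drop_caches

# Longer term: bias kernel reclaim toward dentry/inode caches.
# Values above 100 reclaim them more aggressively. Needs root.
# sysctl vm.vfs_cache_pressure=200
```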

On Monday, May 4, 2015 at 4:17:40 PM UTC+5:30, Pradeep Reddy wrote:
>
> ES version 1.5.2
> Arch Linux on Amazon EC2
> Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is 
> continuously increasing (~225 MB per day). 
> Total number of documents is around 800k (~500 MB). 
>
> cat /proc/meminfo shows:
>
>> Slab:         3424728 kB
>> SReclaimable: 3407256 kB
>
> curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'
>
>> "heap_used_in_bytes" : 5788779888,
>> "heap_used_percent" : 67,
>> "heap_committed_in_bytes" : 8555069440,
>
> slabtop
>
>>     OBJS   ACTIVE  USE  OBJ SIZE   SLABS  OBJ/SLAB  CACHE SIZE  NAME
>> 17750313 17750313 100%    0.19K   845253        21    3381012K  dentry
>
> So I think the continuous increase in memory usage comes from slab usage; 
> if I restart ES, the slab memory is freed. I see that ES still has some 
> free heap available, but the Elastic documentation says: 
>
>> Lucene is designed to leverage the underlying OS for caching in-memory 
>> data structures. Lucene segments are stored in individual files. Because 
>> segments are immutable, these files never change. This makes them very 
>> cache friendly, and the underlying OS will happily keep hot segments 
>> resident in memory for faster access.
>>
>
> My question is: should I add more nodes, or increase the RAM of each node 
> so Lucene can use as much memory as it wants? How significant would the 
> performance difference be if I upgraded the ES machines to have more RAM? 
>
> Or, can I make some optimizations that decrease slab usage, or free the 
> slab memory partially?
>
>
>
