I believe you are just witnessing the OS caching files in memory.  Lucene 
(and therefore, by extension, Elasticsearch) uses a large number of files 
to represent segments.  TTL + updates will cause even higher file turnover 
than usual.

The OS manages all of this caching and will reclaim it for other processes 
when needed.  Are you experiencing problems, or just witnessing memory 
usage?  I wouldn't be concerned unless there is an actual problem that you 
are seeing.
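
If you want to double-check that it's the page cache and not a leak, on a 
Linux box something like the following will show you how much of the 
"missing" memory is just reclaimable cache (standard tools, nothing 
Elasticsearch-specific):

```shell
# Total vs. free memory -- the "cached" portion is the OS page cache,
# which the kernel reclaims automatically when other processes need RAM.
free -m

# /proc/meminfo breaks it down further; a large "Cached" value here is
# normal on a busy Elasticsearch box and is not a leak.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo

# To prove to yourself the cache is reclaimable, you can drop it (safe,
# but it cold-starts the cache, so expect temporarily slower reads):
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
```

If `Cached` accounts for the growth and shrinks when you drop caches (or 
when another process needs memory), there's nothing to fix.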



On Thursday, March 13, 2014 8:07:13 PM UTC-4, Jos Kraaijeveld wrote:
>
> Hey,
>
> I've run into an issue which is preventing me from moving forwards with 
> ES. I've got an application where I keep 'live' documents in Elasticsearch. 
> Each document is a combination of data from multiple sources, which are 
> merged together using doc_as_upsert. Each document has a TTL which is 
> updated whenever new data comes in for a document, so documents die 
> whenever no data source has reported about them for a while. The number 
> of documents generally doesn't exceed 15,000, so it's a fairly small 
> data set.
>
> Whenever I leave this running, slowly but surely memory usage on the box 
> creeps up, seemingly unbounded, until there is no more resident memory left. 
> The Java process nicely keeps within its set ES_MAX_HEAP bounds, but it 
> seems the mapping from storage on disk to memory is ever-increasing, even 
> when the number of 'live' documents goes to 0. 
>
> I was wondering if anyone has seen such a memory problem before and 
> whether there are ways to debug memory usage that is unaccounted for by 
> processes in 'top'.
>
