Facts:

OS: Windows Server 2008

4 CPUs
8 GB RAM

Tomcat service, version 7.0 (64-bit)

Only running Solr
JVM parameters set for the Tomcat service: Xms = 1024 MB, Xmx = 3072 MB (see the Java options below)
Solr version 4.5.0.
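
For reference, assuming the heap settings are entered as Java options on the Tomcat service (e.g. via the tomcat7w.exe configuration tool), they correspond to:

    -Xms1024m
    -Xmx3072m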

One Core instance (both for querying and indexing)
*Schema config:*
minGramSize="2" maxGramSize="20"
Most of the fields are stored="true" (required)
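
For illustration only, a field type with those n-gram settings would look roughly like this in schema.xml (the field type name, tokenizer, and example field are placeholders, not our exact schema; only the gram sizes and stored="true" come from our setup):

    <fieldType name="text_ngram" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- n-gram filter as configured: grams from 2 to 20 characters -->
        <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="20"/>
      </analyzer>
    </fieldType>
    <!-- example field; most fields are stored so the original value can be returned -->
    <field name="title" type="text_ngram" indexed="true" stored="true"/>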

*Solr config:*
ramBufferSizeMB: 100
maxIndexingThreads: 8
directoryFactory: MMapDirectoryFactory
autoCommit: maxDocs 10000, maxTime 15000 (ms), openSearcher false
caches (defaults):
filterCache: initialSize 512, size 512, autowarmCount 0
queryResultCache: initialSize 512, size 512, autowarmCount 0
documentCache: initialSize 512, size 512, autowarmCount 0
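
Put together, the relevant solrconfig.xml entries look roughly like this (a sketch based on the values above; the element layout and cache classes follow the stock Solr 4.x example config rather than being a verbatim copy of ours):

    <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>

    <indexConfig>
      <ramBufferSizeMB>100</ramBufferSizeMB>
      <maxIndexingThreads>8</maxIndexingThreads>
    </indexConfig>

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- hard commits flush to disk but keep the current searcher open -->
      <autoCommit>
        <maxDocs>10000</maxDocs>
        <maxTime>15000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
    </updateHandler>

    <query>
      <filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
      <queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>
      <documentCache    class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>
    </query>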

Problem description:


We're using a .NET service (based on SolrNet) for updating and inserting
documents into a single Solr core. The size of the documents sent to Solr
varies from 1 KB up to 8 MB; we send the documents in batches, using one
or more threads. The current size of the Solr index is about 15 GB.

The indexing service runs for about 4 to 5 hours per day to complete all
inserts and updates to Solr. While the indexing process is running, the
Tomcat process memory usage keeps growing to more than 7 GB of RAM (as
reported by Process Explorer) and does not come down, even after 24 hours.
After a restart of Tomcat, or a "Reload Core" in the Solr admin UI, the
memory drops back to 1 to 2 GB of RAM. When monitoring the Tomcat process
with a tool like VisualVM, the memory usage looks fine: heap consumption
stays within the range defined by the JVM startup parameters (see image).

So it seems that filesystem buffers are consuming all of the leftover memory
and not releasing it, even after a considerable amount of time. Is there a
way to handle this behaviour so that not all memory is consumed? Are there
other alternatives? Best practices?

<http://lucene.472066.n3.nabble.com/file/n4112262/Capture.png> 

Thanks in advance



