Hi,

Thanks for the reply. My apologies, the logs I posted earlier are indeed from a setup that has a custom update handler.
However, I also have a local setup with no custom update handler, a stock install downloaded from the Solr site, and even that runs out of heap space:

    java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Unknown Source)
        at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
        at java.lang.AbstractStringBuilder.append(Unknown Source)
        at java.lang.StringBuilder.append(Unknown Source)
        at org.apache.solr.handler.extraction.SolrContentHandler.characters(SolrContentHandler.java:257)
        at org.apache.tika.sax.ContentHandlerDecorator.characters(ContentHandlerDecorator.java:124)
        at org.apache.tika.sax.SecureContentHandler.characters(SecureContentHandler.java:153)
        at org.apache.tika.sax.ContentHandlerDecorator.characters(ContentHandlerDecorator.java:124)
        at org.apache.tika.sax.ContentHandlerDecorator.characters(ContentHandlerDecorator.java:124)
        at org.apache.tika.sax.SafeContentHandler.access$001(SafeContentHandler.java:39)
        at org.apache.tika.sax.SafeContentHandler$1.write(SafeContentHandler.java:61)
        at org.apache.tika.sax.SafeContentHandler.filter(SafeContentHandler.java:113)
        at org.apache.tika.sax.SafeContentHandler.characters(SafeContentHandler.java:151)
        at org.apache.tika.sax.XHTMLContentHandler.characters(XHTMLContentHandler.java:175)
        at org.apache.tika.parser.txt.TXTParser.parse(TXTParser.java:144)
        at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:142)
        at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
        at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:112)
        at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:193)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)

Also, in general, if I post 25 documents of 100 MB each to Solr, what would the ideal heap size be?

I also see that when I push a single 100 MB document, Task Manager shows about 900 MB of memory in use, and subsequent pushes keep it around 900 MB. At what point can an OOM crash occur?

When I ran the YourKit profiler, I saw that around 1 GB of memory was consumed just by char[] and String[] objects. How can I find out who is creating these (is it Solr or Tika?) and free up these objects?

Thank you so much for your time and help.

Regards,
Geeta

-----Original Message-----
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: 17 March, 2011 12:21 PM
To: solr-user@lucene.apache.org
Cc: Geeta Subramanian
Subject: Re: memory not getting released in tomcat after pushing large documents

On Thu, Mar 17, 2011 at 12:12 PM, Geeta Subramanian
<gsubraman...@commvault.com> wrote:
> at
> com.commvault.solr.handler.extraction.CVExtractingDocumentLoader.load(
> CVExtractingDocumentLoader.java:349)

Looks like you're using a custom update handler.
Perhaps that's accidentally hanging onto memory?

-Yonik
http://lucidimagination.com
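The StringBuilder.append / Arrays.copyOf frames in the trace above can be reproduced in isolation. Below is a minimal sketch (my own illustration, not Solr or Tika code; the class name and chunk size are made up) of why collecting a large extracted-text stream in a StringBuilder can transiently need several times the document's size in heap, which is consistent with seeing ~900 MB for a 100 MB document:

```java
// Sketch: SolrContentHandler.characters() appends extracted text to a
// StringBuilder. Each time the builder outgrows its backing char[],
// expandCapacity() calls Arrays.copyOf, so the old and new arrays are
// both live at once, and toString() later copies the chars once more.
public class StringBuilderGrowthDemo {

    // Append `target` chars in SAX-event-sized chunks; return final length.
    static long fill(long target) {
        StringBuilder sb = new StringBuilder(); // default capacity: 16 chars
        char[] chunk = new char[8192];          // simulates characters() events
        while (sb.length() < target) {
            sb.append(chunk);                   // grows backing array via Arrays.copyOf
        }
        return sb.length();
    }

    public static void main(String[] args) {
        long len = fill(1L << 20); // 1M chars standing in for a large document
        // Peak transient heap during the last growth step is roughly
        // old array + new array, i.e. around 3x the text size in chars
        // (2 bytes each on the heap), before toString() makes one more copy.
        System.out.println("chars held: " + len);
    }
}
```

If this is the dominant cost, the practical takeaway would be to budget heap at several multiples of the largest raw document you expect to extract (or cap document size before extraction), rather than expecting memory use to scale 1:1 with the file.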