Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-22 Thread Umesh Prasad
Hi Shawn,
Thanks for the advice :). The JVM heap usage on the indexer machine
has been consistently about 95% (both total and old gen) for the past 3 days.
It might have nothing to do with Solr 3.6 vs. Solr 4.2, because the Solr 3.6
indexer gets restarted once every 2-3 days.
  Will investigate why memory usage is so high on the indexer.



On Wed, May 22, 2013 at 10:03 AM, Shawn Heisey s...@elyograg.org wrote:

 On 5/21/2013 9:22 PM, Umesh Prasad wrote:
  This is our own implementation of a data source (canonical name
  com.flipkart.w3.solr.MultiSPCMSProductsDataSource), which pulls the data
  from our downstream service and doesn't cache data in RAM. It fetches
  the data in batches of 200 and iterates over it when DIH asks for it. I
  will check the possibility of a leak, but it's unlikely.
  Can the OOM issue be because, during analysis, IndexWriter finds the
  document too large to fit in 100 MB of memory and can't flush to disk?
  Our analyzer chain doesn't make that easy (especially with a field that does
  a cross product of synonym terms).

 If your documents are really large (hundreds of KB, or a few MB), you
 might need a bigger ramBufferSizeMB value ... but if that were causing
 problems, I would expect it to show up during import, not at commit time.

 How much of your 32GB heap is in use during indexing?  Would you be able
 to try with the heap at 31GB instead of 32GB?  One of Java's default
 optimizations (UseCompressedOops) gets turned off with a heap size of
 32GB because it doesn't work any more, and that might lead to strange
 things happening.

 Do you have the ability to try 4.3 instead of 4.2.1?

 Thanks,
 Shawn




-- 
---
Thanks & Regards
Umesh Prasad


Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Otis Gospodnetic
Hi,

Maybe you can share more info, such as your java command line or jstat
output from right before the OOM ...

Otis
Solr & ElasticSearch Support
http://sematext.com/
On May 21, 2013 1:58 AM, Umesh Prasad umesh.i...@gmail.com wrote:

 Hi All,
    I am hitting an OOM error while trying to do a hard commit on one of
 the cores.

 Transaction log dir is empty and DIH shows indexing going on for > 13 hrs.

 *Indexing since 13h 22m 22s*
 Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
 Processed: 1,016,696 (21/s)
 Started: about 13 hours ago



 <response>
 <lst name="responseHeader"><int name="status">500</int><int
 name="QTime">4</int></lst><lst name="error"><str name="msg">this writer hit
 an OutOfMemoryError; cannot commit</str><str
 name="trace">java.lang.IllegalStateException: this writer hit an
 OutOfMemoryError; cannot commit
 at

 org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2661)
 at
 org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2827)
 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2807)
 at

 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:536)
 at

 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at

 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at

 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
 at

 org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
 at

 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at

 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at

 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
 at

 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
 at

 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
 at

 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
 at

 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at

 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at

 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at

 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at

 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at

 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at

 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
 at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
 at

 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:662)




 --
 ---
 Thanks & Regards
 Umesh Prasad



Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Shawn Heisey
On 5/20/2013 11:57 PM, Umesh Prasad wrote:
I am hitting an OOM error while trying to do a hard commit on one of
 the cores.
 
 Transaction log dir is empty and DIH shows indexing going on for > 13 hrs.
 
 *Indexing since 13h 22m 22s*
 Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
 Processed: 1,016,696 (21/s)
 Started: about 13 hours ago

In addition to what Otis requested, can you also provide your dataimport
config file?  If you need to obscure connection details like username,
password, hostname, and port, that would be perfectly OK, but the
overall details from the connection must be intact.

Please use a paste website, like pastie.org, fpaste.org, or whatever
your favorite is, and send us link(s).

Thanks,
Shawn



Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Jack Krupansky

Try again on a machine with more memory. Or did you do that already?

-- Jack Krupansky

-Original Message- 
From: Umesh Prasad

Sent: Tuesday, May 21, 2013 1:57 AM
To: solr-user@lucene.apache.org
Subject: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

Hi All,
  I am hitting an OOM error while trying to do a hard commit on one of
the cores.

Transaction log dir is empty and DIH shows indexing going on for > 13 hrs.

*Indexing since 13h 22m 22s*
Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
Processed: 1,016,696 (21/s)
Started: about 13 hours ago



<response>
<lst name="responseHeader"><int name="status">500</int><int
name="QTime">4</int></lst><lst name="error"><str name="msg">this writer hit
an OutOfMemoryError; cannot commit</str><str
name="trace">java.lang.IllegalStateException: this writer hit an
OutOfMemoryError; cannot commit
   at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2661)
   at
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2827)
   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2807)
   at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:536)
   at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
   at
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
   at
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
   at
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
   at
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
   at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
   at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
   at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
   at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
   at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
   at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
   at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
   at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
   at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
   at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
   at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
   at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
   at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
   at java.lang.Thread.run(Thread.java:662)




--
---
Thanks & Regards
Umesh Prasad 



Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Umesh Prasad
We have sufficient RAM on the machine (64 GB) and we have given the JVM 32 GB
of memory. The machine runs indexing primarily.

The JVM doesn't run out of memory; it is the particular IndexWriter/SolrCore
which has. Maybe we have specified too low a memory for the IndexWriter.

We index mainly product data and use DIH to pull data from downstream
services. Autocommit is off. The commit is infrequent for legacy
reasons: 1 commit every 2-3 hrs. If it makes a difference, a core can
have more than 10 lakh documents uncommitted at a time. The IndexWriter has a
memory limit of 100 MB.
  We ran with the same config on Solr 3.5 and we never ran out of memory.
But then, I hadn't tried hard commits on Solr 3.5.

Data-Source Entry :
<dataConfig>
  <dataSource name="products" type="MultiSPCMSProductsDataSource"
    spCmsHost="$config.spCmsHost" spCmsPort="$config.spCmsPort"
    spCmsTimeout="3" cmsBatchSize="200" psURL="$config.psUrl"
    autoCommit="false"/>
  <document name="products">
    <entity name="item" pk="id"
      transformer="w3.solr.transformers.GenericProductsTransformer"
      dataSource="products">
    </entity>
  </document>
</dataConfig>

IndexConfig:

<ramBufferSizeMB>100</ramBufferSizeMB>
<maxMergeDocs>2147483647</maxMergeDocs>
<maxFieldLength>5</maxFieldLength>
<writeLockTimeout>1000</writeLockTimeout>
<commitLockTimeout>1</commitLockTimeout>





On Tue, May 21, 2013 at 7:07 PM, Jack Krupansky j...@basetechnology.comwrote:

 Try again on a machine with more memory. Or did you do that already?

 -- Jack Krupansky

 -Original Message- From: Umesh Prasad
 Sent: Tuesday, May 21, 2013 1:57 AM
 To: solr-user@lucene.apache.org
 Subject: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1


 Hi All,
   I am hitting an OOM error while trying to do a hard commit on one of
 the cores.

 Transaction log dir is empty and DIH shows indexing going on for > 13 hrs.

 *Indexing since 13h 22m 22s*
 Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
 Processed: 1,016,696 (21/s)
 Started: about 13 hours ago



 <response>
 <lst name="responseHeader"><int name="status">500</int><int
 name="QTime">4</int></lst><lst name="error"><str name="msg">this writer hit
 an OutOfMemoryError; cannot commit</str><str
 name="trace">java.lang.IllegalStateException: this writer hit an
 OutOfMemoryError; cannot commit
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2661)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2827)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2807)
    at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:536)
    at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
    at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
    at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:662)

Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Shawn Heisey

On 5/21/2013 5:14 PM, Umesh Prasad wrote:

We have sufficient RAM on the machine (64 GB) and we have given the JVM 32 GB
of memory. The machine runs indexing primarily.

The JVM doesn't run out of memory; it is the particular IndexWriter/SolrCore
which has. Maybe we have specified too low a memory for the IndexWriter.

We index mainly product data and use DIH to pull data from downstream
services. Autocommit is off. The commit is infrequent for legacy
reasons: 1 commit every 2-3 hrs. If it makes a difference, a core can
have more than 10 lakh documents uncommitted at a time. The IndexWriter has a
memory limit of 100 MB.
  We ran with the same config on Solr 3.5 and we never ran out of memory.
But then, I hadn't tried hard commits on Solr 3.5.


Hard commits are the only kind of commits that Solr 3.x has.  It's soft 
commits that are new with 4.x.



Data-Source Entry :
<dataConfig>
<dataSource name="products" type="MultiSPCMSProductsDataSource"


This appears to be using a custom data source, not one of the well-known 
types.  If it had been JDBC, I would be saying that your JDBC driver is 
trying to cache the entire result set in RAM.  With a MySQL data source, 
a batchSize of -1 fixes this problem, by internally changing the JDBC 
fetchSize to Integer.MIN_VALUE.  Other databases have different mechanisms.
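The batchSize-to-fetchSize mapping described above can be sketched as follows. This is an illustrative sketch, not the actual DIH source; `streamingQuery` is a hypothetical helper, and the SQL/connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeSketch {
    // Sketch of the DIH-style mapping: batchSize="-1" becomes
    // Integer.MIN_VALUE, which MySQL Connector/J interprets as
    // "stream rows one at a time" instead of caching the whole
    // result set in RAM.
    static int jdbcFetchSize(int dihBatchSize) {
        return dihBatchSize == -1 ? Integer.MIN_VALUE : dihBatchSize;
    }

    // Hypothetical usage against an already-open connection:
    static ResultSet streamingQuery(Connection conn, String sql) throws SQLException {
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(jdbcFetchSize(-1)); // enable row streaming
        return stmt.executeQuery(sql);
    }

    public static void main(String[] args) {
        System.out.println(jdbcFetchSize(-1) == Integer.MIN_VALUE); // true
        System.out.println(jdbcFetchSize(200)); // 200
    }
}
```

The point of the indirection is that a positive fetchSize is only a hint to Connector/J, while Integer.MIN_VALUE is the documented trigger for streaming mode.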


With this data source, I have no idea at all how to make sure that it 
doesn't cache all results in RAM.  It might be that the combination of 
the new Solr and this custom data source causes a memory leak, something 
that doesn't happen with the old Solr version.


You said that the transaction log directory is empty.  That rules out 
one possibility, which would be solved by the autoCommit settings on 
this page:


http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_startup

Aside from the memory leak idea, or possibly having your entire source 
data cached in RAM, I have no idea what's happening here.


Thanks,
Shawn



Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Umesh Prasad
Hi Shawn,
This is our own implementation of a data source (canonical name
com.flipkart.w3.solr.MultiSPCMSProductsDataSource), which pulls the data
from our downstream service and doesn't cache data in RAM. It fetches
the data in batches of 200 and iterates over it when DIH asks for it. I
will check the possibility of a leak, but it's unlikely.
   Can the OOM issue be because, during analysis, IndexWriter finds the
document too large to fit in 100 MB of memory and can't flush to disk?
Our analyzer chain doesn't make that easy (especially with a field like the
one below, which does a cross product of synonym terms):

<fieldType name="textStemmed" class="solr.TextField" indexed="true"
    stored="false" multiValued="true" positionIncrementGap="100"
    omitNorms="true">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms_index.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.KStemFilterFactory"/>
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms_index.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="1" catenateWords="1"
        catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms_index.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.KStemFilterFactory"/>
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms_index.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="1" catenateWords="1"
        catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
  </analyzer>
</fieldType>
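For reference, the fetch-in-batches-of-200 behaviour described above can be sketched like this. It is a simplified stand-in, not the actual MultiSPCMSProductsDataSource; `fetchBatch()` simulates the downstream-service call.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of a batch-and-iterate data source: at most one batch of
// 200 rows is held in RAM at a time, and a DIH-style consumer just
// calls hasNext()/next(). fetchBatch() stands in for one remote call.
public class BatchedSource implements Iterator<String> {
    static final int BATCH_SIZE = 200;
    private final int totalRows;        // rows the remote service will return
    private int fetched = 0;
    private final Deque<String> batch = new ArrayDeque<>();

    public BatchedSource(int totalRows) { this.totalRows = totalRows; }

    private void fetchBatch() {
        // simulate one remote call returning up to BATCH_SIZE rows
        int n = Math.min(BATCH_SIZE, totalRows - fetched);
        for (int i = 0; i < n; i++) batch.add("doc-" + (fetched + i));
        fetched += n;
    }

    @Override public boolean hasNext() {
        if (batch.isEmpty() && fetched < totalRows) fetchBatch();
        return !batch.isEmpty();
    }

    @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return batch.poll();
    }

    public static void main(String[] args) {
        int count = 0;
        for (Iterator<String> it = new BatchedSource(450); it.hasNext(); it.next()) count++;
        System.out.println(count); // prints 450, fetched in 3 batches
    }
}
```

If this shape matches the real implementation, heap usage from the data source itself stays flat regardless of how many documents an import processes.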




On Wed, May 22, 2013 at 5:03 AM, Shawn Heisey s...@elyograg.org wrote:

 On 5/21/2013 5:14 PM, Umesh Prasad wrote:

  We have sufficient RAM on the machine (64 GB) and we have given the JVM
  32 GB of memory. The machine runs indexing primarily.

  The JVM doesn't run out of memory; it is the particular
  IndexWriter/SolrCore
  which has. Maybe we have specified too low a memory for the IndexWriter.

  We index mainly product data and use DIH to pull data from downstream
  services. Autocommit is off. The commit is infrequent for legacy
  reasons: 1 commit every 2-3 hrs. If it makes a difference, a core can
  have more than 10 lakh documents uncommitted at a time. The IndexWriter
  has a memory limit of 100 MB.
    We ran with the same config on Solr 3.5 and we never ran out of memory.
  But then, I hadn't tried hard commits on Solr 3.5.


 Hard commits are the only kind of commits that Solr 3.x has.  It's soft
 commits that are new with 4.x.


  Data-Source Entry :
 <dataConfig>
 <dataSource name="products" type="MultiSPCMSProductsDataSource"


 This appears to be using a custom data source, not one of the well-known
 types.  If it had been JDBC, I would be saying that your JDBC driver is
 trying to cache the entire result set in RAM.  With a MySQL data source, a
 batchSize of -1 fixes this problem, by internally changing the JDBC
 fetchSize to Integer.MIN_VALUE.  Other databases have different mechanisms.

 With this data source, I have no idea at all how to make sure that it
 doesn't cache all results in RAM.  It might be that the combination of the
 new Solr and this custom data source causes a memory leak, something that
 doesn't happen with the old Solr version.

 You said that the transaction log directory is empty.  That rules out one
 possibility, which would be solved by the autoCommit settings on this page:

 http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_startup

 Aside from the memory leak idea, or possibly having your entire source
 data cached in RAM, I have no idea what's happening here.

 Thanks,
 Shawn




-- 
---
Thanks & Regards
Umesh Prasad


Re: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-21 Thread Shawn Heisey
On 5/21/2013 9:22 PM, Umesh Prasad wrote:
 This is our own implementation of a data source (canonical name
 com.flipkart.w3.solr.MultiSPCMSProductsDataSource), which pulls the data
 from our downstream service and doesn't cache data in RAM. It fetches
 the data in batches of 200 and iterates over it when DIH asks for it. I
 will check the possibility of a leak, but it's unlikely.
    Can the OOM issue be because, during analysis, IndexWriter finds the
 document too large to fit in 100 MB of memory and can't flush to disk?
 Our analyzer chain doesn't make that easy (especially with a field that does
 a cross product of synonym terms).

If your documents are really large (hundreds of KB, or a few MB), you
might need a bigger ramBufferSizeMB value ... but if that were causing
problems, I would expect it to show up during import, not at commit time.

How much of your 32GB heap is in use during indexing?  Would you be able
to try with the heap at 31GB instead of 32GB?  One of Java's default
optimizations (UseCompressedOops) gets turned off with a heap size of
32GB because it doesn't work any more, and that might lead to strange
things happening.
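The ~32 GB cutoff mentioned above can be illustrated with a rough back-of-the-envelope check. The constant and the true/false outcomes approximate HotSpot's behaviour; the exact cutoff depends on object alignment and other JVM flags, so treat this as a sketch, not an exact model.

```java
// Rough illustration of the compressed-oops cutoff: with 8-byte object
// alignment, 32-bit compressed references can address about 32 GB of
// heap. Request a heap at or above that and HotSpot falls back to full
// 64-bit pointers, so every object reference roughly doubles in size.
public class OopsThreshold {
    static final long GB = 1024L * 1024 * 1024;
    static final long COMPRESSED_OOPS_LIMIT = 32 * GB; // approximate

    static boolean compressedOopsLikely(long maxHeapBytes) {
        return maxHeapBytes < COMPRESSED_OOPS_LIMIT;
    }

    public static void main(String[] args) {
        System.out.println(compressedOopsLikely(31 * GB)); // true
        System.out.println(compressedOopsLikely(32 * GB)); // false
    }
}
```

To see what a real JVM actually decided, `java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops` reports the flag's final value for that heap size.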

Do you have the ability to try 4.3 instead of 4.2.1?

Thanks,
Shawn



Hard Commit giving OOM Error on Index Writer in Solr 4.2.1

2013-05-20 Thread Umesh Prasad
Hi All,
   I am hitting an OOM error while trying to do a hard commit on one of
the cores.

Transaction log dir is empty and DIH shows indexing going on for > 13 hrs.

*Indexing since 13h 22m 22s*
Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
Processed: 1,016,696 (21/s)
Started: about 13 hours ago



<response>
<lst name="responseHeader"><int name="status">500</int><int
name="QTime">4</int></lst><lst name="error"><str name="msg">this writer hit
an OutOfMemoryError; cannot commit</str><str
name="trace">java.lang.IllegalStateException: this writer hit an
OutOfMemoryError; cannot commit
at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2661)
at
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2827)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2807)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:536)
at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
at
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1055)
at
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
at
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)




-- 
---
Thanks & Regards
Umesh Prasad