Re: error message in solr logs

2012-08-09 Thread soni.s
Thanks for the reply, Eric. But I am still not clear here, because we have just
one part of the app that adds to the index, and if the code were sending wrong
headers it should do so for all records, shouldn't it? Some parts of the code
are below; we use the SolrJ API, as I mentioned earlier:

// ...
SolrInputDocument doc = new SolrInputDocument();

for (String indexField : listOfFieldNames) {
    // the value is pulled from a value object; a placeholder literal is shown here
    doc.addField(indexField, "valueForTheFieldFromAValueObject");
}
commonsHttpSolrServer.add(doc);
// ... other code

What we do is load multiple instances of CommonsHttpSolrServer, each connected
to a node:core, cache them in memory, and reuse them for searching and adding
to the indexes.
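
Roughly like the sketch below. This is a simplified illustration, not our exact
code: the class, map, and method name are made up for the example, and it
assumes SolrJ 3.x, where a CommonsHttpSolrServer is normally created once per
URL and shared:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ServerCache {
    // one reusable server per "node:core" base URL
    private final ConcurrentMap<String, CommonsHttpSolrServer> servers =
            new ConcurrentHashMap<String, CommonsHttpSolrServer>();

    public CommonsHttpSolrServer serverFor(String coreUrl) throws Exception {
        CommonsHttpSolrServer s = servers.get(coreUrl);
        if (s == null) {
            // first caller for this core creates the instance; if two threads
            // race, putIfAbsent keeps exactly one and both reuse it
            servers.putIfAbsent(coreUrl, new CommonsHttpSolrServer(coreUrl));
            s = servers.get(coreUrl);
        }
        return s;
    }
}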

So could this reuse of the server instances be causing the problem, or am I
missing something here?

Thanks in advance.





error message in solr logs

2012-08-06 Thread soni.s
Hi, we have a large Lucene index base created using Solr. It's split into 16
cores, each containing almost 10 GB of index. We have deployed 8 instances of
Solr, each hosting two cores. The logic for identifying which core a document
resides in, based on the document id, is built into the application. There are
also other queries that go to all the cores across all the Solr instances,
because a query may not be based on the document id. We use SolrJ to connect
to and query the indexes and get results.
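
For illustration, here is a rough sketch of the two query paths. The hash
scheme, host names, and ports are placeholders, not our real mapping; the
shards parameter is the standard Solr 3.x way to fan a query out over cores:

import org.apache.solr.client.solrj.SolrQuery;

public class Routing {
    static final int NUM_CORES = 16;
    static final int CORES_PER_INSTANCE = 2;

    // Path 1: requests keyed by document id go straight to one core.
    static String coreUrlFor(String docId) {
        int core = (docId.hashCode() & 0x7fffffff) % NUM_CORES;
        int instance = core / CORES_PER_INSTANCE;
        return "http://solr-host" + instance + ":8080/solr/core" + core;
    }

    // Path 2: queries not keyed by id fan out to every core via the
    // shards parameter (host:port/path form, no http:// prefix).
    static SolrQuery fanOutQuery(String queryString) {
        StringBuilder shards = new StringBuilder();
        for (int core = 0; core < NUM_CORES; core++) {
            if (core > 0) shards.append(',');
            shards.append("solr-host").append(core / CORES_PER_INSTANCE)
                  .append(":8080/solr/core").append(core);
        }
        SolrQuery q = new SolrQuery(queryString);
        q.set("shards", shards.toString());
        return q;
    }
}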
We have many more reads than writes overall: a document is inserted once and
updated at most twice within a few days, but it could potentially be searched
tens of times in a day.

Lately we have been noticing the exception below in our Solr logs. It happens
once or twice a day on a few of the cores.

SEVERE: org.apache.solr.common.SolrException: Invalid chunk header
    at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:662)
Caused by: com.ctc.wstx.exc.WstxIOException: Invalid chunk header
    at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:548)
    at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:604)
    at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:660)
    at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:331)
    at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:68)
    ... 17 more
Caused by: java.io.IOException: Invalid chunk header
    at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:133)
    at org.apache.coyote.http11.InternalInputBuffer.doRead(InternalInputBuffer.java:710)
    at org.apache.coyote.Request.doRead(Request.java:428)
    at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:304)
    at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:405)
    at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:327)
    at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:193)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)

The env consists of:

OS: Enterprise Linux, 64-bit
Tomcat version: 6.0.26
Solr version: 3.3.0
JDK: 1.6
Total number of Solr documents: more than 20 million

Can someone please let me know what this is? Googling around doesn't give me
much info. Overall I don't see much of a problem in the application's use, but
I wanted to know what this error is and what its impact on the app could be in
the future. Thanks for any help in advance.








read write solr shard setup

2012-08-06 Thread soni.s
Hi, I am trying to use a read/write Solr setup. What I mean is that I would
have a common location for the Lucene indexes and configure one Solr instance
only for reads and another instance only to write new documents, both
instances pointing to the same index location. The approach is described here:
http://wiki.apache.org/solr/NearRealtimeSearchTuning
My question is: is there a way I can read the documents from the read-only
instance without calling the empty 'commit()'? I mean, is there some
configuration I can change in solrconfig.xml or something?
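
For context, the empty commit we issue today looks roughly like this (the URL
is a placeholder); it forces the read-only instance to reopen its searcher
against the shared index:

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ReopenSearcher {
    public static void main(String[] args) throws Exception {
        // read-only instance pointing at the shared index directory
        CommonsHttpSolrServer readOnly =
                new CommonsHttpSolrServer("http://ro-host:8080/solr/core0");
        // no documents are added; the commit just makes new segments visible
        readOnly.commit();
    }
}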

I have the following configuration in solrconfig.xml:

<autoCommit>
   <maxDocs>1</maxDocs>
   <maxTime>10</maxTime>
</autoCommit>

But this doesn't seem to help the read-only node see the just-committed
documents.




