On 5/17/2016 12:29 AM, scott.chu wrote:
> I built a SolrCloud with 2 nodes, 1 shard, 2 replicas. I added docs in XML 
> format using post.jar, up to 2.85M+ docs and a 10 GB index. When I add more 
> docs, the solr.log shows:
>
> --------------------------------------
>     2016-05-17 14:01:09,024 WARN  (main) [   ] o.e.j.s.h.RequestLogHandler 
> !RequestLog
>     2016-05-17 14:01:09,275 WARN  (main) [   ] o.e.j.s.SecurityHandler 
> ServletContext@o.e.j.w.WebAppContext@57fffcd7{/solr,file:/D:/portable_sw/solr-5.4.1/server/solr-webapp/webapp/,STARTING}{D:\portable_sw\solr-5.4.1\server/solr-webapp/webapp}
>  has uncovered http methods for path: /
>     2016-05-17 14:01:09,346 WARN  (main) [   ] o.a.s.c.CoreContainer Couldn't 
> add files from D:\portable_sw\solr-5.4.1\mynodes\cloud\node1\lib to 
> classpath: D:\portable_sw\solr-5.4.1\mynodes\cloud\node1\lib
>     2016-05-17 14:01:11,419 WARN  
> (coreLoadExecutor-7-thread-1-processing-n:10.18.59.179:8983_solr) [c:cugna 
> s:shard1 r:core_node2 x:cugna_shard1_replica2] o.a.s.u.UpdateLog Exception 
> reverse reading log 
> java.io.EOFException
> ...
> --------------------------------------

I have no idea what those warnings are saying.  The first two are from
Jetty.

I assume your XML is in the Solr XML update format.  How big is the
XML file you are sending?  Solr has a default 2MB limit on the POST body
size.  This can be increased with the formdataUploadLimitInKB parameter
... but do not increase it too much.  One of the reasons the limits
exist is to keep memory usage down.  Instead of trying to index one
really massive file, break it into pieces small enough to fit within the
POST size limit.
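
For reference, that limit lives on the <requestParsers> element inside
the <requestDispatcher> section of solrconfig.xml.  A minimal sketch,
raising the form data limit to 8MB (the 8192 value is arbitrary, for
illustration only; the other attributes are shown as they typically
appear in the stock config):

    <requestDispatcher>
      <requestParsers enableRemoteStreaming="true"
                      multipartUploadLimitInKB="2048000"
                      formdataUploadLimitInKB="8192"
                      addHttpRequestToContext="false"/>
    </requestDispatcher>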

> Later I stopped everything and deleted write.lock (I usually do this in Solr 3 
> when adding docs fails) and added docs again, but SolrCloud said it can't find 
> write.lock. So I restored write.lock and ran post.jar again. The output shows:

If you need to delete write.lock after you stop Solr, that means you are
forcibly terminating Solr, not asking it to stop and waiting for a
graceful shutdown.  On a normal shutdown, Solr removes all the
write.lock files itself.  The bin/solr script only waits five seconds
for Solr to stop before it proceeds with forcible termination, which
often isn't enough; that timeout should be increased.
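
For the record, a graceful stop looks like this, where the port is
whatever the node was started on (8983 here, matching your log):

    bin/solr stop -p 8983

In 5.4.1 the five-second wait is hardcoded in the bin/solr script
itself, so raising it means editing the script.  If I remember right,
later releases expose it as a SOLR_STOP_WAIT setting in solr.in.sh.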



Replying to things stated later in the thread:

If you only supply Solr with one ZooKeeper host, then Solr will stop
working if that ZooKeeper host goes down, because it will not have any
information about the other servers.  ZooKeeper servers do not currently
inform ZooKeeper clients about other servers in the ensemble.  When
ZooKeeper 3.5 comes out and Solr updates to use that version, this
*might* work, but I am not sure.  The best option is to simply tell
Solr about all your ZooKeeper servers when you start it, particularly
because you may not be sure which of those servers are actually *up*
at startup.
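
As a sketch, with illustrative hostnames zk1/zk2/zk3 and the optional
/solr chroot:

    bin/solr start -c -z "zk1:2181,zk2:2181,zk3:2181/solr"

You can also put that same string in the ZK_HOST variable in solr.in.sh
(solr.in.cmd on Windows) so every start picks it up automatically.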

ZooKeeper 3.5 will support dynamic cluster membership -- adding and
removing servers without restarting servers or clients.  You should
still give Solr at least three servers when you start it, though -- so
it can reliably *find* the cluster.
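
For what it's worth, in the 3.5 docs adding a server looks like this
from the zkCli.sh prompt (hostname and ports illustrative):

    reconfig -add server.4=zk4:2888:3888;2181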



This might be info you know already:  Running everything on one server
is NOT suitable for production.  Once you do get to production, with a
minimum of three ZooKeeper machines and two Solr machines (which may
also be the ZooKeeper machines), and you end up needing to run multiple
Solr instances per machine, this is the recommended way to do it (on
Linux and other *NIX operating systems):

https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-RunningmultipleSolrnodesperhost
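
From that page, each additional node on a host gets its own service
name and port through the install script, something like this (the
service name and port here are just examples):

    sudo bash ./install_solr_service.sh solr-5.4.1.tgz -s solr2 -p 8984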

Thanks,
Shawn
