Hi,

What you are doing sounds fine.  You don't need to commit while indexing, 
though; just commit/optimize at the end.  I'm not saying this will solve your 
problem, but give it a try.
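A minimal sketch of that pattern, in case it helps: add in batches with no intermediate commits, then issue a single commit at the end. The `SolrStub` class below is a hypothetical stand-in so the batching logic can be shown offline; with a real client such as pysolr the calls would be `solr.add(batch, commit=False)` followed by one `solr.commit()`.

```python
BATCH_SIZE = 20000  # same batch size as in the original question

class SolrStub:
    """Hypothetical stand-in that records the operations a real Solr
    client would send, so the pattern can be demonstrated offline."""
    def __init__(self):
        self.ops = []

    def add(self, docs, commit=False):
        self.ops.append(("add", len(docs), commit))

    def commit(self):
        self.ops.append(("commit",))

def migrate(records, solr, batch_size=BATCH_SIZE):
    # Send documents in fixed-size batches, never committing mid-stream.
    for i in range(0, len(records), batch_size):
        solr.add(records[i:i + batch_size], commit=False)
    # One commit (optionally followed by an optimize) at the very end.
    solr.commit()

solr = SolrStub()
migrate([{"id": n} for n in range(50000)], solr)
print(solr.ops)
# three uncommitted adds (20000 + 20000 + 10000 docs), then exactly one commit
```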
 
Otis
----
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://search-hadoop.com/



----- Original Message ----
> From: "Sethi, Parampreet" <parampreet.se...@corp.aol.com>
> To: solr-user@lucene.apache.org
> Sent: Fri, April 16, 2010 1:13:57 PM
> Subject: Solr Index Lock Issue
> 
> Hi All,
> 
> We are facing an issue with the Solr server in the DMOZ data migration.
> Solr has 0 records when the migration starts, and data is added into
> Solr in batches of 20000 records. A commit is called on Solr after
> every 20k records are processed.
> 
> While committing the data into Solr, a Lucene lock file is created in
> the <Solr_Home>/data/index folder, which is automatically released once
> the commit succeeds. But after 4-5 batches, the lock file remains there
> and Solr just hangs and does not add any new records. Sometimes the
> whole migration goes through without any errors.
> 
> Kindly let me know in case some setting is required on the Solr side
> which ensures that until Solr commits the index, the next set of
> records is not added.
> 
> Thanks,
> Param
