[ https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506983 ]

Will Johnson commented on SOLR-240:
-----------------------------------



No, after I applied the patch I have never seen a lockup. 

The oldest Solr collections have been running at CNET for 2 years now, and
I've never seen this happen.   What I *have* seen is that exact
exception when the server died, restarted, and then couldn't grab the
write lock.... normally due to a heap that wasn't big enough, causing
excessive GC and leading Resin's wrapper to restart the container.

Another reason to use native locking.  From the Lucene NativeFSLockFactory
javadocs:  "Furthermore, if the JVM crashes, the OS will free any held
locks, whereas SimpleFSLockFactory will keep the locks held, requiring
manual removal before re-running Lucene."
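The difference the javadoc describes can be seen with the JDK primitives these factories build on (a minimal standalone sketch, not Lucene's actual classes): a SimpleFSLock-style scheme treats the lock file's *existence* as the lock, so a crash leaves it behind, while FileChannel.tryLock is an OS-level lock the kernel releases when the process dies.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockDemo {
    // SimpleFSLock-style: the lock *is* the file's existence, so a crashed
    // JVM leaves the file behind and every later attempt fails until it
    // is deleted by hand.
    static boolean obtainSimple(File lockFile) throws IOException {
        return lockFile.createNewFile();   // false if a (stale) lock remains
    }

    // NativeFSLock-style: an OS-level lock on the file; the OS frees it
    // automatically when the process dies, so no stale lock survives a crash.
    static FileLock obtainNative(File lockFile) throws IOException {
        FileChannel ch = new RandomAccessFile(lockFile, "rw").getChannel();
        return ch.tryLock();               // null if another process holds it
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("write", ".lock");
        f.delete();
        System.out.println("simple obtained:  " + obtainSimple(f));
        // Second attempt fails: the file still exists, exactly as it would
        // after a crash.
        System.out.println("simple re-obtain: " + obtainSimple(f));
        FileLock l = obtainNative(f);
        System.out.println("native obtained:  " + (l != null));
        l.release();
    }
}
```

Killing the JVM while `obtainNative`'s lock is held leaves nothing to clean up; killing it after `obtainSimple` leaves a stale lock file, which is the restart failure mode described above.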

My hunch (and that's all it is) is that people seeing/not seeing the
issue may come down to usage patterns.  My project is heavily focused on
low indexing latency, so we're doing huge numbers of
adds/deletes/commits/searches in very fast succession and from multiple
clients.  A more batch-oriented update pattern may never see the
issue.
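That access pattern can be sketched as a client hammering Solr's XML update handler (this is my own sketch, not the attached ThrashIndex.java; the localhost URL is an assumption for a default install):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ThrashSketch {
    // Assumed update URL for a stock Solr install; adjust host/port as needed.
    static final String UPDATE_URL = "http://localhost:8983/solr/update";

    // Build an <add> body containing `count` docs with sequential ids.
    static String addBatch(int start, int count) {
        StringBuilder sb = new StringBuilder("<add>");
        for (int id = start; id < start + count; id++) {
            sb.append("<doc><field name=\"id\">").append(id).append("</field></doc>");
        }
        return sb.append("</add>").toString();
    }

    // POST one XML message to the update handler.
    static void post(String xml) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(UPDATE_URL).openConnection();
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes("UTF-8"));
        }
        conn.getInputStream().close(); // drain the response
    }

    // Hammer the index: add a batch of 10, commit, repeat; run this from
    // several clients at once to reproduce the low-latency pattern.
    static void thrash() throws Exception {
        for (int batch = 0; ; batch++) {
            post(addBatch(batch * 10, 10));
            post("<commit/>");
        }
    }

    public static void main(String[] args) {
        // Just show the payload shape; call thrash() against a live server.
        System.out.println(addBatch(0, 2));
    }
}
```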

The patch should be safe because, as is, it doesn't change any API or
cause any change in existing functionality whatsoever unless you use
the new option in solrconfig.  I would argue that native locking should
be the default, though.
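With the patch applied, opting in might look like this in solrconfig.xml (a sketch only; the element name and placement are my reading of the patch and may differ from what gets committed):

```xml
<!-- solrconfig.xml: hypothetical sketch of the new option -->
<indexDefaults>
  <!-- "simple" keeps today's SimpleFSLockFactory behavior; "native" would
       use NativeFSLockFactory, so the OS frees the write lock if the JVM
       crashes instead of leaving a stale lock file behind -->
  <lockType>native</lockType>
</indexDefaults>
```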

- will    


> java.io.IOException: Lock obtain timed out: SimpleFSLock
> --------------------------------------------------------
>
>                 Key: SOLR-240
>                 URL: https://issues.apache.org/jira/browse/SOLR-240
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 1.2
>         Environment: windows xp
>            Reporter: Will Johnson
>         Attachments: IndexWriter.patch, IndexWriter2.patch, stacktrace.txt, 
> ThrashIndex.java
>
>
> When running the soon-to-be-attached sample application against Solr, it will 
> eventually die.  This same error has happened on both Windows and RH4 Linux.  
> The app is just submitting docs with an id in batches of 10, performing a 
> commit, then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.