[ https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12506973 ]

Yonik Seeley commented on SOLR-240:
-----------------------------------

> And so I made the attached patch which seems to run at least 100x longer than 
> without.

Does this mean you still had occasional issues with native locking?

Does anyone ever see exceptions relating to removal of the lockfile? (Presumably 
that's why it can't be acquired by the new IndexWriter instance.)

It's worrying that it's also reproducible on Linux... (although the oldest Solr 
collections have been running at CNET for 2 years now, and I've never seen this 
happen).   What I *have* seen is that exact exception when the server died, 
restarted, and then couldn't grab the write lock... normally due to too small a 
heap causing excessive GC and leading Resin's wrapper to restart the container.
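For context, a SimpleFSLock boils down to an atomically created lock file: a process that dies without releasing it leaves the file behind, and the next IndexWriter then times out trying to obtain it. A minimal stdlib sketch of that mechanism (illustrative only; the class and method names here are not Lucene's actual implementation):

```java
import java.io.File;
import java.io.IOException;

// Illustrative model of SimpleFSLock-style locking: the lock is "held"
// whenever the lock file exists, so a crash that skips release() leaves
// the directory locked until someone deletes the stale file.
public class LockFileDemo {
    private final File lockFile;

    public LockFileDemo(File dir, String name) {
        this.lockFile = new File(dir, name);
    }

    // Atomically create the lock file; false means another writer
    // (or a stale lock left by a dead process) already holds it.
    public boolean obtain() throws IOException {
        return lockFile.createNewFile();
    }

    // Release by deleting the lock file.
    public void release() {
        lockFile.delete();
    }
}
```

Under this model, "Lock obtain timed out" is just obtain() returning false repeatedly until the timeout elapses, which is why a heap-starved restart that never deleted the old lock file produces exactly this exception.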

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> --------------------------------------------------------
>
>                 Key: SOLR-240
>                 URL: https://issues.apache.org/jira/browse/SOLR-240
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 1.2
>         Environment: windows xp
>            Reporter: Will Johnson
>         Attachments: IndexWriter.patch, IndexWriter2.patch, stacktrace.txt, 
> ThrashIndex.java
>
>
> When running the soon-to-be-attached sample application against Solr, it will 
> eventually die.  This same error has happened on both Windows and RH4 Linux.  
> The app just submits docs with an id in batches of 10, performs a commit, 
> then repeats over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
