On 3/14/2019 1:13 AM, VAIBHAV SHUKLA shuklavaibha...@yahoo.in wrote:
When I restart Solr it throws the following error. A Solr collection whose index for PDFs is stored in HDFS throws this error during Solr restart.

Error

<snip>

Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir 
'hdfs://192.168.1.16:8020/PDFIndex/data/index/' of core 'PDFIndex' is already 
locked. The most likely cause is another Solr server (or another solr core in 
this server) also configured to use this directory; other possible causes may 
be specific to lockType: hdfs

Solr was shut down forcefully, so the lockfile remains in the core's index directory (which in your case is in HDFS). A graceful shutdown would have deleted the lockfile.

What version of Solr, and what OS do you have it running on?

For a while now, on non-Windows operating systems, the "stop" action in the bin/solr script has waited up to three minutes for Solr to shut down gracefully before forcefully killing it. This has eliminated most of these problems on those operating systems.
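On those platforms the wait time is configurable. A sketch of the relevant setting in solr.in.sh (the variable name SOLR_STOP_WAIT and its default may differ between Solr versions, so treat this as an assumption to verify against your install):

```shell
# In solr.in.sh: number of seconds the bin/solr "stop" action waits for a
# graceful shutdown before forcefully killing the Solr process.
# Raising this can help if the index is large and shutdown is slow.
SOLR_STOP_WAIT=180
```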

On Windows, the bin\solr script waits only 5 seconds before forcefully killing Solr, which is what used to happen on all operating systems. This is extremely likely to cause problems like this one. Fixing this on Windows is on the radar, but in general the project lacks deep Windows expertise, so it's not proceeding quickly.

I'm having trouble locating the issue for fixing the problem on Windows.

To fix it, make sure Solr is fully stopped, then find the "write.lock" file in your core's HDFS index directory and delete it.
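Assuming the index path shown in the error message above, and that the lockfile uses the default name write.lock, deleting it from HDFS would look something like this (run only while no Solr instance is using the index; adjust the path for your environment):

```shell
# Remove the stale lockfile left behind by a forced shutdown.
# Path taken from the error message; confirm it matches your core's dataDir.
hdfs dfs -rm hdfs://192.168.1.16:8020/PDFIndex/data/index/write.lock
```

After the lockfile is gone, restarting Solr should let the core acquire the lock normally.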

Thanks,
Shawn
