@Shawn Heisey Yeah, deleting the "write.lock" files manually worked in the end.
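For anyone else hitting this, the manual cleanup described above can be sketched with the standard HDFS shell commands. The /solr path and the collection/core names below are hypothetical examples; substitute your own solr.hdfs.home and core paths:

```shell
# Stop the affected Solr nodes first, so no live core still holds its lock.

# Find any leftover Lucene lock files under the Solr home on HDFS
# (assumes indexes live under /solr; adjust to your solr.hdfs.home).
hdfs dfs -ls -R /solr | grep write.lock

# Remove each stale lock reported above, then restart Solr.
# Example path only -- use the paths the previous command printed.
hdfs dfs -rm /solr/mycollection/core_node1/data/index/write.lock
```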
@Walter Underwood Do you have any recent performance evaluations of Solr on
HDFS vs. a local filesystem?
Shawn Heisey wrote on Tue, Aug 28, 2018 at 4:10 AM:
> On 8/26/2018 7:47 PM, zhenyuan wei wrote:
> > I found an exception when running Solr on HDFS. The details:
> > Solr was running on HDFS with document updates running continuously;
> > then kill -9 the Solr JVM, or reboot/shut down the Linux OS, then restart everything.
If you use "kill -9" to stop a Solr instance, the index's write.lock file
can be left behind, and the core will fail to load on restart.
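A graceful stop avoids the problem, because Solr gets a chance to close its cores and release the Lucene lock. A minimal sketch, assuming Solr runs on the default port 8983:

```shell
# Graceful shutdown: Solr closes its cores and releases write.lock cleanly.
bin/solr stop -p 8983

# Avoid this: SIGKILL gives the JVM no chance to release the lock.
# kill -9 <solr-pid>
```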
I accidentally put my Solr indexes on NFS once about ten years ago.
It was 100X slower. I would not recommend that.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Aug 27, 2018, at 1:39 AM, zhenyuan wei wrote:
Thanks for your answer! @Erick Erickson
So it's not recommended to run Solr on NFS (or HDFS) now? Maybe
because of crash errors or performance problems.
I had a look at SOLR-8335; there is no good solution for this
yet, so maybe manual removal is the best option?
Erick Erickson
Because HDFS doesn't follow the file semantics that Solr expects.
There's quite a bit of background here:
https://issues.apache.org/jira/browse/SOLR-8335
Best,
Erick
On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei wrote:
Hi all,
I found an exception when running Solr on HDFS. The details:
Solr was running on HDFS with document updates running continuously;
then I killed the Solr JVM with kill -9, or rebooted/shut down the Linux OS, and restarted everything.
The exception looks like:
2018-08-26 22:23:12.529 ERROR