Re: SolrCloud backup/restore

2016-04-05 Thread Zisis Tachtsidis
Thank you both for the clarification and proposals!

This solrcloud_manager looks very promising. I'll try it out; the shared
filesystem requirement is no issue for me.





SolrCloud backup/restore

2016-04-04 Thread Zisis Tachtsidis
I've tested backup/restore successfully in a SolrCloud installation with a
single node (no replicas). This functionality was added in
https://issues.apache.org/jira/browse/SOLR-6637
Can you do something similar when more replicas are involved? What I'm
looking for is a restore command that restores the index in all replicas of
a collection.
Judging from the code in ReplicationHandler.java and
https://issues.apache.org/jira/browse/SOLR-5750 I assume that more work
needs to be done to achieve this.

Is my understanding correct? If that is the case, I guess an alternative
would be to create a new collection, restore the index and then add
replicas. (I'm using Solr 5.5.0)
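
For reference, the single-core backup/restore I tested goes roughly like the
sketch below via SolrJ, hitting the ReplicationHandler directly (the URL, core
name and backup name are placeholders, so treat this as an outline rather than
exact code):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

SolrClient core = new HttpSolrClient("http://localhost:8983/solr/mycollection_shard1_replica1");

// take a named snapshot of the core's index
ModifiableSolrParams backup = new ModifiableSolrParams();
backup.set("command", "backup");
backup.set("name", "mybackup");            // snapshot name, reused on restore
QueryRequest backupReq = new QueryRequest(backup);
backupReq.setPath("/replication");
core.request(backupReq);

// later: restore the same named snapshot into the core
ModifiableSolrParams restore = new ModifiableSolrParams();
restore.set("command", "restore");
restore.set("name", "mybackup");
QueryRequest restoreReq = new QueryRequest(restore);
restoreReq.setPath("/replication");
core.request(restoreReq);

core.close();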





Re: XJoin, a way to use external data sources with Solr

2016-03-28 Thread Zisis Tachtsidis
Hi Tom,

Thanks for clarifying the purpose of XJoin; it makes sense now. I hope it
makes it into Solr's main branch, as it could prove useful! For the time being
PostFilter covers my needs.





Re: XJoin, a way to use external data sources with Solr

2016-03-08 Thread Zisis Tachtsidis
Hi Charlie, 

This looks like an interesting feature, but I have a couple of questions
before giving it a try. 

I had similar needs - filtering results based on information outside of the
queried Solr collection - and I went down the post-filtering path.
More specifically, I've implemented a *PostFilter* that fetches the info from
outside the current Solr collection and, based on that, filters the normal
search results later on in the *collect(int docNumber)* method. It's
something similar to the approach described at
http://qaware.blogspot.com.tr/2014/11/how-to-write-postfilter-for-solr-49.html

Did you consider such an approach? Do you think there are downsides to the
post-filtering approach? And what extra functionality can I get from XJoin,
if they are doing more or less the same thing?
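
For what it's worth, a rough sketch of the kind of filter I mean is below
(class, field and docValues field names are illustrative, and it targets the
Solr 5.x PostFilter API, so treat it as an outline rather than my exact code):

import java.io.IOException;
import java.util.Set;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.search.IndexSearcher;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.ExtendedQueryBase;
import org.apache.solr.search.PostFilter;

public class ExternalIdPostFilter extends ExtendedQueryBase implements PostFilter {

    private final Set<String> allowedIds;   // fetched from the external source beforehand

    public ExternalIdPostFilter(Set<String> allowedIds) {
        this.allowedIds = allowedIds;
    }

    @Override
    public boolean getCache() { return false; }   // post filters must not be cached

    @Override
    public int getCost() { return 100; }          // cost >= 100 runs it after the main query/filters

    @Override
    public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
        return new DelegatingCollector() {
            private SortedDocValues ids;

            @Override
            public void doSetNextReader(LeafReaderContext context) throws IOException {
                super.doSetNextReader(context);
                ids = context.reader().getSortedDocValues("id");   // assumes docValues on the key field
            }

            @Override
            public void collect(int docNumber) throws IOException {
                if (ids == null) {
                    return;   // no docValues in this segment; the sketch simply drops such docs
                }
                String id = ids.get(docNumber).utf8ToString();
                if (allowedIds.contains(id)) {
                    super.collect(docNumber);   // keep only documents allowed by the external data
                }
            }
        };
    }
}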





Single-sharded SolrCloud vs Lucene indexing speed

2015-11-28 Thread Zisis Tachtsidis
I'm conducting some indexing experiments in SolrCloud and I want to confirm
my conclusions and ask for suggestions on how to improve performance.

My setup is a single-sharded collection with 1 additional replica in
SolrCloud 5.3.1. I'm using SolrJ, and the indexing speed refers to the actual
SolrJ call that adds the document. I've run some indexing tests, and Lucene
indexing is equal to or better than Solr's in every case. The same documents
are sent to both Lucene and Solr, and the same analysis is performed on them.
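
For context, the call I'm timing is roughly the following (Solr 5.3-era SolrJ;
the ZooKeeper hosts, collection and field names are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181");
client.setDefaultCollection("mycollection");

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");
doc.addField("body", largeText);    // largeText stands in for the big document body

long start = System.nanoTime();
client.add(doc);                    // blocks until the leader and its replicas have applied the update
long elapsedMs = (System.nanoTime() - start) / 1_000_000;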

- 2 replicas, leader is a replica on a machine under heavy load => ~3x
slower than Lucene.
- 2 replicas, leader is a replica on a machine under light load => ~2x
slower than Lucene.
- 1 replica on a machine under light load => indexing speed similar to
Lucene.

Conclusions
(*) It seems that the slowest replica determines the indexing speed.
(*) It gets even worse if the slowest replica is the leader. This makes sense
if it's true that the leader forwards the request to the remaining replicas
only after it has finished indexing locally.

Regarding improvements
(*) I'm indexing pretty big documents, 0.5MB ...


Re: Index directory containing only segments.gen

2015-02-13 Thread Zisis Tachtsidis
Erick Erickson wrote
> OK, I think this is the root of your problem:
> 
> bq:  Everything was setup using the - now deprecated - tags <cores> and
> <core> inside solr.xml.
> 
> There are a bunch of ways this could go wrong. I'm pretty sure you
> have something that would take quite a while to untangle, so unless
> you have a _very_ good reason for making this work, I'd blow
> everything away.

I've started playing with SolrCloud before the new solr.xml made its
appearance (in the example files of the 4.4 distribution, if I'm not
mistaken), and since the old format was classified only as deprecated I
decided to postpone the transition to the new solr.xml until the migration to
Solr 5.0. Anyway, are you saying that the use of the new solrcloud-friendly
configuration file comes with changes in SolrCloud behavior?


Erick Erickson wrote
> If you're using an external Zookeeper shut it off and 'rm -rf
> /tmp/zookeeper'. If using embedded, you can remove zoo_data under your
> SOLR_HOME.

Do you mean getting rid of the Zookeeper snapshot and transaction logs,
basically clearing things out and removing zknodes like clusterstate.json,
overseer and the like?


Erick Erickson wrote
> OK, now use the Collections API to create your collection, see:
> https://cwiki.apache.org/confluence/display/solr/Collections+API
> (don't forget to push your configs to Zookeeper first) and go from there.

I've successfully tried your proposed approach using the new solr.xml, but
I've bypassed the Collections API and added core.properties files inside my
collection directories. The directories contain no other files, and the
configuration has been preloaded into Zookeeper. I prefer to have everything
ready before starting the Solr servers. Do you see anything unusual there?
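
For reference, each core.properties I drop in is just a few lines along these
lines (the names are illustrative, not my exact values):

# core.properties placed in the core's directory (core discovery)
name=mycollection_shard1_replica1
collection=mycollection
shard=shard1
coreNodeName=core_node1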

One last thing: what exactly is HttpShardHandlerFactory responsible for?
There was no such definition in the deprecated solr.xml I was using.

Thanks Erick,
Zisis T.







Re: Index directory containing only segments.gen

2015-02-12 Thread Zisis Tachtsidis
From the logs I've got one instance failing as described in my first comment
and the other two failing during PeerSync recovery when trying to
communicate with the server that was missing the segments_* files. The
exception follows:


org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://server:host/solr/core
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:566)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
    at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:157)
    at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:119)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.client.ClientProtocolException
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:909)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
    ... 10 more
Caused by: org.apache.http.ProtocolException: Invalid header: ,code=500}
    at org.apache.http.impl.io.AbstractMessageParser.parseHeaders(AbstractMessageParser.java:232)
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:267)
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:713)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:518)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
    ... 13 more






Re: Index directory containing only segments.gen

2015-02-12 Thread Zisis Tachtsidis
Well, I don't know if I'm being helpful, but here goes.
My clusterstate.json actually has no leader for the shard in question. I
have 2 nodes as "recovery_failed" and one as "down". No leaders there. I've
not used the core admin or collections API to create anything. Everything was
set up using the - now deprecated - tags <cores> and <core> inside solr.xml.

Also, the index directories are different, since I ended up copying the index
from the one node that still had it to the other two and restarting again.






Index directory containing only segments.gen

2015-02-12 Thread Zisis Tachtsidis
I'm using SolrCloud 4.10.3 and the setup is simple: 3 nodes with 1 shard.
After a rolling restart of the Solr cluster I ended up with 2 failing nodes
reporting the following:

org.apache.solr.servlet.SolrDispatchFilter
null:org.apache.solr.common.SolrException: SolrCore 'core' is not available due to init failure: Error opening new searcher
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
    at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1574)
    at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1686)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:853)
    ... 8 more
Caused by: java.nio.file.NoSuchFileException: /path/to/index/segments_1
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
    at java.nio.channels.FileChannel.open(FileChannel.java:287)
    at java.nio.channels.FileChannel.open(FileChannel.java:334)
    at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:196)
    at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:341)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:454)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:906)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:752)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:450)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:792)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
    at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
    at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
    at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
    at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1537)
    ... 10 more

Checking the index directory of each node, I found that only *segments.gen*
was inside. What I could not determine is how I ended up with this single
file; I could not find anything related in the logs. The 3rd node had its
index intact.
Has anyone else encountered something similar?





Re: PostingsHighlighter highlighted snippet size (fragsize)

2015-01-28 Thread Zisis Tachtsidis
It seems that a solution has been found.

PostingsHighlighter uses Java's SENTENCE BreakIterator by default, so it
breaks the snippets into per-sentence fragments.
In my text_en analysis chain, though, I was using a filter that lowercases
the input, and this seems to mess with the logic of the SENTENCE
BreakIterator. Removing the filter did the trick.
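
For anyone curious, the lowercase effect can be reproduced with plain
java.text.BreakIterator outside Solr; a standalone snippet like the one below
(my own illustration, not Solr code) shows how a period followed by a
lowercase word tends not to be treated as a sentence boundary:

import java.text.BreakIterator;
import java.util.Locale;

public class SentenceBreakDemo {
    private static int countSentences(String text) {
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.US);
        it.setText(text);
        int count = 0;
        for (int end = it.next(); end != BreakIterator.DONE; end = it.next()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // a capitalized follow-up word is treated as the start of a new sentence
        System.out.println(countSentences("First point. Second point."));
        // lowercased text tends to suppress the break, merging the "sentences"
        System.out.println(countSentences("first point. second point."));
    }
}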

Apart from that, there is a new issue now. I'm trying to search on one field
and highlight another, and this does not seem to work even if I use the
exact same analyzers for both fields. I get the correct results in the
highlighting section, but there are no highlights. Digging deeper, I've found
inside PostingsHighlighter.highlightFieldsAsObjects() (line 393 in version
4.10.3) that the fields to be highlighted are (I guess) the intersection of
the query's term set (the fields used in the search query) and the set of
fields to be highlighted (defined by the hl.fl param). So, unless I use the
field to be highlighted in the search query, I get no highlight.
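
In other words, a request along these lines does return highlights, while
searching only on the other field does not (other_field is an illustrative
name, not from my schema):

/select?q=other_field:introduction OR highlighted_text:introduction
    &hl=true&hl.fl=highlighted_text&wt=json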





PostingsHighlighter highlighted snippet size (fragsize)

2015-01-20 Thread Zisis Tachtsidis
Hi all,

I'm using SolrCloud 4.10.0 and trying to incorporate PostingsSolrHighlighter.
One issue I'm having is that I cannot get the functionality of "hl.fragsize"
with PostingsSolrHighlighter. How can I limit the size of the highlighted
text? I get highlighted results, but their snippet size varies and can be
quite large in some cases (>1000 chars). Note that I've done this
successfully using hl.fragsize and the default Solr highlighter.

The field I want highlighting on is "highlighted_text", which uses the
default "text_en" definition. I've even tried using only StandardTokenizer
(no filters) for the index/query chains to avoid the issues described at
https://issues.apache.org/jira/browse/LUCENE-4641.

The highlighter is defined in solrconfig.xml, with all other highlight
components commented out.
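
Roughly, the relevant schema and solrconfig.xml pieces look like the sketch
below (attribute values are illustrative rather than my exact config;
PostingsHighlighter needs offsets stored with the postings):

<!-- schema.xml: the highlighted field stores offsets in the postings -->
<field name="highlighted_text" type="text_en" indexed="true" stored="true"
       storeOffsetsWithPositions="true"/>

<!-- solrconfig.xml: use the postings-based highlighter implementation -->
<searchComponent class="solr.HighlightComponent" name="highlight">
  <highlighting class="org.apache.solr.highlight.PostingsSolrHighlighter"/>
</searchComponent>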
My search query looks like
/select?q=highlighted_text:introduction&wt=json&indent=true
    &hl=true&hl.fl=highlighted_text&hl.simple.pre=&hl.simple.post=





Re: SolrCloud shard leader elections - Altering zookeeper sequence numbers

2015-01-13 Thread Zisis Tachtsidis
Daniel Collins wrote
> Is it important where your leader is?  If you just want to minimize
> leadership changes during rolling re-start, then you could restart in the
> opposite order (S3, S2, S1).  That would give only 1 transition, but the
> end result would be a leader on S2 instead of S1 (not sure if that
> important to you or not).  I know it's not a "fix", but it might be a
> workaround until the whole leadership moving is done?

I think that rolling-restarting the machines in the opposite order
(S3, S2, S1) will result in S3 being the leader. It's a valid approach, but
wouldn't I have to revert to the original order (S1, S2, S3) to achieve the
same result on the following rolling restart? That adds operational cost and
complexity that I want to avoid.


Erick Erickson wrote
>> Just skimming, but the problem here that I ran into was with the
>> listeners. Each _Solr_ instance out there is listening to one of the
>> ephemeral nodes (the "one in front"). So deleting a node does _not_
>> change which ephemeral node the associated Solr instance is listening
>> to.
>>
>> So, for instance, when you delete S2..n-01 and re-add it, S2 is
>> still looking at S1n-00 and will continue looking at
>> S1...n-00 until S1n-00 is deleted.
>>
>> Deleting S2..n-01 will wake up S3 though, which should now be
>> looking at S1n-000. Now you have two Solr listeners looking at
>> the same ephemeral node. The key is that deleting S2...n-01 does
>> _not_ wake up S2, just any solr instance that has a watch on the
>> associated ephemeral node.

Thanks for the info, Erick. I wasn't aware of this "linked-list" structure of
listeners between the zk nodes. Based on what you've said, though, I've
changed my implementation a bit and it seems to be working at first glance.
Of course it's not reliable yet, but it looks promising.

My original attempt
> S1:-n_00 (no code running here)
> S2:-n_04 (code deleting zknode -n_01 and creating
> -n_04)
> S3:-n_03 (code deleting zknode -n_02 and creating
> -n_03) 

has been changed to 
S1:-n_00 (no code running here)
S2:-n_03 (code deleting zknode -n_01 and creating
-n_03 using EPHEMERAL_SEQUENTIAL)
S3:-n_02 (no code running here) 

Once S1 is shut down, S3 becomes the leader, since it now listens to S1,
according to what you've said.

The original reason I pursued this "minimize leadership changes" quest was
that the changes _could_ lead to "data loss" in some scenarios. I'm not
entirely sure, though, so correct me if I'm wrong, but here is my thinking.

If indexing requests keep arriving during a rolling restart, could there be a
window during the current leader's shutdown in which the leader-to-be node
does not have time to sync with the leader that is shutting down? In that
case everyone would sync to the new leader and miss some updates. I've seen
an installation where the replicas' index sizes differed, and the situation
deteriorated over time.






SolrCloud shard leader elections - Altering zookeeper sequence numbers

2015-01-12 Thread Zisis Tachtsidis
SolrCloud uses ZooKeeper sequence flags to keep track of the order in which
nodes register themselves as leader candidates. The node with the lowest
sequence number wins as leader of the shard.

What I'm trying to do is keep the leader re-assignments to a minimum during
a rolling restart. To that end, I change the zk sequence numbers on the
SolrCloud nodes when all nodes of the cluster are up and active. I'm using
Solr 4.10.0 and I'm aware of SOLR-6491, which has a similar purpose, but I'm
trying to do it from "outside", using the existing APIs without editing the
Solr source code.

== TYPICAL SCENARIO ==
Suppose we have 3 Solr instances S1,S2,S3. They are started in the same
order and the zk sequences assigned have as follows
S1:-n_00 (LEADER)
S2:-n_01
S3:-n_02

In a rolling restart we'll get S2 as leader (after S1 shutdown), then S3
(after S2 shutdown) and finally S1(after S3 shutdown), 3 changes in total.

== MY ATTEMPT ==
By using SolrZkClient and the Zookeeper multi API  I found a way to get rid
of the old zknodes that participate in a shard's leader election and write
new ones where we can assign the sequence number of our liking. 

S1:-n_00 (no code running here)
S2:-n_04 (code deleting zknode -n_01 and creating
-n_04)
S3:-n_03 (code deleting zknode -n_02 and creating
-n_03)

In a rolling restart I'd expect to have S3 as leader (after S1 shutdown), no
change (after S2 shutdown) and finally S1 (after S3 shutdown), that is, 2
changes. This stays constant no matter how many servers are added to
SolrCloud, while in the first scenario the number of re-assignments equals
the number of Solr servers.

The problem occurs when S1 (LEADER) is shut down. The elections that take
place still set S2 as leader; it's as if the new sequence numbers are
ignored. When I go to /solr/#/~cloud?view=tree the new sequence numbers are
listed under "/collections", based on which S3 should have become the leader.
Do you have any idea why the new state is not acknowledged during the
elections? Is something cached? Or, to put it bluntly, do I have any chance
down this path? If not, what are my options? Is it possible to apply all the
patches under SOLR-6491 in isolation and continue from there?

Thank you. 

Extra info which might help follows
1. Some logging related to leader elections after S1 has been shut down
S2 - org.apache.solr.cloud.SyncStrategy  Leader's attempt to sync with shard failed, moving to the next candidate
S2 - org.apache.solr.cloud.ShardLeaderElectionContext  We failed sync, but we have no versions - we can't sync in that case - we were active before, so become leader anyway

S3 - org.apache.solr.cloud.LeaderElector  Our node is no longer in line to be leader

2. And some sample code on how I perform the ZK re-sequencing

   // Read current zk nodes for a specific collection
   solrServer.getZkStateReader().getZkClient().getSolrZooKeeper()
       .getChildren("/collections/core/leader_elect/shard1/election", true);
   // node deletion
   Op.delete(path, -1);
   // node creation
   Op.create(createPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
       CreateMode.EPHEMERAL_SEQUENTIAL);
   // Perform operations
   solrServer.getZkStateReader().getZkClient().getSolrZooKeeper().multi(opsList);
   solrServer.getZkStateReader().updateClusterState(true);






Re: SolrCloud use of "min_rf" through SolrJ

2014-11-03 Thread Zisis Tachtsidis
In case anyone else runs into this: I've managed to make it work. I hadn't
noticed in the ticket discussion that the specific feature is enabled only
when min_rf >= 2; I was setting min_rf=1. It goes without saying that you
should also have at least 2 replicas in your SolrCloud configuration. The
actual code I've used to make it return "rf" is:
UpdateRequest req = new UpdateRequest();
req.setParam(UpdateRequest.MIN_REPFACT, "2");
req.add(doc);
NamedList response = solrServer.request(req);
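
The achieved replication factor can then be read back with something like the
line below (assuming solrServer is the CloudSolrServer; the collection name is
a placeholder):

int rf = solrServer.getMinAchievedReplicationFactor("collection_name", response);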







SolrCloud use of "min_rf" through SolrJ

2014-10-20 Thread Zisis Tachtsidis
Hi all,

I'm trying to make use of the "min_rf" (minimum replication factor) feature
described in https://issues.apache.org/jira/browse/SOLR-5468. According to
the ticket, all that is needed is to pass "min_rf" param into the update
request and get back the "rf" param from the response or even easier make
use of CloudSolrServer.getMinAchievedReplicationFactor().

I'm using SolrJ's CloudSolrServer but I couldn't find any way to pass
"min_rf" using the available add() methods when sending a document to Solr,
so I resorted to the following

UpdateRequest req = new UpdateRequest();
req.setParam(UpdateRequest.MIN_REPFACT, "1");
req.add(doc);
UpdateResponse response = req.process(cloudSolrServer);
int rf = cloudSolrServer.getMinAchievedReplicationFactor("collection_name",
response.getResponse());

Still the returned "rf" value is always -1. How can I utilize "min_rf"
through SolrJ?
I'm using Solr 4.10.0 with a collection that has 2 replicas (one leader, one
replica).

Thanks





BlendedInfixSuggester index write.lock failures on core reload

2014-08-14 Thread Zisis Tachtsidis
Hi all, 

I'm using Solr 4.9.0 and have set up a spellcheck component for returning
suggestions. The configuration inside my solr.SpellCheckComponent uses
org.apache.solr.spelling.suggest.Suggester with the
org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory lookup, along
with a custom value for indexPath.
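
Roughly, the configuration looks like the sketch below (parameter names and
values here are illustrative rather than my exact config):

<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory</str>
    <str name="indexPath">/path/to/suggester/index</str>
    <str name="field">suggest_field</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>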


The server starts properly and data gets indexed, but once I hit the
'Reload' button in 'Core Admin' I get the following error:

null:org.apache.solr.common.SolrException: Error handling 'reload' action
    at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:791)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:224)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:187)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
    at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:89)
    at com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:156)
    at com.caucho.server.webapp.AccessLogFilterChain.doFilter(AccessLogFilterChain.java:95)
    at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:289)
    at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:838)
    at com.caucho.network.listen.TcpSocketLink.dispatchRequest(TcpSocketLink.java:1345)
    at com.caucho.network.listen.TcpSocketLink.handleRequest(TcpSocketLink.java:1301)
    at com.caucho.network.listen.TcpSocketLink.handleRequestsImpl(TcpSocketLink.java:1285)
    at com.caucho.network.listen.TcpSocketLink.handleRequests(TcpSocketLink.java:1193)
    at com.caucho.network.listen.TcpSocketLink.handleAcceptTaskImpl(TcpSocketLink.java:992)
    at com.caucho.network.listen.ConnectionTask.runThread(ConnectionTask.java:117)
    at com.caucho.network.listen.ConnectionTask.run(ConnectionTask.java:93)
    at com.caucho.network.listen.SocketLinkThreadLauncher.handleTasks(SocketLinkThreadLauncher.java:169)
    at com.caucho.network.listen.TcpSocketAcceptThread.run(TcpSocketAcceptThread.java:61)
    at com.caucho.env.thread2.ResinThread2.runTasks(ResinThread2.java:173)
    at com.caucho.env.thread2.ResinThread2.run(ResinThread2.java:118)
Caused by: org.apache.solr.common.SolrException: Unable to reload core: autocomplete
    at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:911)
    at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:660)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:789)
    ... 24 more
Caused by: org.apache.solr.common.SolrException
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:868)
    at org.apache.solr.core.SolrCore.reload(SolrCore.java:426)
    at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:650)
    ... 25 more
Caused by: java.lang.RuntimeException
    at org.apache.solr.spelling.suggest.fst.BlendedInfixLookupFactory.create(BlendedInfixLookupFactory.java:102)
    at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:105)
    at org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:636)
    at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:651)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:851)
    ... 27 more

Debugging the Solr code, I found that the original exception comes from the
IndexWriter construction inside AnalyzingInfixSuggester.java (more
specifically, org.apache.lucene.store.Lock:89). The exception is "Lock obtain
timed out: NativeFSLock@$indexPath/write.lock", but it seems to be hidden by
the RuntimeException thrown by BlendedInfixLookupFactory.

If I use the default "indexPath" I get another error (again write-lock
related) in the logs:
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@$indexPath/blendedInfixSuggesterIndexDir/write.lock
    at org.apache.lucene.store.Lock.obtain(Lock.java:89)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:724)
    at org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.build(AnalyzingInfixSuggester.java:222)
    at org.apache.lucene.search.suggest.Lookup.build(Lookup.java:190)
    at org.apache.solr.spelling.suggest.Suggester.build(Suggester.java:142)
    at org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener.buildSpellIndex(SpellCheckComponent.java:737)
    at org.apache.solr.handler.component.SpellCheckComponent$SpellChecker