Re: Adding UniqueKey to an existing Solr 6.4 Index

2017-09-15 Thread Erick Erickson
Not really. Do note that atomic updates require that
1> all _original_ fields (i.e., fields that are _not_ destinations for
copyFields) have stored=true
2> no destination of a copyField has stored=true
3> Solr composes the original document from the stored fields and
re-indexes it. This last point just means that atomic updates are actually
slightly more work than just re-indexing the doc from the system-of-record
(as far as Solr is concerned).

The decision to use atomic updates is up to you, of course; the slight
extra work may be better than fetching the docs from the original
source...
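As a sketch (the id and field values here are made up for illustration), an atomic update only carries the uniqueKey plus the fields being changed; Solr rebuilds the rest of the doc from stored values:

```python
import json

# Atomic update: send only the uniqueKey plus the fields to change.
# The {"set": ...} modifier replaces that field's value; Solr reads the
# remaining fields back from their stored values and re-indexes the doc.
doc = {
    "id": "doc-123",                       # existing uniqueKey value
    "SignatureField": {"set": "prod-456"}  # atomic "set" on one field
}
payload = json.dumps([doc])
print(payload)
```

That payload is then POSTed to the /update handler with Content-Type: application/json.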

Best,
Erick

On Fri, Sep 15, 2017 at 10:38 AM, Pankaj Gurumukhi wrote:
> Hello,
>
> I have a single-node Solr 6.4 server with an index of 100 million documents. 
> The default "id" field is the primary key of this index. Now, I would like to set up 
> an update process to insert new documents and update existing documents 
> based on the value of another field (say ProductId) that is 
> different from the default "id". To do that, I want to use the Solr-provided 
> de-duplication method by adding a new SignatureField that uses the 
> ProductId as the unique key. Considering the millions of documents I have, I would 
> like to ask if it's possible to set up a de-duplication mechanism in an 
> existing Solr index with the following steps:
>
> a. Add a new field SignatureField, and configure it as the uniqueKey in the Solr 
> schema.
>
> b. Run an atomic-update process on all documents to populate the value of 
> this new SignatureField.
>
> Is there an easier/better way to add a SignatureField to an existing large 
> index?
>
> Thx,
> Pankaj
>


Re: solr 6.6.1: Lock held by this virtual machine

2017-09-15 Thread mshankar
Please note that we are using Solr 6.4.2.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: solr 6.6.1: Lock held by this virtual machine

2017-09-15 Thread mshankar
Hi Erick,

We are seeing the same issue on our production as well. Changing the
LockWriteTimeout to 5000 did not help. Please let me know if there are other
things we can try out to recover from this issue.

Thanks.





Adding UniqueKey to an existing Solr 6.4 Index

2017-09-15 Thread Pankaj Gurumukhi
Hello,

I have a single-node Solr 6.4 server with an index of 100 million documents. 
The default "id" field is the primary key of this index. Now, I would like to set up 
an update process to insert new documents and update existing documents based 
on the value of another field (say ProductId) that is different 
from the default "id". To do that, I want to use the Solr-provided 
de-duplication method by adding a new SignatureField that uses the ProductId 
as the unique key. Considering the millions of documents I have, I would like to ask 
if it's possible to set up a de-duplication mechanism in an existing Solr index 
with the following steps:

a. Add a new field SignatureField, and configure it as the uniqueKey in the Solr 
schema.

b. Run an atomic-update process on all documents to populate the value of 
this new SignatureField.

Is there an easier/better way to add a SignatureField to an existing large 
index?
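For reference, the de-duplication setup I have in mind follows the Solr de-duplication documentation, roughly like this in solrconfig.xml (untested sketch; the field names come from my schema):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">SignatureField</str>
    <bool name="overwriteDupes">true</bool>
    <str name="fields">ProductId</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```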

Thx,
Pankaj



Re: Two joins from different cores with OR

2017-09-15 Thread Erick Erickson
Two things:

1> The query language does not implement pure boolean logic, although
you can get that behavior with careful parenthesizing, see:
https://lucidworks.com/2011/12/28/why-not-and-or-and-not/

2> add debug=query to the URL and see how things are actually parsed,
to chase down why you are getting the results you are.
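For example, the debug parameter is just another query parameter on the select URL (the core name and query below are placeholders):

```python
from urllib.parse import urlencode

# Build a select URL with debug=query so the response includes the
# parsed query tree (look at the "debug" section of the response).
params = {
    "q": 'visibility_string_mv:"Test_B2BUnit"',
    "debug": "query",
    "wt": "json",
}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(url)
```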

Best,
Erick

On Fri, Sep 15, 2017 at 2:02 AM, Сергей Твердохлеб  wrote:
> Hi all,
>
> I have two joins and I need to link them using OR statement.
>
> The first one is
>
>> fq={!join fromIndex=master_Category_flip from=manufactureName_string
>> to=nameString_string}visibility_string_mv:"Test_B2BUnit" AND
>> itemtype_string:"Model"
>
> which returns result A
>
> The second one is
>
>> fq=({!join fromIndex=master_Part_flip from=manufacturerNameFacet_string_mv
>> to=nameString_string}visibility_string_mv:"Test_B2BUnit")
>
>  which returns result B
>
> Using them like
>
>> fq=({!join fromIndex=master_Category_flip from=manufactureName_string
>> to=nameString_string}visibility_string_mv:"Test_B2BUnit" AND
>> itemtype_string:"Model") OR ({!join fromIndex=master_Part_flip
>> from=manufacturerNameFacet_string_mv
>> to=nameString_string}visibility_string_mv:"Test_B2BUnit")
>
>  It returns only result B
>
> Using it with AND instead of OR returns no result.
> Using 2 different fq statements also returns no results.
>
> With different data:
> First query returns A B C D
> Second query returns A B
> With OR returns only A B
> With AND returns no results
> With 2 fq's returns A B.
>
> Is there any way to get all results as I need?
> --
> Regards,
> Sergey Tverdokhleb
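P.S. If you do need a single fq with the two joins OR'ed, one pattern worth trying is to put each join's search clause in the v local parameter, so each {!join} clearly owns its own clause (untested sketch using your field names):

```text
fq=({!join fromIndex=master_Category_flip from=manufactureName_string
     to=nameString_string v='visibility_string_mv:"Test_B2BUnit" AND itemtype_string:"Model"'})
   OR
   ({!join fromIndex=master_Part_flip from=manufacturerNameFacet_string_mv
     to=nameString_string v='visibility_string_mv:"Test_B2BUnit"'})
```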


Re: Two separate instances sharing the same zookeeper cluster

2017-09-15 Thread James Keeney
Mike -

Thank you, this was very helpful. I've been doing some research and
experimenting.

As currently configured, Solr is launched as a service. I looked at the
solr.in.sh file in /etc/default, and we are running with a list of servers
for the ZooKeeper cluster.

So I think that translates to -z zookeeper1,zookeeper2,zookeeper3 (these
are defined in the hosts file).

If I understand what I am reading, setting a specific chroot path would
be done explicitly by adding the path to the end of the ZooKeeper server
list:

-z zookeeper1,zookeeper2,zookeeper3/solr_dev, for example.

However, I'm not sure how to switch the production cluster to explicitly
reference the directory it currently uses. Do I need to set up the directory
first?

As per this?
https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html#TakingSolrtoProduction-ZooKeeperchroot
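If I'm reading that page right, the commands would be roughly like this (untested; the config-set name is a placeholder, and I believe newer Solr releases include the zk mkroot helper):

```shell
# Create the chroot znode for production
bin/solr zk mkroot /solr_prod -z zookeeper1,zookeeper2,zookeeper3

# Re-upload each config set under the new chroot
bin/solr zk upconfig -n my_collection_config -d /path/to/conf \
    -z zookeeper1,zookeeper2,zookeeper3/solr_prod
```

and then ZK_HOST in solr.in.sh would become zookeeper1,zookeeper2,zookeeper3/solr_prod.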

Would I set up, say, solr_prod, upconfig all the configs, switch over one node,
and then migrate over the rest of the nodes, ending with the leader?

Would that then move production to solr_prod as the config base?

Once that is done I would then setup the dev.

Does any of this make sense?

Jim K.


On Thu, Sep 14, 2017 at 4:08 PM Mike Drob  wrote:

> When you specify the zk string for a solr instance, you typically include a
> chroot in it. I think the default is /solr, but it doesn't have to be, so
> you should be able to run with -z zk1:2181/solr-dev and /solr-prod
>
>
> https://lucene.apache.org/solr/guide/6_6/setting-up-an-external-zookeeper-ensemble.html#SettingUpanExternalZooKeeperEnsemble-PointSolrattheinstance
>
> On Thu, Sep 14, 2017 at 3:01 PM, James Keeney wrote:
>
> > I have a staging and a production Solr cluster. I'd like to have them use
> > the same ZooKeeper cluster. It seems like this is possible if I can set a
> > different directory for the second cluster. I've looked through the
> > documentation, though, and I can't quite figure out where to set that up. As
> > a result, my staging cluster nodes keep trying to add themselves to the
> > production cluster.
> >
> > If someone could point me in the right direction?
> >
> > Jim K.
> > --
> > Jim Keeney
> > President, FitterWeb
> > E: j...@fitterweb.com
> > M: 703-568-5887
> >
> > *FitterWeb Consulting*
> > *Are you lean and agile enough? *
> >
>
-- 
Jim Keeney
President, FitterWeb
E: j...@fitterweb.com
M: 703-568-5887

*FitterWeb Consulting*
*Are you lean and agile enough? *


Re: Meet CorruptIndexException while shutdown one node in Solr cloud

2017-09-15 Thread Erick Erickson
bq: This means Solr may get update request during shutdown. I think
that is the reason we get  CorruptIndexException.

This is unlikely; Solr should handle this quite well. More likely you
encountered some other issue. One possibility is that you had a disk-full
situation, and that was the root of your problem.
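If you want to verify whether the index really is corrupt, Lucene ships a read-only checker you can point at a copy of the index (adjust the jar version and paths to your install, and only run it while Solr is stopped):

```shell
# Read-only structural check of one core's index directory
java -cp lucene-core-4.10.4.jar org.apache.lucene.index.CheckIndex \
    /path/to/solr/collection1/data/index
```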

I'll add as an aside that having openSearcher set to true in your
autoCommit setting _and_ setting autoSoftCommit is unnecessary; choose
one or the other.

See: 
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
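In other words, something along these lines (the times here are illustrative only, not recommendations):

```xml
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
```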

Best,
Erick

On Fri, Sep 15, 2017 at 3:55 AM, wg85907  wrote:
> Hi team,
> Currently I am using Solr 4.10 in tomcat. I have a one shard Solr
> Cloud with 3 replicas. I set heap size to 15GB for each node. As I have big
> data volume and large amount of query request. So always meet frequent full
> GC issue. We have checked this and found that many memory was used as field
> cache by Solr. To avoid this, we begin to reboot tomcat instance one by one
> in schedule. We don't kill any process but run script  "catalina.sh stop" to
> shutdown tomcat gracefully. To keep message not pending,  we receive message
> from user all the time and send update request to Solr once get new message.
> This means Solr may get update request during shutdown. I think that is the
> reason we get  CorruptIndexException. Since we begin to do the reboot, we
> always get CorruptIndexException. The trace is as below:
> 2017-09-14 04:25:49,241
> ERROR[commitScheduler-15-thread-1][R31609](CommitTracker) - auto commit
> error...:org.apache.solr.common.SolrException: Error opening new searcher
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
> at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:607)
> at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.index.CorruptIndexException:
> liveDocs.count()=33574 info.docCount=34156 info.getDelCount()=584
> (filename=_1uvck_k.del)
> at
> org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:96)
> at
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:116)
> at
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:144)
> at
> org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
> at
> org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3271)
> at
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3262)
> at
> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:421)
> at
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:279)
> at
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:251)
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1476)
> ... 10 more
>
>
> As we shut down Solr gracefully, I think Solr should be robust enough
> to handle this case. Please give me some advice about why this happens and
> what we can do to avoid it. PS: below is some of our solrconfig content:
>
> 
> 6
> true
> 
> 
> 1000
> 
>
> Regards,
> Geng, Wei
>
>
>


Re: Provide suggestion on indexing performance

2017-09-15 Thread Shawn Heisey
On 9/11/2017 9:06 PM, Aman Tandon wrote:
> We want to know about indexing performance in the scenarios mentioned
> below; consider a total of 10 string fields and a total of
> 10 million documents.
>
> 1) indexed=true, stored=true
> 2) indexed=true, docValues=true
>
> Which one should we prefer in terms of indexing performance, please share
> your experience.

There are several settings in the schema for each field, things like
indexed, stored, docValues, multiValued, and others.  You should base
your choices on what you need Solr to do.  Choosing these settings based
purely on desired indexing speed may result in Solr not doing what you
want it to do.

When the indexing system sends data to Solr with several threads or
processes, Solr is *usually* capable of indexing data faster than most
systems can supply it.  The more settings you disable on a field, the
faster Solr will be able to index.

It is not possible to provide precise numbers, because performance
depends on many factors, some of which you may not even know until you
build a production system.

https://lucidworks.com/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

All that said ... docValues MIGHT be a little bit faster than stored,
because stored data is compressed, and the compression takes CPU time. 
On a fully populated production system, that statement might turn out to
be wrong.  There may be factors that result in stored fields working
better.  The best way to decide is to try it both ways with all your data.
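Purely as an illustration (made-up field names), the two variants from your question look like this in schema.xml:

```xml
<!-- 1) values retrievable from stored fields -->
<field name="title_stored" type="string" indexed="true" stored="true"/>

<!-- 2) values retrievable (and sortable/facetable) via docValues -->
<field name="title_dv" type="string" indexed="true" stored="false" docValues="true"/>
```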

Thanks,
Shawn



Re: Solr list operator

2017-09-15 Thread Shawn Heisey
On 9/12/2017 7:21 AM, Nick Way wrote:
> Thank you very much Erik, Walter and Susheel.
>
> To be honest I didn't really understand the suggested routes (due to my
> limited knowledge) but managed to get things working by inserting my data
> with a double comma at the beginning eg:
>
> custom field "listOfIDs" = ",,1,2,4,33"
>
> and then searching for "*,myVal,*" which seems to work.

If you're going to index it as a single string and leave it that way,
that will be your only real option.  A wildcard search is generally
slower than other options.

Instead, you should break that list apart in your indexing software, and
index multiple values for that field instead of a single value
containing commas.  Then you can search for single values easily and
quickly.  To do this, the field must be marked as multiValued in your
schema.

Alternatively, you could use a TextField type in your schema and include
a tokenizer or filter that will split on the commas.  If the field is
stored, the search results would contain the original comma separated
string, not the separate values.  Also, if the list will always be
numbers, you would not be able to do a numeric range query, because the
values would be strings, not numbers.
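As an illustration (using the value from your example), the indexing code would split the string into discrete values before sending, so the multiValued field receives a list instead of one comma-joined string:

```python
raw = ",,1,2,4,33"

# split(",") yields empty strings for the leading sentinel commas;
# drop those and keep only the real IDs.
values = [v for v in raw.split(",") if v]

# The multiValued field then gets separate values to match exactly.
doc = {"id": "doc-1", "listOfIDs": values}
print(values)
```

A search for listOfIDs:4 then matches directly, with no wildcards.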

> Out of interest does anyone have experience accessing Solr via Adobe
> Coldfusion (as this is what we do) - and it would be helpful to have a
> contact for some Solr consulting from time to time, if anyone might be
> interested in that?

I have never done any work in ColdFusion.

There are some CF Solr libraries.  This page also says that version 9
includes it natively:

https://wiki.apache.org/solr/IntegratingSolr

Thanks,
Shawn



Re: solr Facet.contains

2017-09-15 Thread Michael Kuhlmann
What is the field type? Which analyzers are configured?
How do you split on "~"? (You have to do it yourself, or configure
a tokenizer for that.)
What do you get when you don't filter your facets?
What do you mean by "it is not working"? What is your result now?

-Michael
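For reference, the request shape I'd expect for facet.contains looks like this (the field name is a placeholder; also double-check that your Solr version supports facet.contains at all, since 4.8.0 predates it if I remember correctly):

```text
q=*:*&facet=true&facet.field=myfield&facet.contains=maha&facet.contains.ignoreCase=true
```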


 On 15.09.2017 at 13:43, vobium wrote:
> Hello,
>
> I want to limit my facet data using a substring (returning only values that
> contain the specified substring). My Solr version is 4.8.0.
>
> e.g. for a doc with this type of string (a field with this type of data is
> multivalued and split on "~"):
>
>  India/maha/mumbai~India/gujarat/badoda
>  India/goa/xyz
>  India/raj/jaypur
>  1236/maha/890~India/maha/kolhapur
>  India/maha/mumbai
>  India/maha/nashik
>  Uk/Abc/Cde
>
>
> Expected facet data that contain maha as a substring:
> output:
> India/maha/mumbai (2)
>  India/maha/kolhapur(1)
>  India/maha/nashik(1)
> 1236/maha/890(1)
>
> I tried it using facet.contains, but it is not working,
> so please suggest a solution for this issue.
>
>
>




solr Facet.contains

2017-09-15 Thread vobium
Hello,

I want to limit my facet data using a substring (returning only values that
contain the specified substring). My Solr version is 4.8.0.

e.g. for a doc with this type of string (a field with this type of data is
multivalued and split on "~"):

 India/maha/mumbai~India/gujarat/badoda
 India/goa/xyz
 India/raj/jaypur
 1236/maha/890~India/maha/kolhapur
 India/maha/mumbai
 India/maha/nashik
 Uk/Abc/Cde


Expected facet data that contain maha as a substring:
output:
India/maha/mumbai (2)
 India/maha/kolhapur(1)
 India/maha/nashik(1)
1236/maha/890(1)

I tried it using facet.contains, but it is not working,
so please suggest a solution for this issue.





solr Facet.contains

2017-09-15 Thread vobium
Hello,

I want to limit my facet data using a substring (returning only values that
contain the specified substring). My Solr version is 4.8.0.

e.g. for a doc with this type of string (a field with this type of data is
multivalued and split on "~"):

 India/maha/mumbai~India/gujarat/badoda
 India/goa/xyz
 India/raj/jaypur
 India/maha/kolhapur
 India/maha/mumbai
 India/maha/nashik

Expected facet data that contain *maha* as a substring:
output:
India/maha/mumbai (2)
 India/maha/kolhapur(1)
 India/maha/nashik(1)

I tried it using facet.contains, but it is not working,
so please suggest a solution for this issue.









Meet CorruptIndexException while shutdown one node in Solr cloud

2017-09-15 Thread wg85907
Hi team,
Currently I am using Solr 4.10 in Tomcat. I have a one-shard Solr
Cloud with 3 replicas. I set the heap size to 15GB for each node. As I have a
big data volume and a large amount of query traffic, I always run into
frequent full-GC issues. We checked this and found that a lot of memory was
used by Solr's field cache. To avoid this, we began rebooting the Tomcat
instances one by one on a schedule. We don't kill any process, but run the
script "catalina.sh stop" to shut down Tomcat gracefully. To keep messages
from piling up, we receive messages from users all the time and send an
update request to Solr as soon as a new message arrives. This means Solr may
get update requests during shutdown. I think that is the reason we get
CorruptIndexException. Since we began doing the reboots, we always get
CorruptIndexException. The trace is as below:
2017-09-14 04:25:49,241
ERROR[commitScheduler-15-thread-1][R31609](CommitTracker) - auto commit
error...:org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:607)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.index.CorruptIndexException:
liveDocs.count()=33574 info.docCount=34156 info.getDelCount()=584
(filename=_1uvck_k.del)
at
org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:96)
at
org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:116)
at
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:144)
at
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
at
org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3271)
at
org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3262)
at
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:421)
at
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:279)
at
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:251)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1476)
... 10 more


As we shut down Solr gracefully, I think Solr should be robust enough
to handle this case. Please give me some advice about why this happens and
what we can do to avoid it. PS: below is some of our solrconfig content:


6
true


1000


Regards,
Geng, Wei





Two joins from different cores with OR

2017-09-15 Thread Сергей Твердохлеб
Hi all,

I have two joins and I need to link them using OR statement.

The first one is

> fq={!join fromIndex=master_Category_flip from=manufactureName_string
> to=nameString_string}visibility_string_mv:"Test_B2BUnit" AND
> itemtype_string:"Model"

which returns result A

The second one is

> fq=({!join fromIndex=master_Part_flip from=manufacturerNameFacet_string_mv
> to=nameString_string}visibility_string_mv:"Test_B2BUnit")

 which returns result B

Using them like

> fq=({!join fromIndex=master_Category_flip from=manufactureName_string
> to=nameString_string}visibility_string_mv:"Test_B2BUnit" AND
> itemtype_string:"Model") OR ({!join fromIndex=master_Part_flip
> from=manufacturerNameFacet_string_mv
> to=nameString_string}visibility_string_mv:"Test_B2BUnit")

 It returns only result B

Using it with AND instead of OR returns no result.
Using 2 different fq statements also returns no results.

With different data:
First query returns A B C D
Second query returns A B
With OR returns only A B
With AND returns no results
With 2 fq's returns A B.

Is there any way to get all results as I need?
-- 
Regards,
Sergey Tverdokhleb


Spellcheck Component Exception when querying for a specific word

2017-09-15 Thread Noriyuki TAKEI
Hi all,

An exception like the one below occurred when I used the spellcheck component,
but only for the specific word "さいじんg".

2017-09-13 23:07:30.911 INFO  (qtp1712536284-299) [c:hoge s:shard2
r:core_node4 x:hoge_shard2_replica2] o.a.s.c.S.Request
[hoge_shard2_replica2]  webapp=/solr path=/suggest_ja
params={q=*:*=さいじんg=true=json=false} hits=2
status=0 QTime=90
2017-09-13 23:07:43.922 ERROR (qtp1712536284-20) [c:test s:shard2
r:core_node3 x:test_shard2_replica1] o.a.s.h.RequestHandlerBase
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at
java.lang.AbstractStringBuilder.replace(AbstractStringBuilder.java:851)
at java.lang.StringBuilder.replace(StringBuilder.java:262)
at
org.apache.solr.spelling.SpellCheckCollator.getCollation(SpellCheckCollator.java:238)
at
org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:93)
at
org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:297)
at
org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:209)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
at
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)


However, when I restored the index data from a backup via the Collections API,
the exception went away.

Why did restoring resolve the exception?

※ The Solr config is as below.

 
 
  suggest_dict 
  solr.Suggester 
  AnalyzingLookupFactory 
  suggest 
  suggest_ja 
  true 
  true 
  text_ja_romaji 
 
   

   
 
  suggest 
  AND 
  0 
  true 

  true 
  suggest 
  1000 
  1 

  true 
  suggest_dict 
  10 
  true 
  30 
  10 
  true 
 
 
  suggest_ja 
 
   


