high cpu threads (solr 7.5) - EPollArrayWrapper.epollWait

2019-03-29 Thread Hari Nakka
Version: solr cloud 7.5
OS: CentOS 7
JDK: Oracle JDK 1.8.0_191

We are noticing high CPU utilization on the threads below. This looks like a
known issue (https://github.com/netty/netty/issues/327), but we are not sure
whether it has been addressed in any of the JDK 1.8 releases.
Please help.
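
For anyone triaging the same thing, a quick sanity check is to correlate top's
per-thread view with the jstack dump; the Solr PID below is a placeholder, and
0x4996 is the nid from the trace that follows:

top -H -p <solr-pid>                         # per-thread CPU; note the PID of the hot thread
printf '%x\n' 18838                          # convert that thread PID to hex (18838 -> 4996)
jstack <solr-pid> | grep -A 5 'nid=0x4996'   # locate the matching stack

If the hot threads always sit in epollWait like the one below, that points at
selector spinning rather than real query work.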



"qtp574568002-3821728" #3821728 prio=5 os_prio=0 tid=0x7f4f20018000 
nid=0x4996 runnable [0x7f51fc6d8000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00064cded430> (a sun.nio.ch.Util$3)
- locked <0x00064cded418> (a java.util.Collections$UnmodifiableSet)
- locked <0x00064cdf6e38> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:396)
at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
at java.lang.Thread.run(Thread.java:748)


Re: Solr indexing with Tika DIH local vs network share

2019-03-29 Thread Erick Erickson
So just try adding the autoCommit and autoSoftCommit settings. All of the
example configs have these entries, and you can copy/paste/change them.

> On Mar 29, 2019, at 10:35 AM, neilb  wrote:
> 
> Hi Erick, I am using the solrconfig.xml from the samples only, and it has very
> few entries. I have attached my config files for review along with this reply.
> 
> Thanks
> solrconfig.xml
> tika-data-config.xml
> managed-schema
> (attachment links stripped by the list archive)
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: CommonTerms & slow queries

2019-03-29 Thread Erie Data Systems
>
> All great advice, thanks Michael; have an excellent weekend! Testing the
> common grams.
>
-Craig


Solr 7.7 - group faceting errors

2019-03-29 Thread Jay Potharaju
Hi,
I am running into a bug when doing group faceting. This is the same error I
ran into when upgrading from 5.5 to 6.x.
http://lucene.472066.n3.nabble.com/solr-6-6-3-intermittent-group-faceting-errors-td4385692.html#a4385865
I had bypassed the error in Solr 6.x by turning off docValues. But in Solr
7.7 it won't let me do faceting without enabling docValues.
Has anyone else run into this issue?
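
For context, a sketch of the kind of grouped facet request that triggers the
error, assuming the collection and fields shown below:

curl 'http://localhost:8983/solr/collection1/select?q=*:*&group=true&group.field=product_id&group.facet=true&facet=true&facet.field=category_id'

With group.facet=true the facet counts are computed per group rather than per
document, which is the code path being reported as failing once docValues are
enabled on the multi-valued field.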

Sample Document
{ "id":"123:34!1:1",
"product_id":1,
"sku_id":1
},
{
"id":"123:34!2:1",
"category_id": [1,2]
"product_id":2,
"sku_id":1
}
solr 7.7 schema
(field definitions stripped by the list archive)

solr 6.6 schema
(field definitions stripped by the list archive)

 2019-03-29 20:14:30.188 ERROR (qtp2051853139-14) [c:collection1
s:shard2 r:core_node4 x:collection1_shard2_replica_n2]
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
Exception during facet.field: category_id
at 
org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:832)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:771)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:841)
at 
org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:329)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:273)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2551)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
   

Re: CommonTerms & slow queries

2019-03-29 Thread Michael Gibney
You might take a look at CommonGramsFilter (
https://lucene.apache.org/solr/guide/6_6/filter-descriptions.html#FilterDescriptions-CommonGramsFilter),
especially if you're either not using pf, or if ps=0. An absolute setting
of mm=2 strikes me as unusual (though quite possibly appropriate for your
use case). mm=2 would force scoring of all docs for which >=2 terms match,
which for any query containing the words "a" and "the" for example, could
easily be the majority of the index.
Another thought, re: single-core: sharding would allow you to effectively
parallelize query processing to a certain extent, which I expect might
speed things up for your use case.
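
For reference, a minimal sketch of a field type wired up with CommonGrams; the
type name and words file are illustrative, and the words file would list the
high-frequency terms to pair up:

<fieldType name="text_commongrams" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emits grams like "of_the" alongside the original tokens -->
    <filter class="solr.CommonGramsFilterFactory" words="commonwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- prefers the gram over the bare common term at query time -->
    <filter class="solr.CommonGramsQueryFilterFactory" words="commonwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>

The trade-off is a larger index in exchange for queries like "year of the cat"
matching the much rarer gram terms instead of scoring every document that
contains "of" or "the".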

On Fri, Mar 29, 2019 at 1:13 PM Erie Data Systems 
wrote:

> Michael,
>
>
> select/?=12=title+description=once+upon+a+time+in+the+west=*=true=desc=250=20=1=1=title=2=edismax=off=on=json=true
> "rawquerystring":"once upon a time in the west",
> "querystring":"once upon a time in the west",
> "parsedquery":"+(DisjunctionMaxQuery((description:once | title:once))
> DisjunctionMaxQuery((description:upon | title:upon))
> DisjunctionMaxQuery((description:a | title:a))
> DisjunctionMaxQuery((description:time | title:time))
> DisjunctionMaxQuery((description:in | title:in))
> DisjunctionMaxQuery((description:the | title:the))
> DisjunctionMaxQuery((description:west | title:west)))~2",
> "parsedquery_toString":"+(((description:once | title:once)
> (description:upon | title:upon) (description:a | title:a) (description:time
> | title:time) (description:in | title:in) (description:the | title:the)
> (description:west | title:west))~2)"
>
> Removing pf cuts the time almost in half, but it's still 5+ sec.
>
> Thank you for your help; more than happy to include more output.
> -Craig
>
>
> On Fri, Mar 29, 2019 at 12:24 PM Michael Gibney  >
> wrote:
>
> > Can you post the query that's actually built for some of these inputs
> > ("parsedquery" or "parsedquery_toString" output included for requests
> with
> > "debug=query" parameter)? What is performance like if you turn off pf
> > (i.e., no implicit phrase searching)?
> > Michael
> >
> > On Fri, Mar 29, 2019 at 11:53 AM Erie Data Systems <
> eriedata...@gmail.com>
> > wrote:
> >
> > > Using Solr 8.0.0, single instance, single core, 50m records (38gb index)
> > > on one SSD, 96gb RAM, 16 CPU cores
> > >
> > > Most queries run very fast (<1 sec); however, we have noticed that queries
> > > containing "common" words are quite slow, sometimes 10+ sec. We are
> > > currently using edismax with 2 text_general fields, qf and pf, qs=0, ps=0.
> > >
> > > I came across these which describe the issue.
> > >
> > >
> >
> https://www.hathitrust.org/blogs/large-scale-search/slow-queries-and-common-words-part-2
> > >
> > >
> > >
> >
> https://lucene.apache.org/core/5_5_3/queries/org/apache/lucene/queries/CommonTermsQuery.html
> > >
> > > Test queries with issues :
> > > 1. things to do in seattle with eric
> > > 2. year of the cat
> > > 3. time of my life
> > > 4. when will i be loved
> > > 5. once upon a time in the west
> > >
> > > Stopwords are not an option: in the case of #2, if "of" and "the" are
> > > removed, it essentially destroys relevance. Is there a common suggested
> > > solution to what would seem to be a common issue, besides adding stopwords?
> > >
> > > Thank you.
> > > Craig Stadler
> > >
> >
>


Re: hl.preserveMulti in Unified highlighter?

2019-03-29 Thread Walter Underwood
We are testing 6.6.1.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Mar 29, 2019, at 11:02 AM, Walter Underwood  wrote:
> 
> In testing, hl.preserveMulti=true works with the unified highlighter. But the 
> documentation says that the parameter is only implemented in the original 
> highlighter.
> 
> Is the documentation wrong? Can we trust this to keep working with unified?
> 
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
> 
>> On Mar 26, 2019, at 12:08 PM, Walter Underwood  wrote:
>> 
>> It looks like hl.preserveMulti is only implemented in the Original 
>> highlighter. Has anyone looked at doing this for the Unified highlighter?
>> 
>> We need to preserve order in the highlights for a multi-valued field.
>> 
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org 
>> http://observer.wunderwood.org/  (my blog)
>> 
> 



Re: hl.preserveMulti in Unified highlighter?

2019-03-29 Thread Walter Underwood
In testing, hl.preserveMulti=true works with the unified highlighter. But the 
documentation says that the parameter is only implemented in the original 
highlighter.

Is the documentation wrong? Can we trust this to keep working with unified?

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Mar 26, 2019, at 12:08 PM, Walter Underwood  wrote:
> 
> It looks like hl.preserveMulti is only implemented in the Original 
> highlighter. Has anyone looked at doing this for the Unified highlighter?
> 
> We need to preserve order in the highlights for a multi-valued field.
> 
> wunder
> Walter Underwood
> wun...@wunderwood.org 
> http://observer.wunderwood.org/  (my blog)
> 



Re: Solr indexing with Tika DIH local vs network share

2019-03-29 Thread neilb
Hi Erick, I am using the solrconfig.xml from the samples only, and it has very
few entries. I have attached my config files for review along with this reply.

Thanks
solrconfig.xml
tika-data-config.xml
managed-schema
(attachment links stripped by the list archive)

--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: CommonTerms & slow queries

2019-03-29 Thread Erie Data Systems
Michael,

select/?=12=title+description=once+upon+a+time+in+the+west=*=true=desc=250=20=1=1=title=2=edismax=off=on=json=true
"rawquerystring":"once upon a time in the west",
"querystring":"once upon a time in the west",
"parsedquery":"+(DisjunctionMaxQuery((description:once | title:once))
DisjunctionMaxQuery((description:upon | title:upon))
DisjunctionMaxQuery((description:a | title:a))
DisjunctionMaxQuery((description:time | title:time))
DisjunctionMaxQuery((description:in | title:in))
DisjunctionMaxQuery((description:the | title:the))
DisjunctionMaxQuery((description:west | title:west)))~2",
"parsedquery_toString":"+(((description:once | title:once)
(description:upon | title:upon) (description:a | title:a) (description:time
| title:time) (description:in | title:in) (description:the | title:the)
(description:west | title:west))~2)"

Removing pf cuts the time almost in half, but it's still 5+ sec.

Thank you for your help; more than happy to include more output.
-Craig


On Fri, Mar 29, 2019 at 12:24 PM Michael Gibney 
wrote:

> Can you post the query that's actually built for some of these inputs
> ("parsedquery" or "parsedquery_toString" output included for requests with
> "debug=query" parameter)? What is performance like if you turn off pf
> (i.e., no implicit phrase searching)?
> Michael
>
> On Fri, Mar 29, 2019 at 11:53 AM Erie Data Systems 
> wrote:
>
> > Using Solr 8.0.0, single instance, single core, 50m records (38gb index)
> > on one SSD, 96gb RAM, 16 CPU cores
> >
> > Most queries run very fast (<1 sec); however, we have noticed that queries
> > containing "common" words are quite slow, sometimes 10+ sec. We are
> > currently using edismax with 2 text_general fields, qf and pf, qs=0, ps=0.
> >
> > I came across these which describe the issue.
> >
> >
> https://www.hathitrust.org/blogs/large-scale-search/slow-queries-and-common-words-part-2
> >
> >
> >
> https://lucene.apache.org/core/5_5_3/queries/org/apache/lucene/queries/CommonTermsQuery.html
> >
> > Test queries with issues :
> > 1. things to do in seattle with eric
> > 2. year of the cat
> > 3. time of my life
> > 4. when will i be loved
> > 5. once upon a time in the west
> >
> > Stopwords are not an option: in the case of #2, if "of" and "the" are
> > removed, it essentially destroys relevance. Is there a common suggested
> > solution to what would seem to be a common issue, besides adding stopwords?
> >
> > Thank you.
> > Craig Stadler
> >
>


Re: CommonTerms & slow queries

2019-03-29 Thread Michael Gibney
Can you post the query that's actually built for some of these inputs
("parsedquery" or "parsedquery_toString" output included for requests with
"debug=query" parameter)? What is performance like if you turn off pf
(i.e., no implicit phrase searching)?
Michael

On Fri, Mar 29, 2019 at 11:53 AM Erie Data Systems 
wrote:

> Using Solr 8.0.0, single instance, single core, 50m records (38gb index)
> on one SSD, 96gb RAM, 16 CPU cores
>
> Most queries run very fast (<1 sec); however, we have noticed that queries
> containing "common" words are quite slow, sometimes 10+ sec. We are currently
> using edismax with 2 text_general fields, qf and pf, qs=0, ps=0.
>
> I came across these which describe the issue.
>
> https://www.hathitrust.org/blogs/large-scale-search/slow-queries-and-common-words-part-2
>
>
> https://lucene.apache.org/core/5_5_3/queries/org/apache/lucene/queries/CommonTermsQuery.html
>
> Test queries with issues :
> 1. things to do in seattle with eric
> 2. year of the cat
> 3. time of my life
> 4. when will i be loved
> 5. once upon a time in the west
>
> Stopwords are not an option: in the case of #2, if "of" and "the" are removed,
> it essentially destroys relevance. Is there a common suggested solution to
> what would seem to be a common issue, besides adding stopwords?
>
> Thank you.
> Craig Stadler
>


Re: Solr indexing with Tika DIH local vs network share

2019-03-29 Thread Erick Erickson
I suspect that your autocommit settings in solrconfig.xml
are something like:

hard commit: has openSearcher set to “false”
soft commit: has the interval set to -1 (never)

That means that until an external commit is executed, you won’t see any 
documents. Try setting your soft commit  to something like, say, 5 minutes (or 
even one minute). That would reduce the interval before docs become searchable.

I think DIH issues a commit at the end of the run, so that would be why you 
didn’t see anything for so long if I’m right.
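
For reference, a sketch of the solrconfig.xml entries in question; they live in
the <updateHandler> section, intervals are in milliseconds, and the values here
are illustrative (soft commit set to the five minutes suggested above):

<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit every minute -->
  <openSearcher>false</openSearcher> <!-- flush to disk, but don't open a new searcher -->
</autoCommit>

<autoSoftCommit>
  <maxTime>300000</maxTime>          <!-- soft commit every 5 minutes: docs become searchable -->
</autoSoftCommit>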

Here’s more than you want to know about all this: 
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

I _still_ recommend you move the Tika processing off of Solr. 4G of memory is 
easily exceeded with the right (well, wrong) PDF document. And since Tika is 
running inside Solr, that’ll mean Solr has an OOM, and at that point you really 
don’t know the state of Solr and must restart. Running Tika in a different 
process will insulate Solr from this kind of thing.

Best,
Erick


> On Mar 29, 2019, at 8:36 AM, neilb  wrote:
> 
> Hi Erick, thanks a lot for your suggestions. I will look into it. But to
> answer my own query: I was a little impatient, checking the indexing status
> every minute. What I found is that after a few hours the status started
> updating with a document count, and the indexing process finished in around
> 5 hours.
> Do you see anything wrong with the current setup of Solr and Tika DIH? All I
> am looking for is PDF full-text search results, integrated into a web app
> dashboard using AJAX queries. Also, this particular article (link stripped)
> was helpful in getting Solr running as a Windows service with a 4G memory
> configuration under the LocalSystem account.
> 
> Thanks again!
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



CommonTerms & slow queries

2019-03-29 Thread Erie Data Systems
Using Solr 8.0.0, single instance, single core, 50m records (38gb index)
on one SSD, 96gb RAM, 16 CPU cores

Most queries run very fast (<1 sec); however, we have noticed that queries
containing "common" words are quite slow, sometimes 10+ sec. We are currently
using edismax with 2 text_general fields, qf and pf, qs=0, ps=0.

I came across these which describe the issue.
https://www.hathitrust.org/blogs/large-scale-search/slow-queries-and-common-words-part-2

https://lucene.apache.org/core/5_5_3/queries/org/apache/lucene/queries/CommonTermsQuery.html

Test queries with issues :
1. things to do in seattle with eric
2. year of the cat
3. time of my life
4. when will i be loved
5. once upon a time in the west

Stopwords are not an option: in the case of #2, if "of" and "the" are removed,
it essentially destroys relevance. Is there a common suggested solution to
what would seem to be a common issue, besides adding stopwords?

Thank you.
Craig Stadler


Re: Solr indexing with Tika DIH local vs network share

2019-03-29 Thread neilb
Hi Erick, thanks a lot for your suggestions. I will look into it. But to
answer my own query: I was a little impatient, checking the indexing status
every minute. What I found is that after a few hours the status started
updating with a document count, and the indexing process finished in around
5 hours.
Do you see anything wrong with the current setup of Solr and Tika DIH? All I am
looking for is PDF full-text search results, integrated into a web app
dashboard using AJAX queries. Also, this particular article (link stripped) was
helpful in getting Solr running as a Windows service with a 4G memory
configuration under the LocalSystem account.

Thanks again!



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Stopwords param of edismax parser not working

2019-03-29 Thread Branham, Jeremy (Experis)
Hi Ashish –
Are you using v7.3?
If so, I think this is the spot in code that should be executing:
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.0/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L310

 Haven’t dug into the logic, but I tested on my server [v7.7.0], and the debug 
output doesn’t show whether or not the stopword filter was removed.
I don’t know your use-case, but maybe you could use the field analysis tool in 
Solr Admin to get more insight.
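
The same check can also be run over HTTP via the field analysis handler; a
sketch, with the collection and field names taken from the original post:

curl 'http://Box-1:8983/solr/SalesCentralDev_4/analysis/field?analysis.fieldname=search_field&analysis.query=internet+of+things'

The response lists each stage of the query-time analyzer chain, so you can see
directly whether a StopFilter stage drops "of".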
 
Jeremy Branham
jb...@allstate.com

On 3/28/19, 4:47 AM, "Ashish Bisht"  wrote:

Hi,

We are trying to remove stopwords from the analysis using the edismax parser
parameter. The documentation says:

*stopwords
A Boolean parameter indicating if the StopFilterFactory configured in the
query analyzer should be respected when parsing the query. If this is set to
false, then the StopFilterFactory in the query analyzer is ignored.*


https://lucene.apache.org/solr/guide/7_3/the-extended-dismax-query-parser.html


But it seems like it's not working.


http://Box-1:8983/solr/SalesCentralDev_4/select?q=internet of
things=0=edismax=search_field
content*=false*=true


"parsedquery":"+(DisjunctionMaxQuery((content:internet |
search_field:internet)) DisjunctionMaxQuery((content:thing |
search_field:thing)))",
  *  "parsedquery_toString":"+((content:internet | search_field:internet)
(content:thing | search_field:thing))",*


Are we missing something here?



--
Sent from: 
http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Solr 7.5 multi-valued fields will not update with multiple values

2019-03-29 Thread Erick Erickson
Separate out the author bits. Instead of

"author_fullname":["Author 1","Author 2”,”Author 3”]

use

"author_fullname":"Author 1”,
"author_fullname":"Author 2”,
"author_fullname":”Author 3”

> On Mar 29, 2019, at 6:16 AM, Eivind Hodneland 
>  wrote:
> 
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:18080/solr/customer_core/update/' --data-binary 
> '[{"id":"MyId","author_fullname":["Author 1","Author 2”,”Author 3”]}]'



Re: Problem with white space or special characters in function queries

2019-03-29 Thread Erick Erickson
Ahamed:

Please start a new thread. Although your question is somewhat related, it’s not 
the same thing at all. It’s called “thread hijacking” and makes it difficult to 
keep track of.

> On Mar 29, 2019, at 4:46 AM, Ahemad Ali  
> wrote:
> 
> Do you have any suggestions for querying the indexed data with whitespace
> and special characters?
> 
> Sent from Yahoo Mail on Android 
> 
>  On Fri, Mar 29, 2019 at 16:59, Yonik Seeley wrote:   On 
> Thu, Mar 28, 2019 at 6:05 PM Jan Høydahl  wrote:
> 
>> Functions can never contain spaces.
> 
> 
> Spaces work fine in functions in general.
> The issue is the "bf" parameter as it uses whitespace to delimit multiple
> functions IIRC.
> 
> -Yonik
> 
> 
> 
>> Try to substitute the term with a variable, i.e. a request parameter, e.g.
>> 
>> 
>> bf=if(termfreq(ADSKFeature,$myTerm),log(CaseCount),sqrt(CaseCount))&myTerm=CUI+(Command)
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> 28. mar. 2019 kl. 18:51 skrev shamik :
>>> 
>>> Ahemad, I don't think its related to the field definition, rather looks
>> like
>>> an inherent bug. For the time being, I created a copyfield which uses a
>>> custom regex to remove whitespace and special characters and use it in
>> the
>>> function. I'll debug the source code and confirm if it's bug, will raise
>> a
>>> JIRA if needed.
>>> 
>>> 
>>> 
>>> --
>>> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>> 
>> 



Re: Solr unable to start up after setting up SSL in Solr 7.4.0

2019-03-29 Thread Erick Erickson
What version of Java are you using?

> On Mar 28, 2019, at 8:27 PM, Zheng Lin Edwin Yeo  wrote:
> 
> Hi,
> 
> Regarding the issue with jetty-ssl.xml which I have mentioned previously,
> seems that the issue still exists in Solr 8.0.0 if I use the jetty-ssl.xml
> that comes with Solr 8.0.0.
> 
> Regards,
> Edwin
> 
> On Fri, 24 Aug 2018 at 09:19, Zheng Lin Edwin Yeo 
> wrote:
> 
>> Thanks for the advice.
>> 
>> Regards,
>> Edwin
>> 
>> On Thu, 23 Aug 2018 at 17:43, Shawn Heisey  wrote:
>> 
>>> On 8/23/2018 2:42 AM, Jan Høydahl wrote:
 Don't need a git checkout to pull a text file :)
 https://github.com/apache/lucene-solr/blob/branch_7x/solr/bin/solr.cmd
>>> 
 
>>> https://github.com/apache/lucene-solr/blob/branch_7x/solr/server/scripts/cloud-scripts/zkcli.bat
>>> <
>>> https://github.com/apache/lucene-solr/blob/branch_7x/solr/server/scripts/cloud-scripts/zkcli.bat
 
>>> 
>>> Good point.  That can save some considerable time, especially with a
>>> slow Internet connection.
>>> 
>>> Thanks,
>>> Shawn
>>> 
>>> 



Re: High CPU usage with Solr 7.7.0

2019-03-29 Thread Erick Erickson
Thanks all. I pushed changes last night, this should be fixed in 7.7.2, 8.1 and 
master.

Meanwhile, this is a trivial change to one line, so two ways to get by would be

1> just make the change yourself locally. Building Solr from scratch is 
actually not hard. The “ant package” target will get you the same thing you’d 
get from downloading the distribution.

2> use Java 9 or greater.
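
For option 1>, a sketch of the build steps from a lucene-solr source checkout
(tag name is illustrative; assumes ant and ivy are available):

git checkout releases/lucene-solr/7.7.1   # or whichever tag/branch you run
ant ivy-bootstrap                         # one-time ivy setup
cd solr
ant package                               # produces the same artifacts as the download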

Best,
Erick
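
For the curious, the JDK bug Adam describes below is easy to reproduce
standalone; a minimal sketch (class name is mine), which on a JDK 8 build such
as 8u181 should pin one core near 100% while the main thread sleeps:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ZeroCorePoolSpin {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize=0 triggers the busy spin described in JDK-8129861
        ScheduledThreadPoolExecutor ex = new ScheduledThreadPoolExecutor(0);
        ex.schedule(() -> System.out.println("task ran"), 1, TimeUnit.SECONDS);
        Thread.sleep(60_000); // watch CPU with top -H while this sleeps
        ex.shutdown();
    }
}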

> On Mar 25, 2019, at 1:58 AM, Lukas Weiss  wrote:
> 
> I forward this message. Thanks Adam.
> 
> Hi,
> Apologies, I can’t figure out how to reply to the Solr mailing list.
> I just ran across the same high CPU usage issue. I believe it's caused by
> this commit, which was introduced in Solr 7.7.0:
> https://github.com/apache/lucene-solr/commit/eb652b84edf441d8369f5188cdd5e3ae2b151434#diff-e54b251d166135a1afb7938cfe152bb5
> There is a bug in JDK versions <=8 where using 0 threads in the 
> ScheduledThreadPool causes high CPU usage: 
> https://bugs.openjdk.java.net/browse/JDK-8129861
> Oddly, the latest version 
> of solr/core/src/java/org/apache/solr/update/CommitTracker.java on 
> master still uses 0 executors as the default. Presumably most everyone is 
> using JDK 9 or greater which has the bug fixed, so they don’t experience 
> the bug.
> Feel free to relay this back to the mailing list.
> Thanks,
> Adam Guthrie
> 
> 
> 
> 
> 
> Von:"Lukas Weiss" 
> An: solr-user@lucene.apache.org, 
> Datum:  27.02.2019 11:13
> Betreff:High CPU usage with Solr 7.7.0
> 
> 
> 
> Hello,
> 
> we recently updated our Solr server from 6.6.5 to 7.7.0. Since then, we 
> have problems with the server's CPU usage.
> We have two Solr cores configured, but even if we clear all indexes and do
> not start the index process, we see 100% CPU usage for both cores.
> 
> Here's what our top says:
> 
> root@solr:~ # top
> top - 09:25:24 up 17:40,  1 user,  load average: 2,28, 2,56, 2,68
> Threads:  74 total,   3 running,  71 sleeping,   0 stopped,   0 zombie
> %Cpu0  :100,0 us,  0,0 sy,  0,0 ni,  0,0 id,  0,0 wa,  0,0 hi,  0,0 si, 
> 0,0 st
> %Cpu1  :100,0 us,  0,0 sy,  0,0 ni,  0,0 id,  0,0 wa,  0,0 hi,  0,0 si, 
> 0,0 st
> %Cpu2  : 11,3 us,  1,0 sy,  0,0 ni, 86,7 id,  0,7 wa,  0,0 hi,  0,3 si, 
> 0,0 st
> %Cpu3  :  3,0 us,  3,0 sy,  0,0 ni, 93,7 id,  0,3 wa,  0,0 hi,  0,0 si, 
> 0,0 st
> KiB Mem :  8388608 total,  7859168 free,   496744 used,32696 
> buff/cache
> KiB Swap:  2097152 total,  2097152 free,0 used.  7859168 avail Mem 
> 
> 
> 
>  PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND 
> 
>  P 
> 10209 solr  20   0 6138468 452520  25740 R 99,9  5,4  29:43.45 java 
> -server -Xms1024m -Xmx1024m -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 
> -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=4 + 24 
> 10214 solr  20   0 6138468 452520  25740 R 99,9  5,4  28:42.91 java 
> -server -Xms1024m -Xmx1024m -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 
> -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=4 + 25
> 
> The solr server is installed on a Debian Stretch 9.8 (64bit) on Linux LXC 
> dedicated Container.
> 
> Some more server info:
> 
> root@solr:~ # java -version
> openjdk version "1.8.0_181"
> OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-2~deb9u1-b13)
> OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
> 
> root@solr:~ # free -m
>  totalusedfree  shared  buff/cache 
> available
> Mem:   8192 4847675 701  31 7675
> Swap:  2048   02048
> 
> We also found something strange: if we do an strace of the main process, we
> get lots of ongoing connection timeouts:
> 
> root@solr:~ # strace -F -p 4136
> strace: Process 4136 attached with 48 threads
> strace: [ Process PID=11089 runs in x32 mode. ]
> [pid  4937] epoll_wait(139,  
> [pid  4936] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4909] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4618] epoll_wait(136,  
> [pid  4576] futex(0x7ff61ce66474, FUTEX_WAIT_PRIVATE, 1, NULL  ...>
> [pid  4279] futex(0x7ff61ce62b34, FUTEX_WAIT_PRIVATE, 2203, NULL 
> 
> [pid  4244] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4227] futex(0x7ff56c71ae14, FUTEX_WAIT_PRIVATE, 2237, NULL 
> 
> [pid  4243] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4228] futex(0x7ff5608331a4, FUTEX_WAIT_PRIVATE, 2237, NULL 
> 
> [pid  4208] futex(0x7ff61ce63e54, FUTEX_WAIT_PRIVATE, 5, NULL  ...>
> [pid  4205] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4204] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4196] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4195] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4194] restart_syscall(<... resuming interrupted futex ...> 
> 
> [pid  4193] restart_syscall(<... resuming 

Re: security.json "all" predefined permission

2019-03-29 Thread Jason Gerlowski
Thanks for the pointer Jan.

I spent much of yesterday experimenting with the ordering to make sure
that wasn't a factor and I was able to eventually rule it out with
some debug logging that showed that the requests were being allowed
because it couldn't find any governing permission rules. Apparently
RBAP fails "open"
(https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/security/RuleBasedAuthorizationPlugin.java#L208)

Anyway, I'm pretty convinced this is a bug.  Most handlers implement
the PermissionNameProvider interface, which has a method that spits
out the required permission for that request handler.  (e.g.
CoreAdminHandler.getPermissionName() returns either CORE_READ_PERM or
CORE_EDIT_PERM based on the request's query params).  When the
request-handler is-a PermissionNameProvider, we do string matching to
see whether we have permissions, but we don't check for the "all"
special case.  So RBAP checks for "all" if the handler wasn't a
PermissionNameProvider (causing SOLR-13344's Admin UI behavior), but
it doesn't check for all when the handler is a PermissionNameProvider
(causing the buggy behavior I described above).

We should definitely be checking for all when there is a
PermissionNameProvider, so I'll create a JIRA for this.

Best,

Jason

On Thu, Mar 28, 2019 at 6:11 PM Jan Høydahl  wrote:
>
> There were some other issues with the "all" permission as well lately, see 
> https://issues.apache.org/jira/browse/SOLR-13344 
> 
> Order matters in permissions, the first permission matching is used, but I 
> don't know how that would change anything here.
> One thing to try could be to start with an empty RuleBasedAuth and then use 
> the REST API to add all the permissions and roles,
> in that way you are sure that they are syntactically correct, and hopefully 
> you get some errors if you do something wrong?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 28. mar. 2019 kl. 20:24 skrev Jason Gerlowski :
> >
> > Hi all,
> >
> > Diving into the RuleBasedAuthorizationPlugin for the first time in
> > awhile, and found that the predefined permission "all" isn't behaving
> > the way I'd expect it to.  I'm trying to figure out whether it doesn't
> > work the way I think, whether I'm just making a dumb mistake, or
> > whether it's currently broken on master (and some 7x versions)
> >
> > My intent is to create two users, one with readonly access, and an
> > admin user with access to all APIs.  I'm trying to achieve this with
> > the security.json below:
> >
> > {
> >  "authentication": {
> >"blockUnknown": true,
> >"class": "solr.BasicAuthPlugin",
> >"credentials": {
> >  "readonly": "",
> >  "admin": ""}},
> >  "authorization": {
> >"class": "solr.RuleBasedAuthorizationPlugin",
> >"permissions": [
> >  {"name":"read","role": "*"},
> >  {"name":"schema-read", "role":"*"},
> >  {"name":"config-read", "role":"*"},
> >  {"name":"collection-admin-read", "role":"*"},
> >  {"name":"metrics-read", "role":"*"},
> >  {"name":"core-admin-read","role":"*"},
> >  {"name": "all", "role": "admin_role"}
> >],
> >"user-role": {
> >  "readonly": "readonly_role",
> >  "admin": "admin_role"
> >}}}
> >
> > When I go to test this though, I'm surprised to find that the
> > "readonly" user is still able to access APIs that I would expect to be
> > locked down.  The "readonly" user can even update security permissions
> > with the curl command below!
> >
> > curl -X POST -H 'Content-Type: application/json' -u
> > "readonly:readonlyPassword"
> > http://localhost:8983/solr/admin/authorization --d
> > @some_auth_json.json
> >
> > My expectation was that the predefined "all" permission would act as a
> > catch all, and restrict all requests to "admin_role" that require
> > permissions I didn't explicitly give to my "readonly" user.  But it
> > doesn't seem to work that way.  Am I misunderstanding what the "all"
> > permission does, or is this a bug?
> >
> > Thanks for any help or clarification.
> >
> > Jason
>


Solr 7 not removing a node completely due to too small thread pool

2019-03-29 Thread Roger Lehmann
Situation

I'm currently trying to set up SolrCloud in an AWS Autoscaling Group, so
that it can scale dynamically.

I've also added the following triggers to Solr, so that each node will have
1 (and only one) replica of each collection:

{
"set-cluster-policy": [
  {"replica": "<2", "shard": "#EACH", "node": "#EACH"}
  ],
  "set-trigger": [{
"name": "node_added_trigger",
"event": "nodeAdded",
"waitFor": "5s",
"preferredOperation": "ADDREPLICA"
  },{
"name": "node_lost_trigger",
"event": "nodeLost",
"waitFor": "120s",
"preferredOperation": "DELETENODE"
  }]
}

This works pretty well. But my problem is that when a node gets
removed, it doesn't remove all 19 replicas from this node, and I have
problems when accessing the "nodes" page.


In the logs, this exception occurs:

Operation deletenode
failed:java.util.concurrent.RejectedExecutionException: Task
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$45/1104948431@467049e2
rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@773563df[Running,
pool size = 10, active threads = 10, queued tasks = 0, completed tasks
= 1]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at 
org.apache.solr.cloud.api.collections.DeleteReplicaCmd.deleteCore(DeleteReplicaCmd.java:276)
at 
org.apache.solr.cloud.api.collections.DeleteReplicaCmd.deleteReplica(DeleteReplicaCmd.java:95)
at 
org.apache.solr.cloud.api.collections.DeleteNodeCmd.cleanupReplicas(DeleteNodeCmd.java:109)
at 
org.apache.solr.cloud.api.collections.DeleteNodeCmd.call(DeleteNodeCmd.java:62)
at 
org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:292)
at 
org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:496)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Problem description

So, the problem is that it only has a pool size of 10, of which 10 are busy
and nothing gets queued (synchronous execution). In fact, it really only
removed 10 replicas and the other 9 replicas stayed there. When manually
sending the API command to delete this node it works fine, since Solr only
needs to remove the remaining 9 replicas and everything is good again.
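
For reference, a sketch of that manual call via the Collections API (the node
name is a placeholder in the host:port_solr format):

curl 'http://localhost:8983/solr/admin/collections?action=DELETENODE&node=10.0.0.5:8983_solr'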
Question

How can I either increase this (small) thread pool size and/or activate
queueing the remaining deletion tasks? Another solution might be to retry
the failed task until it succeeds.

Using Solr 7.7.1 on Ubuntu Server installed with the installation script
from Solr (so I guess it's using Jetty?).

Thanks for your help!


Solr 7.5 multi-valued fields will not update with multiple values

2019-03-29 Thread Eivind Hodneland
Hi,

I am running a Solr 7.5 index for a customer.
I have recently discovered that none of the multivalued string/text fields are 
filled with more than one value each.

Example of indexing (edited and abbreviated):
curl -X POST -H 'Content-Type: application/json' 
'http://localhost:18080/solr/customer_core/update/' --data-binary 
'[{"id":"MyId","author_fullname":["Author 1","Author 2","Author 3"]}]'

The multivalued field author_fullname only gets one value, namely "Author 1".
This is also the case for the other multivalued fields in the schema.

Definition of author_fullname and its corresponding type from managed-schema:

(field and fieldType definitions stripped by the list archive)

Uptime Consulting | Eivind Hodneland | Senior Consultant | Munchs gate 7, 
NO-0165 Oslo, Norway
Tel: +47 22 33 71 00 | Mob: +47 971 76 083 | 
eivind.hodnel...@uptimeconsulting.no
  | www.uptimeconsulting.no
--
Search and Big Data solutions
Software Development
IT outsourcing services and consultancy




Re: dataimport for full-import

2019-03-29 Thread Alexandre Rafalovitch
It is probably the autocommit setting in your solrconfig.xml.

But you may also want to consider indexing into a new core and then doing a
core swap at the end. Or re-aliasing if you are running a multiCore
collection.
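
A sketch of both variants; core, collection, and alias names are placeholders:

# CoreAdmin API: index into a staging core, then atomically swap it with the live one
curl 'http://localhost:8983/solr/admin/cores?action=SWAP&core=live_core&other=staging_core'

# Collections API: re-point an alias at the freshly built collection
curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=live&collections=products_v2'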

Regards,
 Alex

On Fri, Mar 29, 2019, 2:25 AM 黄云尧,  wrote:

> When I do the full-import, it may take about 1 hour, but the old
> documents are deleted after 10 minutes, which causes queries to return
> nothing. Is there some way to control how long the old documents are
> kept before being deleted?
>
>
>
>


Re: Problem with white space or special characters in function queries

2019-03-29 Thread Ahemad Ali
Do you have any suggestions for querying the indexed data with whitespace and
special characters?

Sent from Yahoo Mail on Android 
 
  On Fri, Mar 29, 2019 at 16:59, Yonik Seeley wrote:   On 
Thu, Mar 28, 2019 at 6:05 PM Jan Høydahl  wrote:

> Functions can never contain spaces.


Spaces work fine in functions in general.
The issue is the "bf" parameter as it uses whitespace to delimit multiple
functions IIRC.

-Yonik



> Try to substitute the term with a variable, i.e. a request parameter, e.g.
>
>
> bf=if(termfreq(ADSKFeature,$myTerm),log(CaseCount),sqrt(CaseCount))&myTerm=CUI+(Command)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 28. mar. 2019 kl. 18:51 skrev shamik :
> >
> > Ahemad, I don't think its related to the field definition, rather looks
> like
> > an inherent bug. For the time being, I created a copyfield which uses a
> > custom regex to remove whitespace and special characters and use it in
> the
> > function. I'll debug the source code and confirm if it's bug, will raise
> a
> > JIRA if needed.
> >
> >
> >
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>
>  


Re: Problem with white space or special characters in function queries

2019-03-29 Thread Yonik Seeley
On Thu, Mar 28, 2019 at 6:05 PM Jan Høydahl  wrote:

> Functions can never contain spaces.


Spaces work fine in functions in general.
The issue is the "bf" parameter as it uses whitespace to delimit multiple
functions IIRC.

-Yonik



> Try to substitute the term with a variable, i.e. a request parameter, e.g.
>
>
> bf=if(termfreq(ADSKFeature,$myTerm),log(CaseCount),sqrt(CaseCount))&myTerm=CUI+(Command)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 28. mar. 2019 kl. 18:51 skrev shamik :
> >
> > Ahemad, I don't think its related to the field definition, rather looks
> like
> > an inherent bug. For the time being, I created a copyfield which uses a
> > custom regex to remove whitespace and special characters and use it in
> the
> > function. I'll debug the source code and confirm if it's bug, will raise
> a
> > JIRA if needed.
> >
> >
> >
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>
>


Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread Zheng Lin Edwin Yeo
Yes, delete both the data folder and the core.properties file. When you
create the collection, they will be created again.

If you have indexed data to the collection before, you can back it up to
keep the old index if you want. Otherwise, it is not necessary. You can reuse
the same data directory.

Regards,
Edwin

On Fri, 29 Mar 2019 at 16:32, vishal patel 
wrote:

>
> If I delete the product_shard1_replica_n1 folder, then what about the data
> folder? Because it is inside this folder. Does it also need to be deleted?
> Do I need to back up the data folder or change the data directory?
>
> Sent from Outlook 
> --
> *From:* Zheng Lin Edwin Yeo 
> *Sent:* Friday, March 29, 2019 1:39 PM
> *To:* vishal patel
> *Subject:* Re: Upgrade Solr 8.0.0 issue
>
> Yes.
> Also delete the product_shard1_replica_n1 folder under server\solr, so
> that you can start everything from fresh from creating the collection.
>
> Regards,
> Edwin
>
>
> On Fri, 29 Mar 2019 at 15:43, vishal patel 
> wrote:
>
> But if by mistake the zoo_data version-2 folder is deleted, should I upconfig
> and create the collection again?
>
> Sent from Outlook 
> --
> *From:* Zheng Lin Edwin Yeo 
> *Sent:* Friday, March 29, 2019 1:07 PM
> *To:* vishal patel
> *Cc:* solr-user@lucene.apache.org
> *Subject:* Re: Upgrade Solr 8.0.0 issue
>
> Yes, if you have changes to solrconfig.xml or schema.xml, just upconfig
> and reload the collection. Not necessary to restart Solr.
>
> It's not necessary to change the index directory, but you can change it in
> the core.properties if you want to store the index elsewhere (Eg: in
> another drive).
>
> Regards,
> Edwin
>
>
> On Fri, 29 Mar 2019 at 14:11, vishal patel 
> wrote:
>
> OK, got your point.
>
> I will create the collection again, but sometimes there will be changes in
> solrconfig.xml or schema.xml; then what should I do?
> In my opinion: upconfig again, start Solr, and reload the collection. Is
> that right?
> By mistake I removed the zoo_data version-2 folder from ZooKeeper and then
> did upconfig and started Solr.
> When I did upconfig and started Solr, an error came about coreNodeName.
> So I created the collection again, but my index data folder was overwritten.
> Is it necessary to change the index data directory?
> My existing data directory :
> F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
> product_shard1_replica_n1
>--- data
>--- core.properties
>
> Note : I don't want to re-index after the change in Solrconfig.xml.
>
> Sent from Outlook 
> --
> *From:* Zheng Lin Edwin Yeo 
> *Sent:* Friday, March 29, 2019 11:00 AM
> *To:* vishal patel
> *Cc:* solr-user@lucene.apache.org
> *Subject:* Re: Upgrade Solr 8.0.0 issue
>
> Usually I will create the collection again since I will re-index after the
> upgrade. If I create the collection again, the new core.properties will be
> created.
>
> If you plan to use the same schema.xml, you have to check whether any classes
> have become deprecated, as some old classes usually get deprecated in a new
> version.
>
> Regards,
> Edwin
>
> On Fri, 29 Mar 2019 at 12:48, vishal patel 
> wrote:
>
> I will re-index with clean after the Solr 8.0.0 upgrade. Is it necessary to
> change core.properties?
> In Solr 6.0.0 I wrote only name, shard and collection in core.properties; I
> didn't write coreNodeName and collection.configName. To start Solr, I first
> deleted the zoo_data version-2 folder, ran the upconfig command, and then
> started Solr, and it worked successfully.
>
> In Solr 6.0.0 I never mentioned coreNodeName, but when I did upconfig it was
> automatically entered in ZooKeeper, and after the Solr start it worked fine.
>
> Why is it not working in Solr 8.0.0? Is it necessary to create the collection
> again using the admin GUI, or to change core.properties for Solr 8.0.0?
>
> Note: I don't want to copy data from Solr 6.1.0 to Solr 8.0.0; I will
> re-index after the upgrade. I want only the same schema.xml as in Solr 6.0.0.
>
> Sent from Outlook 
> --
> *From:* Zheng Lin Edwin Yeo 
> *Sent:* Friday, March 29, 2019 8:47 AM
> *To:* solr-user@lucene.apache.org
> *Subject:* Re: Upgrade Solr 8.0.0 issue
>
> Hi Vishal,
>
> There could be problems with your index if you upgrade directly from Solr
> 6.1.0 to Solr 8.0.0, which is two major versions, as Solr only supports
> upgrading across one major version.
>
> Regards,
> Edwin
>
> On Thu, 28 Mar 2019 at 21:30, vishal patel 
> wrote:
>
> > Hi
> >
> > I am upgrading solr 6.1.0 to 8.0.0. In solr 6.0.0 my folder structure
> below
> > ---product
> > ---conf
> >  ---schema.xml
> > ---solrconfig.xml
> > ---core.properties
> > ---solr.xml
> >
> > core.properties contains
> > name=product
> > shard=shard1
> > collection=product
> >
> > upconfig command :
> > zkcli.bat -cmd bootstrap -solrhome
> > F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z
> 192.168.100.222:3181,
> 

Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread Jan Høydahl
I recommend doing a CLEAN install of Solr 8. Wipe out all old stuff. Completely 
empty :)
Then create your collection and uploading your config at the same time:

bin/solr create -c mycollection -replicationFactor 2 -d /path/to/conf/folder

Then re-index your content and you're good to go. Do not bother with 
core.properties, zoo_data or any old cruft

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 29. mar. 2019 kl. 09:32 skrev vishal patel :
> 
> 
> If I delete the product_shard1_replica_n1 folder, then what about the data
> folder? Because it is inside this folder. Does it also need to be deleted?
> Do I need to back up the data folder or change the data directory?
> 
> Sent from Outlook
> 
> From: Zheng Lin Edwin Yeo 
> Sent: Friday, March 29, 2019 1:39 PM
> To: vishal patel
> Subject: Re: Upgrade Solr 8.0.0 issue
> 
> Yes.
> Also delete the product_shard1_replica_n1 folder under server\solr, so that 
> you can start everything from fresh from creating the collection.
> 
> Regards,
> Edwin
> 
> 
> On Fri, 29 Mar 2019 at 15:43, vishal patel 
> mailto:vishalpatel200...@outlook.com>> wrote:
> But if by mistake the zoo_data version-2 folder is deleted, should I upconfig
> and create the collection again?
> 
> Sent from Outlook
> 
> From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
> Sent: Friday, March 29, 2019 1:07 PM
> To: vishal patel
> Cc: solr-user@lucene.apache.org
> Subject: Re: Upgrade Solr 8.0.0 issue
> 
> Yes, if you have changes to solrconfig.xml or schema.xml, just upconfig and 
> reload the collection. Not necessary to restart Solr.
> 
> It's not necessary to change the index directory, but you can change it in the
> core.properties if you want to store the index elsewhere (Eg: in another
> drive).
> 
> Regards,
> Edwin
> 
> 
> On Fri, 29 Mar 2019 at 14:11, vishal patel 
> mailto:vishalpatel200...@outlook.com>> wrote:
> OK, got your point.
>
> I will create the collection again, but sometimes there will be changes in
> solrconfig.xml or schema.xml; then what should I do?
> In my opinion: upconfig again, start Solr, and reload the collection. Is
> that right?
> By mistake I removed the zoo_data version-2 folder from ZooKeeper and then
> did upconfig and started Solr.
> When I did upconfig and started Solr, an error came about coreNodeName. So
> I created the collection again, but my index data folder was overwritten. Is
> it necessary to change the index data directory?
> My existing data directory :
> F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
> product_shard1_replica_n1
>   --- data
>   --- core.properties
> 
> Note : I don't want to re-index after the change in Solrconfig.xml.
> 
> Sent from Outlook
> 
> From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
> Sent: Friday, March 29, 2019 11:00 AM
> To: vishal patel
> Cc: solr-user@lucene.apache.org
> Subject: Re: Upgrade Solr 8.0.0 issue
> 
> Usually I will create the collection again since I will re-index after the 
> upgrade. If I create the collection again, the new core.properties will be 
> created.
> 
> If you plan to use the same schema.xml, you have to check whether any classes
> have become deprecated, as some old classes usually get deprecated in a new
> version.
> 
> Regards,
> Edwin
> 
> On Fri, 29 Mar 2019 at 12:48, vishal patel 
> mailto:vishalpatel200...@outlook.com>> wrote:
> I will re-index with clean after the Solr 8.0.0 upgrade. Is it necessary to
> change core.properties?
> In Solr 6.0.0 I wrote only name, shard and collection in core.properties; I
> didn't write coreNodeName and collection.configName. To start Solr, I first
> deleted the zoo_data version-2 folder, ran the upconfig command, and then
> started Solr, and it worked successfully.
>
> In Solr 6.0.0 I never mentioned coreNodeName, but when I did upconfig it was
> automatically entered in ZooKeeper, and after the Solr start it worked fine.
>
> Why is it not working in Solr 8.0.0? Is it necessary to create the collection
> again using the admin GUI, or to change core.properties for Solr 8.0.0?
>
> Note: I don't want to copy data from Solr 6.1.0 to Solr 8.0.0; I will
> re-index after the upgrade. I want only the same schema.xml as in Solr 6.0.0.
> 
> Sent from Outlook
> 
> From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
> Sent: Friday, March 29, 2019 8:47 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Upgrade Solr 8.0.0 issue
> 
> Hi Vishal,
> 
> There could be problems with your index if you upgrade directly from Solr
> 6.1.0 to Solr 8.0.0, which is two major versions, as Solr only supports
> upgrading across one major version.
> 
> Regards,
> Edwin
> 
> On Thu, 28 Mar 2019 at 21:30, vishal patel 
> 

Re: How to implement security for solr admin page

2019-03-29 Thread Jan Høydahl
You are probably looking for authentication and authorization. The 
documentation has you covered:
https://lucene.apache.org/solr/guide/7_7/securing-solr.html 


--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 28. mar. 2019 kl. 21:14 skrev Jagannath Bilgi :
> 
> Hi Team,
> 
> Working solR search. Able to create schema and query and get results.
> 
> Problem:
> Any one having admin url would be able to read and write to solR.
> There by looking for some mechanism like access_key/userid/password
> etc to prevent unauthorized users to admin url.
> 
> Would you please suggest what would be the best solution.
> 
> Thanks and regards
> 
> Jagannath S Bilgi



Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread vishal patel

If I delete the product_shard1_replica_n1 folder, then what about the data
folder? Because it is inside this folder. Does it also need to be deleted?
Do I need to back up the data folder or change the data directory?

Sent from Outlook

From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 1:39 PM
To: vishal patel
Subject: Re: Upgrade Solr 8.0.0 issue

Yes.
Also delete the product_shard1_replica_n1 folder under server\solr, so that you 
can start everything from fresh from creating the collection.

Regards,
Edwin


On Fri, 29 Mar 2019 at 15:43, vishal patel 
mailto:vishalpatel200...@outlook.com>> wrote:
But if by mistake the zoo_data version-2 folder is deleted, should I upconfig
and create the collection again?

Sent from Outlook

From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
Sent: Friday, March 29, 2019 1:07 PM
To: vishal patel
Cc: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Yes, if you have changes to solrconfig.xml or schema.xml, just upconfig and 
reload the collection. Not necessary to restart Solr.

It's not necessary to change the index directory, but you can change it in the
core.properties if you want to store the index elsewhere (Eg: in another drive).

Regards,
Edwin


On Fri, 29 Mar 2019 at 14:11, vishal patel 
mailto:vishalpatel200...@outlook.com>> wrote:
OK, got your point.

I will create the collection again, but sometimes there will be changes in
solrconfig.xml or schema.xml; then what should I do?
In my opinion: upconfig again, start Solr, and reload the collection. Is
that right?
By mistake I removed the zoo_data version-2 folder from ZooKeeper and then
did upconfig and started Solr.
When I did upconfig and started Solr, an error came about coreNodeName. So I
created the collection again, but my index data folder was overwritten. Is it
necessary to change the index data directory?
My existing data directory :
F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
product_shard1_replica_n1
   --- data
   --- core.properties

Note : I don't want to re-index after the change in Solrconfig.xml.

Sent from Outlook

From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
Sent: Friday, March 29, 2019 11:00 AM
To: vishal patel
Cc: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Usually I will create the collection again since I will re-index after the 
upgrade. If I create the collection again, the new core.properties will be 
created.

If you plan to use the same schema.xml, you have to check whether any classes
have become deprecated, as some old classes usually get deprecated in a new
version.

Regards,
Edwin

On Fri, 29 Mar 2019 at 12:48, vishal patel 
mailto:vishalpatel200...@outlook.com>> wrote:
I will re-index with clean after the Solr 8.0.0 upgrade. Is it necessary to
change core.properties?
In Solr 6.0.0 I wrote only name, shard and collection in core.properties; I
didn't write coreNodeName and collection.configName. To start Solr, I first
deleted the zoo_data version-2 folder, ran the upconfig command, and then
started Solr, and it worked successfully.

In Solr 6.0.0 I never mentioned coreNodeName, but when I did upconfig it was
automatically entered in ZooKeeper, and after the Solr start it worked fine.

Why is it not working in Solr 8.0.0? Is it necessary to create the collection
again using the admin GUI, or to change core.properties for Solr 8.0.0?

Note: I don't want to copy data from Solr 6.1.0 to Solr 8.0.0; I will
re-index after the upgrade. I want only the same schema.xml as in Solr 6.0.0.

Sent from Outlook

From: Zheng Lin Edwin Yeo mailto:edwinye...@gmail.com>>
Sent: Friday, March 29, 2019 8:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Hi Vishal,

There could be problems with your index if you upgrade directly from Solr
6.1.0 to Solr 8.0.0, which is two major versions, as Solr only supports
upgrading across one major version.

Regards,
Edwin

On Thu, 28 Mar 2019 at 21:30, vishal patel 
mailto:vishalpatel200...@outlook.com>>
wrote:

> Hi
>
> I am upgrading solr 6.1.0 to 8.0.0. In solr 6.0.0 my folder structure below
> ---product
> ---conf
>  ---schema.xml
> ---solrconfig.xml
> ---core.properties
> ---solr.xml
>
> core.properties contains
> name=product
> shard=shard1
> collection=product
>
> upconfig command :
> zkcli.bat -cmd bootstrap -solrhome
> F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z 
> 192.168.100.222:3181,
> 192.168.100.222:3182,192.168.100.222:3183
>
> Note : In solr 6.1.0 , I did not create the product collection just copy
> from Solr 5.2.0 and changed solrconfig.xml.
>
> when I start the solr 6.1.0, 

Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread vishal patel
But what if the zoo_data version2 folder is deleted by mistake; should I then upconfig and create the collection again?



Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread Zheng Lin Edwin Yeo
Yes, if you have changes to solrconfig.xml or schema.xml, just upconfig and
reload the collection. It is not necessary to restart Solr.

It's not necessary to change the index directory, but you can set it in
core.properties if you want to store the index elsewhere (e.g. on
another drive).

Regards,
Edwin
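
For illustration, a minimal sketch of that upconfig-and-reload cycle, using the
zkcli script shipped with Solr and the Collections API RELOAD action; the
ZooKeeper address, config directory, config name and collection name below are
only examples:

zkcli.bat -zkhost 192.168.100.222:3181 -cmd upconfig -confname product -confdir F:\configs\product\conf
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=product"

The first command pushes the edited solrconfig.xml/schema.xml to ZooKeeper; the
second makes every replica of the collection re-open with the new config,
without restarting Solr.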




Re: IndexWriter has closed

2019-03-29 Thread Aroop Ganguly
Trying again. Any idea why this might happen?


> On Mar 27, 2019, at 10:43 PM, Aroop Ganguly  wrote:
> 
> Hi Everyone
> 
> My indexing jobs are failing with “this IndexWriter has closed” errors.
> This is a Solr 7.5 setup with an NRT index.
> 
> Deeper in the logs I see some of these exceptions.
> Any idea what could have caused this?
> 
> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: 
> java.io.IOException: Input/output error
>   at 
> org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:477)
>   at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:833)
>   at org.apache.solr.update.UpdateLog.preCommit(UpdateLog.java:817)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:669)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:93)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1959)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1935)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
>   at 
> org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:62)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:531)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
>   at 
> 

dataimport for full-import

2019-03-29 Thread 黄云尧
When I do a full-import it may take about 1 hour, but the old documents are 
deleted after about 10 minutes, which causes queries to return nothing during 
the import. Is there a way to control this so that the old documents are kept 
until later?
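
If it helps: by default a DataImportHandler full-import runs with clean=true,
which deletes the existing documents at the start of the import, so they
typically disappear at the first commit while the import is still running. A
minimal sketch with clean=false, which keeps the old documents until they are
overwritten (host and core name are placeholders):

curl "http://localhost:8983/solr/mycore/dataimport?command=full-import&clean=false&commit=true"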





Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread vishal patel
Ok, got your point.

I will create the collection again, but what should I do when there are later 
changes to solrconfig.xml or schema.xml?
In my opinion: upconfig again, start Solr, and reload the collection. Is that 
right?
By mistake I removed the zoo_data version2 folder from ZooKeeper and then ran 
upconfig and started Solr.
When I did the upconfig and started Solr, an error came up about coreNodeName. 
So I created the collection again, but my index data folder was overwritten. Is 
it necessary to change the index data directory?
My existing data directory:
F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
product_shard1_replica_n1
   --- data
   --- core.properties

Note: I don't want to re-index after a change in solrconfig.xml.
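
For reference, a minimal core.properties sketch along those lines with an
explicit index location; every value here is only an example, not taken from a
real setup:

name=product_shard1_replica_n1
shard=shard1
collection=product
coreNodeName=core_node1
dataDir=F:\SolrIndexes\product_shard1_replica_n1\data

dataDir is optional; when it is absent, the index lives in the data folder next
to core.properties, as in the layout above.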


From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 11:00 AM
To: vishal patel
Cc: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Usually I will create the collection again since I will re-index after the 
upgrade. If I create the collection again, the new core.properties will be 
created.

If you plan to use the same schema.xml, you have to check whether any classes 
have become deprecated, as old classes often get deprecated in the new 
version.

Regards,
Edwin
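
As a sketch of that create-again step, assuming the config set has already been
uploaded to ZooKeeper under the name product (collection name, shard and
replica counts are placeholders):

curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=product&numShards=2&replicationFactor=1&collection.configName=product"

Creating the collection this way generates fresh core.properties files,
including coreNodeName, on the assigned nodes.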

On Fri, 29 Mar 2019 at 12:48, vishal patel <vishalpatel200...@outlook.com> wrote:
I will re-index from clean after the Solr 8.0.0 upgrade. Is it necessary to 
change core.properties?
In Solr 6.0.0 I wrote only name, shard and collection in core.properties; I 
didn't write coreNodeName or collection.configName. To start Solr, I first 
deleted the zoo_data version-2 folder, ran the upconfig command, then started 
Solr, and it worked successfully.

In Solr 6.0.0 I never mentioned coreNodeName, but when I did upconfig it was 
entered automatically in ZooKeeper, and after Solr started it worked fine.

Why is it not working in Solr 8.0.0? Is it necessary to create the collection 
again using the admin GUI, or to change core.properties for Solr 8.0.0?

Note: I don't want to copy data from Solr 6.1.0 to Solr 8.0.0; I will 
re-index after the upgrade. I only want the same schema.xml as in Solr 6.0.0.


From: Zheng Lin Edwin Yeo <edwinye...@gmail.com>
Sent: Friday, March 29, 2019 8:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Hi Vishal,

There could be problems with your index if you upgrade directly from Solr
6.1.0 to Solr 8.0.0, which is two major versions, as Solr only supports
upgrading across one major version at a time.

Regards,
Edwin

On Thu, 28 Mar 2019 at 21:30, vishal patel <vishalpatel200...@outlook.com> wrote:

> Hi
>
> I am upgrading Solr 6.1.0 to 8.0.0. In Solr 6.0.0 my folder structure is as below:
> ---product
> ---conf
>  ---schema.xml
> ---solrconfig.xml
> ---core.properties
> ---solr.xml
>
> core.properties contains
> name=product
> shard=shard1
> collection=product
>
> upconfig command :
> zkcli.bat -cmd bootstrap -solrhome
> F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z 
> 192.168.100.222:3181,
> 192.168.100.222:3182,192.168.100.222:3183
>
> Note: In Solr 6.1.0 I did not create the product collection; I just copied it
> from Solr 5.2.0 and changed solrconfig.xml.
>
> When I start Solr 6.1.0, the product collection is created automatically and
> I can also access it in the admin GUI.
> But in Solr 8.0.0 the collection is not created automatically and an error
> came. I used the same core.properties.
> Why is it not working?
>
> Regards,
> Vishal
>