Re: CPU usage goes high when indexing one document in Solr 8.0.0

2019-04-10 Thread vishal patel
Thanks for your reply.

Yes, it is the same problem as SOLR-13349.

We will wait for Solr 8.1.

Sent from Outlook

From: Erick Erickson 
Sent: Wednesday, April 10, 2019 8:41 PM
To: solr-user@lucene.apache.org
Subject: Re: CPU usage goes high when indexing one document in Solr 8.0.0

Possibly SOLR-13349?

> On Apr 9, 2019, at 11:41 PM, vishal patel  
> wrote:
>
> I am upgrading from Solr 6.1.0 to 8.0.0. When I indexed only one document in
> Solr 8.0.0, CPU usage stayed high for some time even after indexing was done.
> I noticed that autoCommit maxTime was 60 in the solrconfig.xml of Solr
> 6.1.0, and I used the same value in the solrconfig.xml of Solr 8.0.0.
> After I replaced it with ${solr.autoCommit.maxTime:15000}, taken from the
> default collection, it works fine and CPU usage does not stay high for long.
>
> I have attached my solrconfig.xml files for 6.1.0 and 8.0.0. I have made some
> changes in solrconfig.xml, so please tell me whether any further changes are
> needed.
>
> Sent from Outlook
> 



Re: Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-10 Thread vishal patel
Thanks for your reply.

All 4 CPU cores went high for 12 to 15 seconds. We use Java 8.

I got your point. We will wait for Solr 8.1 rather than upgrading to OpenJDK 11.

Sent from Outlook

From: Shawn Heisey 
Sent: Wednesday, April 10, 2019 9:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.0.0 - CPU usage 100% when indexed documents

On 4/9/2019 10:53 PM, vishal patel wrote:
> My CPU usage still went high; my CPU has 4 cores and no other application is
> running on my machine.

I was asking how many CPUs went to 100 percent, not how many CPUs you
have.  And I also asked how long CPU usage remains at 100 percent after
indexing a single document.

What Java version are you running?  We do have a possible bug that could
be affecting you.

https://issues.apache.org/jira/browse/SOLR-13349

If this is the problem you're experiencing, the solution would be to
either upgrade to Java 11 or wait for Solr 8.1 to be released.

Note that Oracle requires payment if you use their Java 11 in
production.  You're likely going to want to use OpenJDK.

Thanks,
Shawn


Unable to start solr-exporter on branch_7x

2019-04-10 Thread Karl Stoney
Hi,
I’m getting the following error when trying to start `solr-exporter` on branch
`branch_7x`.

INFO  - 2019-04-10 23:36:10.872; org.apache.solr.core.SolrResourceLoader; solr 
home defaulted to 'solr/' (could not find system property or JNDI)
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/lucene/util/IOUtils
at 
org.apache.solr.core.SolrResourceLoader.close(SolrResourceLoader.java:881)
at 
org.apache.solr.prometheus.exporter.SolrExporter.loadMetricsConfiguration(SolrExporter.java:221)
at 
org.apache.solr.prometheus.exporter.SolrExporter.main(SolrExporter.java:205)
Caused by: java.lang.ClassNotFoundException: org.apache.lucene.util.IOUtils
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 3 more

Any ideas?


Re: CDCR one source multiple targets

2019-04-10 Thread Arnold Bronley
This had a very simple solution, in case anybody else is wondering about the
same issue: I had to define separate replica elements inside the cdcr request
handler. The following is an example.

  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
    <lst name="replica">
      <str name="zkHost">target1:2181</str>
      <str name="source">techproducts</str>
      <str name="target">techproducts</str>
    </lst>
    <lst name="replica">
      <str name="zkHost">target2:2181</str>
      <str name="source">techproducts</str>
      <str name="target">techproducts</str>
    </lst>
    <lst name="replicator">
      <str name="threadPoolSize">8</str>
      <str name="schedule">1000</str>
      <str name="batchSize">128</str>
    </lst>
    <lst name="updateLogSynchronizer">
      <str name="schedule">1000</str>
    </lst>
    <lst name="buffer">
      <str name="defaultState">disabled</str>
    </lst>
  </requestHandler>

On Thu, Mar 21, 2019 at 10:40 AM Arnold Bronley 
wrote:

> I see a similar question asked there, but no answers either.
> http://lucene.472066.n3.nabble.com/CDCR-Replication-from-one-source-to-multiple-targets-td4308717.html
> The OP there is using multiple CDCR request handlers, but in my case I am
> using multiple zkHost strings. It would be pretty limiting if we cannot use
> CDCR for a one-source, multiple-target cluster situation.
> Can somebody please confirm whether this is even supported?
>
>
> On Wed, Mar 20, 2019 at 1:12 PM Arnold Bronley 
> wrote:
>
>> Hi,
>>
>> is it possible to use CDCR with one source SolrCloud cluster and multiple
>> target SolrCloud clusters? I tried to edit the zkHost setting in source
>> cluster's solrconfig file by adding multiple comma separated values for
>> target zkhosts for multuple target clusters. But the CDCR replication
>> happens only to one of the zkhosts and not all. If this is not supported
>> then how should I go about implementing something like this?
>>
>>
>


Re: Is it possible to configure Solr to show time stamps without the 'Z' character at the end

2019-04-10 Thread Jan Høydahl
Perhaps an UpdateProcessor is what you need?
https://lucene.apache.org/solr/7_7_0//solr-core/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.html
https://lucene.apache.org/solr/guide/7_7/update-request-processors.html

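Jan's suggestion could be sketched as a solrconfig.xml fragment like the one below; the chain name, date formats, and time zone are illustrative assumptions, not taken from the thread:

```xml
<!-- Sketch: parse local-time stamps at index time.
     Chain name, formats, and defaultTimeZone are illustrative. -->
<updateRequestProcessorChain name="parse-local-dates">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <str name="defaultTimeZone">Europe/Helsinki</str>
    <arr name="format">
      <str>yyyy-MM-dd HH:mm:ss</str>
      <str>yyyy-MM-dd'T'HH:mm:ss</str>
    </arr>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain would then have to be referenced from the update handler, e.g. as an update.chain default on the /update request handler.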
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 8. apr. 2019 kl. 12:38 skrev Miettinen Jaana (STAT) :
> 
> Dear recipient,
>
> I have a problem in Solr: I need to add several (old) time stamps to my Solr
> documents, but all of them are in local time (UTC+2 or UTC+3, depending on
> the daylight-saving situation). By default, Solr expects all time stamps to
> be in UTC and appends the 'Z' character to the end of the time stamp strings
> to indicate that the date should be interpreted as UTC.
>
> Is it possible to change this 'Z' notation? I would either like to get rid
> of the 'Z' or change it to denote UTC+2.
>
> I noticed that there is a SOLR_TIMEZONE variable in the
> solr-7.6.0/bin/solr.in.sh file. I changed it to SOLR_TIMEZONE="EST" and
> re-created my Solr servers, but nothing changed. Why was that configuration
> file ignored (I also changed the port to check whether it was really
> ignored)? And what is the purpose of the SOLR_TIMEZONE variable?
>
> Br, Jaana Miettinen



Question for separate query and updates with TLOG and PULL replicas

2019-04-10 Thread Wei
Hi,

I have a question about how to completely separate queries and updates in a
cluster of mixed TLOG and PULL replicas.

Solr cloud setup: Solr 7.6.0, 10 shards, each shard with 2 TLOG + 4 PULL
replicas.
In solrconfig.xml we set the preferred replica type for queries to PULL:

  <str name="shards.preference">replica.type:PULL</str>


A load-balancer is set up in front of the solr cloud, including both TLOG
and PULL replicas.  Also we use a http client for queries.  Some
observations:

1.  The TLOG replicas see about the same number of external queries in the
jetty access log. That is expected, as our load balancer does not
differentiate between TLOG and PULL replicas. My question is: when a TLOG
replica receives an external query, will it forward it to one of the PULL
replicas? Or will it send the shard requests to PULL replicas but still
serve as the aggregator node for the query?

2.  The TLOG replicas still receive some internal shard requests, but in much
lower volume compared to the PULL replicas. I checked one leader TLOG
replica; its number of shard requests is about 1% of that on the PULL
replicas in the same shard. With shards.preference=replica.type:PULL, why
would the TLOG replicas receive any internal shard requests?

To completely separate queries and updates, I think I might need to set up the
load balancer to include only the PULL replicas. Is there any other option?

Thanks,
Wei


Re: gatherNodes question. Is this a bug?

2019-04-10 Thread Joel Bernstein
What you're trying to do should work. If you provide more detail, such as the
full query with some sample outputs, I might be able to see what the issue is.

Joel Bernstein
http://joelsolr.blogspot.com/


On Wed, Apr 10, 2019 at 10:55 AM Kojo  wrote:

> Hello everybody, I have a question about Streaming Expression/Graph
> Traversal.
>
> The following pseudocode works fine:
>
> complement( search(),
> sort(
> gatherNodes( collection, search())
> ),
> )
>
>
> However, when I feed the SE result set above into another gatherNodes
> function, I get a result different from what I expected. It returns the
> root nodes (branches) of the inner gatherNodes:
>
> gatherNodes( collection,
> complement( search(),
> sort(
>gatherNodes( collection, search())
> ),
> ),
> )
>
> In the case I tested, the outer gatherNodes does not have leaves. I expected
> the result of the "complement" function to become the root nodes of the
> outer gatherNodes function. Do you know how I can achieve this?
>
> Thank you,
>


Re: var, stddev Streaming Evaluators.

2019-04-10 Thread Joel Bernstein
They currently are not. You can use describe() to get these values, and
getValue() if you want a specific value:

 let(arr=array(1,3,3), m=describe(arr), s=getValue(m, stdev))

It makes sense to add these on their own as well.



Joel Bernstein
http://joelsolr.blogspot.com/


On Wed, Apr 10, 2019 at 11:13 AM Nazerke S  wrote:

> Hi,
>
> I have got a question about Streaming Expression evaluators.
> I would like to calculate mean, standard deviation and variance of the
> given array.
>
> For example, the following code works for the mean:
>    let(arr=array(1,3,3), m=mean(arr))
>
> Also, I want to compute variance and standard deviation as well, i.e.:
>  let(echo="m,v,sd", arr=array(1,3,3), m=mean(arr), v=var(arr),
> sd=stddev(arr))
>
> It seems the var() and stddev() evaluator functions are not implemented as
> separate functions?
>
> __Nazerke
>


Re: How to configure default replication type?

2019-04-10 Thread Tulsi
Hi Roger,
Have you tried the shards.preference parameter? You can specify replica.type
as TLOG or PULL (the default is NRT) in solrconfig.xml using this parameter.

Example:
shards.preference=replica.type:TLOG


Note: the earlier preferLocalShards parameter has been deprecated in favor of
shards.preference.
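As a sketch, the parameter could be set as a query default in solrconfig.xml; the handler shown below is an assumption, not from the original post:

```xml
<!-- Sketch: make TLOG the preferred replica type for all /select queries.
     Handler name and placement are illustrative. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards.preference">replica.type:TLOG</str>
  </lst>
</requestHandler>
```

The same parameter can also be passed per request on the query string instead of being baked into the config.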



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-10 Thread Shawn Heisey

On 4/9/2019 10:53 PM, vishal patel wrote:

My CPU usage still went high; my CPU has 4 cores and no other application is
running on my machine.


I was asking how many CPUs went to 100 percent, not how many CPUs you 
have.  And I also asked how long CPU usage remains at 100 percent after 
indexing a single document.


What Java version are you running?  We do have a possible bug that could 
be affecting you.


https://issues.apache.org/jira/browse/SOLR-13349

If this is the problem you're experiencing, the solution would be to 
either upgrade to Java 11 or wait for Solr 8.1 to be released.


Note that Oracle requires payment if you use their Java 11 in 
production.  You're likely going to want to use OpenJDK.


Thanks,
Shawn


var, stddev Streaming Evaluators.

2019-04-10 Thread Nazerke S
Hi,

I have got a question about Streaming Expression evaluators.
I would like to calculate mean, standard deviation and variance of the
given array.

For example, the following code works for the mean:
   let(arr=array(1,3,3), m=mean(arr))

Also, I want to compute variance and standard deviation as well, i.e.:
 let(echo="m,v,sd", arr=array(1,3,3), m=mean(arr), v=var(arr),
sd=stddev(arr))

It seems the var() and stddev() evaluator functions are not implemented as
separate functions?

__Nazerke


Re: CPU usage goes high when indexing one document in Solr 8.0.0

2019-04-10 Thread Erick Erickson
Possibly SOLR-13349?

> On Apr 9, 2019, at 11:41 PM, vishal patel  
> wrote:
> 
> I am upgrading from Solr 6.1.0 to 8.0.0. When I indexed only one document in
> Solr 8.0.0, CPU usage stayed high for some time even after indexing was done.
> I noticed that autoCommit maxTime was 60 in the solrconfig.xml of Solr
> 6.1.0, and I used the same value in the solrconfig.xml of Solr 8.0.0.
> After I replaced it with ${solr.autoCommit.maxTime:15000}, taken from the
> default collection, it works fine and CPU usage does not stay high for long.
> 
> I have attached my solrconfig.xml files for 6.1.0 and 8.0.0. I have made some
> changes in solrconfig.xml, so please tell me whether any further changes are
> needed.
> 
> Sent from Outlook
> 



Re: Solr exception: java.lang.IllegalStateException: unexpected docvalues type NUMERIC for field 'weight' (expected one of [BINARY, NUMERIC, SORTED, SORTED_NUMERIC, SORTED_SET]). Re-index with correct

2019-04-10 Thread Erick Erickson
"Re-index with correct docvalues", i.e. define weight to have docValues=true in
your schema. WARNING: you have to completely get rid of your current data; I'd
recommend starting with a new collection.
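For illustration only, the schema change Erick describes might look like this; the field type shown is an assumption, so keep whatever numeric type 'weight' already uses:

```xml
<!-- Illustrative: enable docValues on the boost field, then start from a
     clean collection and reindex everything. -->
<field name="weight" type="pfloat" indexed="true" stored="true" docValues="true"/>
```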

> On Apr 10, 2019, at 12:21 AM, Alex Broitman  
> wrote:
> 
> We got the Solr exception when searching in Solr:
>  
> SolrNet.Exceptions.SolrConnectionException: Solr returned an HTTP 500 XML
> error response. Key response details:
>
>   status=500, QTime=160
>   q=+(Dashboard Dashboard*), defType=edismax, qf=name nls_NAME___en-us,
>   boost=product(sum(1,product(norm(acl_i),termfreq(acl_i,5164077))),if(exists(weight),weight,1))
>
>   msg: unexpected docvalues type NUMERIC for field 'weight' (expected one of
>   [BINARY, NUMERIC, SORTED, SORTED_NUMERIC, SORTED_SET]). Re-index with
>   correct docvalues type.
>   trace: java.lang.IllegalStateException: unexpected docvalues type NUMERIC
>   for field 'weight' (expected one of [BINARY, NUMERIC, SORTED,
>   SORTED_NUMERIC, SORTED_SET]). Re-index with correct docvalues type.
> at 
> org.apache.lucene.index.DocValues.checkField(DocValues.java:212)
> at 
> org.apache.lucene.index.DocValues.getDocsWithField(DocValues.java:324)
> at 
> org.apache.lucene.queries.function.valuesource.FloatFieldSource.getValues(FloatFieldSource.java:56)
> at 
> org.apache.lucene.queries.function.valuesource.SimpleBoolFunction.getValues(SimpleBoolFunction.java:48)
> at 
> org.apache.lucene.queries.function.valuesource.SimpleBoolFunction.getValues(SimpleBoolFunction.java:35)
> at 
> org.apache.lucene.queries.function.valuesource.IfFunction.getValues(IfFunction.java:47)
> at 
> org.apache.lucene.queries.function.valuesource.MultiFloatFunction.getValues(MultiFloatFunction.java:76)
> at 
> org.apache.lucene.queries.function.BoostedQuery$CustomScorer.(BoostedQuery.java:124)
> at 
> org.apache.lucene.queries.function.BoostedQuery$CustomScorer.(BoostedQuery.java:114)
> at 
> org.apache.lucene.queries.function.BoostedQuery$BoostedWeight.scorer(BoostedQuery.java:98)
> at 
> org.apache.lucene.search.Weight.scorerSupplier(Weight.java:126)
> at 
> org.apache.lucene.search.BooleanWeight.scorerSupplier(BooleanWeight.java:400)
> at 
> org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:381)
> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:160)
> at 
> org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:375)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:665)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:217)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1582)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1399)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:566)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:545)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler

Re: Shard and replica went down in Solr 6.1.0

2019-04-10 Thread Erick Erickson
Solr will not go down due to a “performance warning”. All that means is that
you’re opening too many searchers in a short time. If you’re somehow opening a
huge number of searchers, then maybe, but IIRC you can’t really open more than
two. And don’t think you can “fix” this by upping maxWarmingSearchers; that’ll
only make the problem worse, as every new searcher chews up memory that’s kept
until the old searcher is done serving outstanding requests.

Which is weird, because with those settings you shouldn’t be opening _any_ new
searchers. So my guess is that some external client is doing that, and this is
usually an anti-pattern. Don’t do it, please. Just set your soft commit to what
you need and leave it at that, as long as possible. Here are all the gory
details:
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

I’d also not bother with maxDocs; that’s much less predictable than just
setting a reasonable time for autocommit. A minute or two is usually
reasonable, especially when openSearcher is false.
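The advice above might translate into solrconfig.xml roughly as follows; the interval values are illustrative assumptions, not recommendations from the thread:

```xml
<!-- Hard commit on a timer, without opening a new searcher. -->
<autoCommit>
  <maxTime>60000</maxTime>            <!-- e.g. one minute -->
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit controls search visibility; keep the interval as long as
     your application can tolerate. -->
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:300000}</maxTime>
</autoSoftCommit>
```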

Not sure what’s really generating that error; take a look at all your other
Solr logs to see if there’s a cause.

Best,
Erick


> On Apr 10, 2019, at 5:21 AM, vishal patel  
> wrote:
> 
> I have 2 shards and 2 replicas in Solr 6.1.0. One shard and one replica went
> down, and I got the ERROR below:
> 
> 2019-04-08 12:54:01.469 INFO  (commitScheduler-131-thread-1) [c:products 
> s:shard1 r:core_node1 x:product1] o.a.s.s.SolrIndexSearcher Opening 
> [Searcher@24b9127f[product1] main]
> 2019-04-08 12:54:01.468 INFO  (commitScheduler-110-thread-1) [c:product2 
> s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy 
> SolrDeletionPolicy.onCommit: commits: num=2
> commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he5,generation=22541}
> commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he6,generation=22542}
> 2019-04-08 12:54:01.556 INFO  (commitScheduler-110-thread-1) [c:product2 
> s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy newest commit 
> generation = 22542
> 2019-04-08 12:54:01.465 WARN (commitScheduler-136-thread-1) [c:product3 
> s:shard1 r:core_node1 x:product3] o.a.s.c.SolrCore [product3] PERFORMANCE 
> WARNING: Overlapping onDeckSearchers=2
> 
> 2019-04-08 12:54:01.534 ERROR 
> (updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
>  x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
> [c:product3 s:shard1 r:core_node1 x:product3] o.a.s.u.StreamingSolrClients 
> error
> org.apache.solr.common.SolrException: Service Unavailable
> 
> request: 
> http://10.101.111.80:8983/solr/product3/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F10.102.119.85%3A8983%2Fsolr%2Fproduct3%2F&wt=javabin&version=2
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:320)
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:185)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3/30175207.run(Unknown
>  Source)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 
> Note : product1,product2 and product3 are my collection.
> 
> In my solrconfig.xml
> 
> <autoCommit>
>   <maxTime>60</maxTime>
>   <maxDocs>2</maxDocs>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
> </autoSoftCommit>
>
> <maxWarmingSearchers>2</maxWarmingSearchers>
> 
> Many documents were being committed at that time, and I found many
> commitScheduler threads in the log.
> Could Solr have gone down due to the warning "PERFORMANCE WARNING:
> Overlapping onDeckSearchers=2"?
> Do I need to update my autoCommit settings or maxWarmingSearchers?
> 
> Sent from Outlook



gatherNodes question. Is this a bug?

2019-04-10 Thread Kojo
Hello everybody, I have a question about Streaming Expression/Graph
Traversal.

The following pseudocode works fine:

complement( search(),
sort(
gatherNodes( collection, search())
),
)


However, when I feed the SE result set above into another gatherNodes
function, I get a result different from what I expected. It returns the
root nodes (branches) of the inner gatherNodes:

gatherNodes( collection,
complement( search(),
sort(
   gatherNodes( collection, search())
),
),
)

In the case I tested, the outer gatherNodes does not have leaves. I expected
the result of the "complement" function to become the root nodes of the
outer gatherNodes function. Do you know how I can achieve this?

Thank you,


Re: How to prevent solr from deleting cores when getting an empty config from zookeeper

2019-04-10 Thread Gus Heck
Deleting data on a ZooKeeper hiccup does sound bad if it's really Solr's
fault. Can you work up a set of steps to reproduce? Something like: install
Solr, index the techproducts example, shut down Solr, perform some editing in
ZK, start Solr, observe data gone (but with lots of details about exact
configurations/commands/edits, etc.)?

"some sort of split brain" is nebulous and nobody will know if they've
solved your problem unless that can be quantified and the problem
replicated.

-Gus

On Tue, Apr 9, 2019 at 1:37 PM Koen De Groote 
wrote:

> Hello,
>
> I recently ran in to the following scenario:
>
> Solr, version 7.5, in a docker container, running as cloud, with an
> external zookeeper ensemble of 3 zookeepers. Instructions were followed to
> make a root first, this was set correctly, as could be seen by the solr
> logs outputting the connect info.
>
> root command is: "bin/solr zk mkroot /solr -z "
>
> For a yet undetermined reason, the zookeeper ensemble had some kind of
> split-brain occur. At a later point, Solr was restarted and then suddenly
> all its directories were gone.
>
> By which I mean: the directories containing the configuration and the data.
> The stopwords, the schema, the solr config, the "shard1_replica_n2"
> directories, those directories.
>
> Those were gone without a trace.
>
> As far as I can tell, solr started, asked zookeeper for its config,
> zookeeper returned an empty config and consequently "made it so".
>
> I am by no means very knowledgeable about solr internals. Can anyone chime
> in as to what happened here and how to prevent it? Is more info needed?
>
> Ideally, if something like this were to happen, I'd like for either solr to
> not delete folders or if that's not possible, add some kind of pre-startup
> check that stops solr from going any further if things go wrong.
>
> Regards,
> Koen
>


-- 
http://www.the111shift.com


Shard and replica went down in Solr 6.1.0

2019-04-10 Thread vishal patel
I have 2 shards and 2 replicas in Solr 6.1.0. One shard and one replica went
down, and I got the ERROR below:

2019-04-08 12:54:01.469 INFO  (commitScheduler-131-thread-1) [c:products 
s:shard1 r:core_node1 x:product1] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@24b9127f[product1] main]
2019-04-08 12:54:01.468 INFO  (commitScheduler-110-thread-1) [c:product2 
s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy 
SolrDeletionPolicy.onCommit: commits: num=2
commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he5,generation=22541}
commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he6,generation=22542}
2019-04-08 12:54:01.556 INFO  (commitScheduler-110-thread-1) [c:product2 
s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy newest commit 
generation = 22542
2019-04-08 12:54:01.465 WARN (commitScheduler-136-thread-1) [c:product3 
s:shard1 r:core_node1 x:product3] o.a.s.c.SolrCore [product3] PERFORMANCE 
WARNING: Overlapping onDeckSearchers=2

2019-04-08 12:54:01.534 ERROR 
(updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
 x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
[c:product3 s:shard1 r:core_node1 x:product3] o.a.s.u.StreamingSolrClients error
org.apache.solr.common.SolrException: Service Unavailable

request: 
http://10.101.111.80:8983/solr/product3/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F10.102.119.85%3A8983%2Fsolr%2Fproduct3%2F&wt=javabin&version=2
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:320)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:185)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3/30175207.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Note : product1,product2 and product3 are my collection.

In my solrconfig.xml

<autoCommit>
  <maxTime>60</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<maxWarmingSearchers>2</maxWarmingSearchers>

Many documents were being committed at that time, and I found many
commitScheduler threads in the log.
Could Solr have gone down due to the warning "PERFORMANCE WARNING:
Overlapping onDeckSearchers=2"?
Do I need to update my autoCommit settings or maxWarmingSearchers?

Sent from Outlook


Solr exception: java.lang.IllegalStateException: unexpected docvalues type NUMERIC for field 'weight' (expected one of [BINARY, NUMERIC, SORTED, SORTED_NUMERIC, SORTED_SET]). Re-index with correct doc

2019-04-10 Thread Alex Broitman
We got the Solr exception when searching in Solr:

SolrNet.Exceptions.SolrConnectionException: Solr returned an HTTP 500 XML
error response. Key response details:

  status=500, QTime=160
  q=+(Dashboard Dashboard*), defType=edismax, qf=name nls_NAME___en-us,
  boost=product(sum(1,product(norm(acl_i),termfreq(acl_i,5164077))),if(exists(weight),weight,1))

  msg: unexpected docvalues type NUMERIC for field 'weight' (expected one of
  [BINARY, NUMERIC, SORTED, SORTED_NUMERIC, SORTED_SET]). Re-index with
  correct docvalues type.
  trace: java.lang.IllegalStateException: unexpected docvalues type NUMERIC
  for field 'weight' (expected one of [BINARY, NUMERIC, SORTED,
  SORTED_NUMERIC, SORTED_SET]). Re-index with correct docvalues type.
at 
org.apache.lucene.index.DocValues.checkField(DocValues.java:212)
at 
org.apache.lucene.index.DocValues.getDocsWithField(DocValues.java:324)
at 
org.apache.lucene.queries.function.valuesource.FloatFieldSource.getValues(FloatFieldSource.java:56)
at 
org.apache.lucene.queries.function.valuesource.SimpleBoolFunction.getValues(SimpleBoolFunction.java:48)
at 
org.apache.lucene.queries.function.valuesource.SimpleBoolFunction.getValues(SimpleBoolFunction.java:35)
at 
org.apache.lucene.queries.function.valuesource.IfFunction.getValues(IfFunction.java:47)
at 
org.apache.lucene.queries.function.valuesource.MultiFloatFunction.getValues(MultiFloatFunction.java:76)
at 
org.apache.lucene.queries.function.BoostedQuery$CustomScorer.(BoostedQuery.java:124)
at 
org.apache.lucene.queries.function.BoostedQuery$CustomScorer.(BoostedQuery.java:114)
at 
org.apache.lucene.queries.function.BoostedQuery$BoostedWeight.scorer(BoostedQuery.java:98)
at 
org.apache.lucene.search.Weight.scorerSupplier(Weight.java:126)
at 
org.apache.lucene.search.BooleanWeight.scorerSupplier(BooleanWeight.java:400)
at 
org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:381)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:160)
at 
org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:375)
at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:665)
at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:472)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:217)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1582)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1399)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:566)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:545)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.ja

Solr 7.5 and ZooKeeper 3.4.14

2019-04-10 Thread Hari Nakka
We have a Solr 7.5 cloud setup with an external ZooKeeper ensemble running
3.4.11 (as it is the recommended version bundled with the distribution).
I read that newer ZooKeeper versions are backward compatible.
Can we upgrade it to 3.4.14? Have any issues been reported?