[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 344 - Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/344/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:61096/solr/awhollynewcollection_0_shard2_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:61096/solr/awhollynewcollection_0_shard2_replica_n2), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:61096/solr/awhollynewcollection_0_shard2_replica_n2: 
ClusterState says we are the leader 
(http://127.0.0.1:61096/solr/awhollynewcollection_0_shard2_replica_n2), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([212C3DF6229AD48D:6959494224A9FB18]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 89 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/89/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at https://127.0.0.1:39089/solr/stale_state_test_col: No 
registered leader was found after waiting for 4000ms , collection: 
stale_state_test_col slice: shard1 saw 
state=DocCollection(stale_state_test_col//collections/stale_state_test_col/state.json/9)={
   "pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   
"replicas":{"core_node4":{   
"core":"stale_state_test_col_shard1_replica_n3",   
"base_url":"https://127.0.0.1:39089/solr;,   
"node_name":"127.0.0.1:39089_solr",   "state":"active",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0"} with 
live_nodes=[127.0.0.1:39089_solr, 127.0.0.1:46123_solr, 127.0.0.1:43071_solr]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:39089/solr/stale_state_test_col: No registered 
leader was found after waiting for 4000ms , collection: stale_state_test_col 
slice: shard1 saw 
state=DocCollection(stale_state_test_col//collections/stale_state_test_col/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{"core_node4":{
  "core":"stale_state_test_col_shard1_replica_n3",
  "base_url":"https://127.0.0.1:39089/solr;,
  "node_name":"127.0.0.1:39089_solr",
  "state":"active",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"} with live_nodes=[127.0.0.1:39089_solr, 
127.0.0.1:46123_solr, 127.0.0.1:43071_solr]
at 
__randomizedtesting.SeedInfo.seed([8CD8FA161FA74085:38E962FEFC4E36A9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1018 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1018/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=7441, name=jetty-launcher-939-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=7443, name=jetty-launcher-939-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=7441, name=jetty-launcher-939-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297890#comment-16297890
 ] 

Hoss Man commented on SOLR-11739:
-

i have no strong opinions ... at a high level i think the approach you describe 
(i didn't review the patch) is fine from a stability/reliability and back 
compat standpoint, but I have no idea how it affects performance (although it 
should go w/o saying that i favor stability/reliability over performance when 
discussing "admin" actions)

To clarify one comment where i suspect we may have been miscommunicating...

{quote}
bq. Solr should happily let you specify the same asyncId multiple times

hmm I'm not sure. I think the fact that Solr won't re-execute the same request 
twice makes it much easier to write workflows that do collection operations. 
{quote}

I feel like my suggestion was orthogonal to the concern you raised in 
response.  What i was suggesting is that, in theory, Solr could be agnostic to 
the asyncId and *only* use it for tracking results.  Example: if these 3 
commands come in...

* asyncId=222&action=CREATESOMETHING
* asyncId=1&action=DOSOMETHINGELSE
* asyncId=222&action=DOWHATEVER

...then Solr could in fact execute those 3 commands, in that sequence, 
reliably, and w/o any risk of any of them being executed more or less than 
exactly 1 time.  The fact that 2 of them have the same asyncId should in no way 
impact Solr's execution of those commands.  The only impact that the duplicated 
asyncId should have is that once the DOWHATEVER cmd is executed, it will no 
longer be possible for a client to ever check the status of the 
CREATESOMETHING, because the results of the DOWHATEVER command should overwrite 
the results of the CREATESOMETHING ... that could be solr's *sole* internal use 
of the asyncId

(ie: only track all commands to process in an ordered queue, use the asyncId 
only in a map we don't care about internally, other than when a user asks for 
status info)

that was the crux of my suggestion ... assuming we want a policy of solr being 
"agnostic" to duplicate asyncIds in order to reduce the zk overhead of 
tracking/rejecting dups.  (But i don't feel strongly about it)
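
A minimal, purely illustrative sketch of that queue-plus-status-map idea 
(hypothetical names; in Solr the queue and the results would live in ZooKeeper, 
not in memory):

{code:java}
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Hypothetical sketch only: asyncId never gates execution, it only keys status lookups. */
class AsyncCommandTracker {
  static final class Command {
    final String asyncId;
    final String action;
    Command(String asyncId, String action) { this.asyncId = asyncId; this.action = action; }
  }

  // Ordered queue: every submitted command runs exactly once, duplicate asyncIds included.
  private final Queue<Command> pending = new ConcurrentLinkedQueue<>();
  // Status map: a later command with the same asyncId simply overwrites the earlier result.
  private final Map<String, String> statusByAsyncId = new ConcurrentHashMap<>();

  void submit(String asyncId, String action) {
    pending.add(new Command(asyncId, action));
  }

  void runNext() {
    Command cmd = pending.poll();
    if (cmd == null) {
      return;
    }
    String result = "completed: " + cmd.action;   // stand-in for real execution
    statusByAsyncId.put(cmd.asyncId, result);     // only use of asyncId: status tracking
  }

  String requestStatus(String asyncId) {
    // After DOWHATEVER runs, the result stored under "222" is DOWHATEVER's,
    // so CREATESOMETHING's status is no longer retrievable.
    return statusByAsyncId.get(asyncId);
  }
}
{code}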


> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch, SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11769) Sorting performance degrades when useFilterForSortedQuery is enabled and there is no filter query specified

2017-12-19 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11769:

Attachment: SOLR-11769_Optimize_MatchAllDocsQuery_more.patch

I couldn't help myself today; I dug deeper here to figure out why there would 
be a slow-down even without the above change.  The result is a broader 
improvement for match-all-docs (*:*) scenarios that is not specific to the 
particular useFilterForSortedQuery situation above.  Essentially, 
SolrIndexSearcher.getProcessedFilter would return an empty pf if there are no 
queries or filters (the semantics mean match-all-docs).  But there is the 
"answer" field that could be populated with getLiveDocs, and pf.answer is 
examined by getDocSet so it can return early.  I also optimized 
DocSetUtil.createDocSet to check that the query arg is a MatchAllDocsQuery and 
that the live docs are "instantiated", allowing us to return them directly.

The only quirky thing about this was a test failure I fixed in 
TestSolrQueryParser that checked the filter cache insert delta after executing 
a query.  The additional call to getLiveDocs by getProcessedFilter in this 
patch got into the cache and increased the counter one more time.  An 
assumption I make in the getProcessedFilter change is that returning 
getLiveDocs is either cheap or a foregone conclusion that it will ultimately be 
instantiated at some point anyway, so we might as well get on with it.  
Alternatively, its caller could instead check for this case (e.g. 
filter == null && postFilter == null then return getLiveDocs)?  But that seems 
less clean since "answer" is there for a reason, so why avoid it.

[~yo...@apache.org] can you please review this?  It intersects with 
modifications you've done in the past.

As an aside, I think it would be good if more DocSet methods in 
SolrIndexSearcher moved over to DocSetUtil so that we can keep the unwieldy 
SolrIndexSearcher tamed.  Essentially I suggest a change similar to the one I 
made for the SolrDocumentFetcher refactoring, but for DocSets.  With such a 
change, it would have access to the searcher and not need it passed in to each 
method.

> Sorting performance degrades when useFilterForSortedQuery is enabled and 
> there is no filter query specified
> ---
>
> Key: SOLR-11769
> URL: https://issues.apache.org/jira/browse/SOLR-11769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 4.10.4
> Environment: OS: macOS Sierra (version 10.12.4)
> Memory: 16GB
> CPU: 2.9 GHz Intel Core i7
> Java Version: 1.8
>Reporter: Betim Deva
>Assignee: David Smiley
>  Labels: performance
> Attachments: SOLR-11769_Optimize_MatchAllDocsQuery_more.patch
>
>
> The performance of sorting degrades significantly when the 
> {{useFilterForSortedQuery}} is enabled, and there's no filter query specified.
> *Steps to Reproduce:*
> 1. Set {{useFilterForSortedQuery=true}} in {{solrconfig.xml}}
> 2. Run a query to match and return a single document. Also add sorting
> - Example {{/select?q=foo:123&sort=bar+desc}}
> With a large index (> 10 million documents), this yields a slow response 
> (a few hundred milliseconds on average) even when the result set 
> consists of a single document.
> *Observation 1:*
> - Disabling {{useFilterForSortedQuery}} improves the performance to < 1ms
> *Observation 2:*
> - Removing the {{sort}} improves the performance to < 1ms
> *Observation 3:*
> - Keeping the {{sort}}, and adding any filter query (such as {{fq=\*:\*}}) 
> improves the performance to < 1 ms.
> After profiling 
> [SolrIndexSearcher.java|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java;h=9ee5199bdf7511c70f2cc616c123292c97d36b5b;hb=HEAD#l1400]
>  I found that the bottleneck is 
> {{DocSet bigFilt = getDocSet(cmd.getFilterList());}} 
> when {{cmd.getFilterList()}} is passed in as {{null}}. This makes the 
> {{getDocSet()}} function collect document ids every single time it is called 
> without any caching.
> {code:java}
> 1394 if (useFilterCache) {
> 1395   // now actually use the filter cache.
> 1396   // for large filters that match few documents, this may be
> 1397   // slower than simply re-executing the query.
> 1398   if (out.docSet == null) {
> 1399 out.docSet = getDocSet(cmd.getQuery(), cmd.getFilter());
> 1400 DocSet bigFilt = getDocSet(cmd.getFilterList());
> 1401 if (bigFilt != null) out.docSet = 
> out.docSet.intersection(bigFilt);
> 1402   }
> 1403   // todo: there could be a sortDocSet that could take a list of
> 1404   // the 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21111 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21111/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([89F3DDE657647DFE:40469F485E03BB0B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue(TriggerIntegrationTest.java:707)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12970 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
   [junit4]   2> 1191907 INFO  
(SUITE-TriggerIntegrationTest-seed#[89F3DDE657647DFE]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity 

[JENKINS] Lucene-Solr-Tests-master - Build # 2224 - Still Unstable

2017-12-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2224/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestTlogReplica

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [InternalHttpClient, 
MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:224)  
at org.apache.solr.handler.IndexFetcher.<init>(IndexFetcher.java:266)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1190) 
 at 
org.apache.solr.cloud.ReplicateFromLeader.startReplication(ReplicateFromLeader.java:106)
  at 
org.apache.solr.cloud.ZkController.startReplicationFromLeader(ZkController.java:1146)
  at org.apache.solr.cloud.ZkController.register(ZkController.java:1124)  at 
org.apache.solr.cloud.ZkController.register(ZkController.java:1014)  at 
org.apache.solr.core.ZkContainer.lambda$registerInZk$0(ZkContainer.java:181)  
at org.apache.solr.core.ZkContainer.registerInZk(ZkContainer.java:208)  at 
org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:892)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1053)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:954)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:426)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)  at 
org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 

[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 88 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/88/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest

Error Message:
Error from server at http://127.0.0.1:35553/solr: Could not fully create 
collection: managed-preanalyzed

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35553/solr: Could not fully create collection: 
managed-preanalyzed
at __randomizedtesting.SeedInfo.seed([F165F3BFDC35C5C7]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.setupCluster(PreAnalyzedFieldManagedSchemaCloudTest.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed

Stack Trace:
javax.management.RuntimeMBeanException: 
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
at 
__randomizedtesting.SeedInfo.seed([F165F3BFDC35C5C7:7FB49785B1749DA2]:0)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at 

[jira] [Updated] (SOLR-11782) LatchWatcher.await doesn’t protect against spurious wakeup

2017-12-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-11782:
-
Attachment: SOLR-11782.patch

> LatchWatcher.await doesn’t protect against spurious wakeup
> --
>
> Key: SOLR-11782
> URL: https://issues.apache.org/jira/browse/SOLR-11782
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11782.patch
>
>
> I noticed that {{LatchWatcher.await}} does:
> {code}
> public void await(long timeout) throws InterruptedException {
>   synchronized (lock) {
> if (this.event != null) return;
> lock.wait(timeout);
>   }
> }
> {code}
> while the recommendation for lock.wait is to re-check the wait condition even 
> after the method returns, in case of a spurious wakeup. {{lock}} is a private 
> field on which {{notifyAll}} is called only after a zk event has been 
> handled. I think we should change the {{await}} method to something like:
> {code}
> public void await(long timeout) throws InterruptedException {
>   assert timeout > 0;
>   long timeoutTime = System.currentTimeMillis() + timeout;
>   synchronized (lock) {
> while (this.event == null) {
>   long nextTimeout = timeoutTime - System.currentTimeMillis();
>   if (nextTimeout <= 0) {
> return;
>   }
>   lock.wait(nextTimeout);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11782) LatchWatcher.await doesn’t protect against spurious wakeup

2017-12-19 Thread JIRA
Tomás Fernández Löbbe created SOLR-11782:


 Summary: LatchWatcher.await doesn’t protect against spurious wakeup
 Key: SOLR-11782
 URL: https://issues.apache.org/jira/browse/SOLR-11782
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Priority: Minor


I noticed that {{LatchWatcher.await}} does:
{code}
public void await(long timeout) throws InterruptedException {
  synchronized (lock) {
if (this.event != null) return;
lock.wait(timeout);
  }
}
{code}
while the recommendation for lock.wait is to re-check the wait condition even after 
the method returns, in case of a spurious wakeup. {{lock}} is a private 
field on which {{notifyAll}} is called only after a zk event has been handled. 
I think we should change the {{await}} method to something like:

{code}
public void await(long timeout) throws InterruptedException {
  assert timeout > 0;
  long timeoutTime = System.currentTimeMillis() + timeout;
  synchronized (lock) {
while (this.event == null) {
  long nextTimeout = timeoutTime - System.currentTimeMillis();
  if (nextTimeout <= 0) {
return;
  }
  lock.wait(nextTimeout);
}
  }
}
{code}
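
For context, a minimal, self-contained sketch (hypothetical names, not the 
actual LatchWatcher code) showing the notify side this loop waits on: the 
watcher records the event and calls {{notifyAll}} under the same lock that 
{{await}} synchronizes on, which is what makes the while-loop condition true.

{code}
// Hypothetical sketch only; field and method names are illustrative.
class LatchWatcherSketch {
  private final Object lock = new Object();
  private Object event;

  void process(Object zkEvent) {
    synchronized (lock) {
      this.event = zkEvent; // the condition the await() loop re-checks
      lock.notifyAll();     // wake any thread blocked in lock.wait(...)
    }
  }

  void await(long timeout) throws InterruptedException {
    assert timeout > 0;
    long timeoutTime = System.currentTimeMillis() + timeout;
    synchronized (lock) {
      while (this.event == null) {                 // guards against spurious wakeups
        long nextTimeout = timeoutTime - System.currentTimeMillis();
        if (nextTimeout <= 0) {
          return;                                  // timed out without an event
        }
        lock.wait(nextTimeout);
      }
    }
  }
}
{code}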



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-11739:
-
Attachment: SOLR-11739.patch

Here is a patch with what I said: it keeps the async IDs in a separate map, and 
every time a new async request comes in we try to create the key in this new 
map, failing if it already exists.
Any other thoughts [~hossman], given my previous reply and the patch? 
[~anshumg]?
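
A tiny, purely illustrative sketch of the "create the key, fail if it already 
exists" idea (hypothetical names; in the patch this is done against ZooKeeper 
rather than an in-memory map):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Hypothetical sketch: reject a duplicate asyncId atomically, not via check-then-insert. */
class AsyncIdRegistry {
  private final ConcurrentMap<String, Boolean> inFlight = new ConcurrentHashMap<>();

  /** Returns true if the asyncId was claimed; false if it is a duplicate. */
  boolean claim(String asyncId) {
    // putIfAbsent is atomic, so two requests racing on the same asyncId
    // cannot both succeed the way they can with a separate lookup followed
    // by an insert.
    return inFlight.putIfAbsent(asyncId, Boolean.TRUE) == null;
  }

  void release(String asyncId) {
    inFlight.remove(asyncId);
  }
}
{code}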

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch, SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4335 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4335/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

20 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.LeaderElectionTest

Error Message:
3 threads leaked from SUITE scope at org.apache.solr.cloud.LeaderElectionTest:  
   1) Thread[id=14985, name=zkConnectionManagerCallback-3220-thread-1, 
state=WAITING, group=TGRP-LeaderElectionTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=14983, 
name=TEST-LeaderElectionTest.testElection-seed#[759A14137D3B8B95]-SendThread(127.0.0.1:58098),
 state=TIMED_WAITING, group=TGRP-LeaderElectionTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)3) 
Thread[id=14984, 
name=TEST-LeaderElectionTest.testElection-seed#[759A14137D3B8B95]-EventThread, 
state=WAITING, group=TGRP-LeaderElectionTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.LeaderElectionTest: 
   1) Thread[id=14985, name=zkConnectionManagerCallback-3220-thread-1, 
state=WAITING, group=TGRP-LeaderElectionTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=14983, 
name=TEST-LeaderElectionTest.testElection-seed#[759A14137D3B8B95]-SendThread(127.0.0.1:58098),
 state=TIMED_WAITING, group=TGRP-LeaderElectionTest]
at java.lang.Thread.sleep(Native Method)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)
   3) Thread[id=14984, 
name=TEST-LeaderElectionTest.testElection-seed#[759A14137D3B8B95]-EventThread, 
state=WAITING, group=TGRP-LeaderElectionTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
at __randomizedtesting.SeedInfo.seed([759A14137D3B8B95]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.LeaderElectionTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=14985, name=zkConnectionManagerCallback-3220-thread-1, state=WAITING, 
group=TGRP-LeaderElectionTest] at sun.misc.Unsafe.park(Native Method)   
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)  
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=14983, 
name=TEST-LeaderElectionTest.testElection-seed#[759A14137D3B8B95]-SendThread(127.0.0.1:58098),
 state=TIMED_WAITING, group=TGRP-LeaderElectionTest] at 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 1017 - Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1017/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Error from server at https://127.0.0.1:41191/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41191/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([3C730503C51CF522:4C021FDE2EF21F3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:73)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 351 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/351/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestIndexManyDocuments

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexManyDocuments_AD0983DDA6ADCABF-001

at __randomizedtesting.SeedInfo.seed([AD0983DDA6ADCABF]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestFileListEntityProcessor

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestFileListEntityProcessor_E48F84AA652BB24C-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestFileListEntityProcessor_E48F84AA652BB24C-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestFileListEntityProcessor_E48F84AA652BB24C-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestFileListEntityProcessor_E48F84AA652BB24C-001\tempDir-001

at __randomizedtesting.SeedInfo.seed([E48F84AA652BB24C]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297686#comment-16297686
 ] 

ASF GitHub Bot commented on SOLR-8327:
--

Github user slackhappy commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
I'll look into the test failures.  I actually didn't mean to create the PR 
yet  


> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with a non-solrj client (requests can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for a 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from Zookeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #294: ZkStateReader: cache LazyCollectionRef (SOLR-8327)

2017-12-19 Thread slackhappy
Github user slackhappy commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
I'll look into the test failures.  I actually didn't mean to create the PR 
yet 😳 


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11781) Pass impersonator info to the authorization plugin

2017-12-19 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-11781:
---

 Summary: Pass impersonator info to the authorization plugin
 Key: SOLR-11781
 URL: https://issues.apache.org/jira/browse/SOLR-11781
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.0
Reporter: Hrishikesh Gadre
Priority: Minor


SENTRY-1475 implemented Solr authorization plugin based on Sentry. This also 
includes the audit log functionality in Sentry. Currently authorization context 
is not providing the impersonator information which is required for the audit 
logs. We should improve Solr authorization framework to pass this extra 
information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21110 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21110/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:
Expecting no cores but found some: 
{replacenodetest_coll_shard2_replica_t21={name=replacenodetest_coll_shard2_replica_t21,instanceDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/replacenodetest_coll_shard2_replica_t21,dataDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/./replacenodetest_coll_shard2_replica_t21/data/,config=solrconfig.xml,schema=schema.xml,startTime=Tue
 Dec 19 19:05:26 BRT 
2017,uptime=18465,lastPublished=active,configVersion=0,cloud={collection=replacenodetest_coll,shard=shard2,replica=core_node22},index={numDocs=0,maxDoc=0,deletedDocs=0,indexHeapUsageBytes=0,version=11,segmentCount=0,current=true,hasDeletions=false,directory=org.apache.lucene.store.MockDirectoryWrapper:MockDirectoryWrapper(RAMDirectory@1a9d9ffd
 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@f0f759d),segmentsFile=segments_2,segmentsFileSizeInBytes=117,userData={commitTimeMSec=1513721128403,
 commitCommandVer=0},lastModified=Tue Dec 19 19:05:28 BRT 
2017,sizeInBytes=117,size=117 
bytes}},replacenodetest_coll_shard4_replica_t23={name=replacenodetest_coll_shard4_replica_t23,instanceDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/replacenodetest_coll_shard4_replica_t23,dataDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/./replacenodetest_coll_shard4_replica_t23/data/,config=solrconfig.xml,schema=schema.xml,startTime=Tue
 Dec 19 19:05:27 BRT 
2017,uptime=17354,lastPublished=active,configVersion=0,cloud={collection=replacenodetest_coll,shard=shard4,replica=core_node24},index={numDocs=0,maxDoc=0,deletedDocs=0,indexHeapUsageBytes=0,version=11,segmentCount=0,current=true,hasDeletions=false,directory=org.apache.lucene.store.MockDirectoryWrapper:MockDirectoryWrapper(RAMDirectory@29a2ed38
 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@3621226a),segmentsFile=segments_2,segmentsFileSizeInBytes=117,userData={commitTimeMSec=1513721129528,
 commitCommandVer=0},lastModified=Tue Dec 19 19:05:29 BRT 
2017,sizeInBytes=117,size=117 bytes}}} expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Expecting no cores but found some: 
{replacenodetest_coll_shard2_replica_t21={name=replacenodetest_coll_shard2_replica_t21,instanceDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/replacenodetest_coll_shard2_replica_t21,dataDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/./replacenodetest_coll_shard2_replica_t21/data/,config=solrconfig.xml,schema=schema.xml,startTime=Tue
 Dec 19 19:05:26 BRT 
2017,uptime=18465,lastPublished=active,configVersion=0,cloud={collection=replacenodetest_coll,shard=shard2,replica=core_node22},index={numDocs=0,maxDoc=0,deletedDocs=0,indexHeapUsageBytes=0,version=11,segmentCount=0,current=true,hasDeletions=false,directory=org.apache.lucene.store.MockDirectoryWrapper:MockDirectoryWrapper(RAMDirectory@1a9d9ffd
 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@f0f759d),segmentsFile=segments_2,segmentsFileSizeInBytes=117,userData={commitTimeMSec=1513721128403,
 commitCommandVer=0},lastModified=Tue Dec 19 19:05:28 BRT 
2017,sizeInBytes=117,size=117 
bytes}},replacenodetest_coll_shard4_replica_t23={name=replacenodetest_coll_shard4_replica_t23,instanceDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/replacenodetest_coll_shard4_replica_t23,dataDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.ReplaceNodeTest_CAB2BF9CE28C822D-001/tempDir-001/node5/./replacenodetest_coll_shard4_replica_t23/data/,config=solrconfig.xml,schema=schema.xml,startTime=Tue
 Dec 19 19:05:27 BRT 
2017,uptime=17354,lastPublished=active,configVersion=0,cloud={collection=replacenodetest_coll,shard=shard4,replica=core_node24},index={numDocs=0,maxDoc=0,deletedDocs=0,indexHeapUsageBytes=0,version=11,segmentCount=0,current=true,hasDeletions=false,directory=org.apache.lucene.store.MockDirectoryWrapper:MockDirectoryWrapper(RAMDirectory@29a2ed38
 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@3621226a),segmentsFile=segments_2,segmentsFileSizeInBytes=117,userData={commitTimeMSec=1513721129528,
 

[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk-9.0.1) - Build # 24 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/24/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.search.TestAddFieldRealTimeGet

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001\index:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001\index

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001\index:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001\index
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([4A0C75951C87AC2B]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13894 lines...]
   [junit4] Suite: org.apache.solr.search.TestAddFieldRealTimeGet
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.search.TestAddFieldRealTimeGet_4A0C75951C87AC2B-001\init-core-data-001
   [junit4]   2> 3520733 WARN  
(SUITE-TestAddFieldRealTimeGet-seed#[4A0C75951C87AC2B]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 3520733 INFO  
(SUITE-TestAddFieldRealTimeGet-seed#[4A0C75951C87AC2B]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 3520734 INFO  
(SUITE-TestAddFieldRealTimeGet-seed#[4A0C75951C87AC2B]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 3520735 INFO  
(SUITE-TestAddFieldRealTimeGet-seed#[4A0C75951C87AC2B]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 3520736 INFO  
(TEST-TestAddFieldRealTimeGet.test-seed#[4A0C75951C87AC2B]) [] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 3520742 INFO  
(TEST-TestAddFieldRealTimeGet.test-seed#[4A0C75951C87AC2B]) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 3520750 INFO  
(TEST-TestAddFieldRealTimeGet.test-seed#[4A0C75951C87AC2B]) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.2.0
   [junit4]   2> 3520756 INFO  
(TEST-TestAddFieldRealTimeGet.test-seed#[4A0C75951C87AC2B]) [] 
o.a.s.s.ManagedIndexSchemaFactory The schema is configured as managed, but 
managed schema 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297495#comment-16297495
 ] 

ASF GitHub Bot commented on SOLR-8327:
--

Github user chatman commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
Seems like there are some test failures due to this change:

   [junit4] Tests with failures [seed: DE1D5337E38D2C32]:
   [junit4]   - 
org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas
   [junit4]   - 
org.apache.solr.cloud.TestPullReplica.testAddRemovePullReplica
   [junit4]   - 
org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddTooManyReplicas
   [junit4]   - 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.addReplicaTest
   [junit4]   - 
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard
   [junit4]   - org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest
   [junit4]   - 
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest
   [junit4]   - org.apache.solr.cloud.TestUtilizeNode.test
   [junit4]   - 
org.apache.solr.cloud.TestTlogReplica.testAddRemoveTlogReplica

Haven't looked into them yet, though.


> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 

[GitHub] lucene-solr issue #294: ZkStateReader: cache LazyCollectionRef (SOLR-8327)

2017-12-19 Thread chatman
Github user chatman commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
Seems like there are some test failures due to this change:

   [junit4] Tests with failures [seed: DE1D5337E38D2C32]:
   [junit4]   - 
org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas
   [junit4]   - 
org.apache.solr.cloud.TestPullReplica.testAddRemovePullReplica
   [junit4]   - 
org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddTooManyReplicas
   [junit4]   - 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.addReplicaTest
   [junit4]   - 
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard
   [junit4]   - org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest
   [junit4]   - 
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest
   [junit4]   - org.apache.solr.cloud.TestUtilizeNode.test
   [junit4]   - 
org.apache.solr.cloud.TestTlogReplica.testAddRemoveTlogReplica

Haven't looked into them yet, though.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297445#comment-16297445
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8327 at 12/19/17 9:25 PM:
--

bq. This approach seems fine to me. 
My only concern with that solution is that within those 2 seconds (the cache 
expiry time) there is no support for cache invalidation (for example, when 
replicas are moved away from a node). I'm not sure yet whether that might lead 
to failed queries/updates. With "smart caching" [0], request retries (and a 
subsequent cache refresh) can be built into the routing logic for the case 
where our version of the collection state thinks the collection should be on a 
node, but it isn't. The other benefit of "smart caching" is that it doesn't 
rely on nodes re-fetching the state from ZK every two seconds.

[0] - 
https://issues.apache.org/jira/browse/SOLR-8327?focusedCommentId=15207115&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207115
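
To make the trade-off concrete, a minimal sketch of an expiry-based cached 
reference (hypothetical names; this is not the actual 
ZkStateReader/LazyCollectionRef code from the PR) could look like the 
following. Within the TTL window, get() can return stale state; only an 
explicit invalidate() call, for which the expiry-only approach has no trigger, 
would force an earlier refresh.

    // Illustrative sketch only, assuming a TTL of ~2 seconds as discussed above.
    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    public class TtlCachedRef<T> {
        private final Supplier<T> fetcher;   // e.g. a live fetch from ZooKeeper
        private final long ttlNanos;
        private T cached;
        private long fetchedAtNanos;

        public TtlCachedRef(Supplier<T> fetcher, long ttlSeconds) {
            this.fetcher = fetcher;
            this.ttlNanos = TimeUnit.SECONDS.toNanos(ttlSeconds);
        }

        public synchronized T get() {
            long now = System.nanoTime();    // monotonic clock for elapsed-time checks
            if (cached == null || now - fetchedAtNanos > ttlNanos) {
                cached = fetcher.get();      // refresh from the authoritative source
                fetchedAtNanos = now;
            }
            return cached;
        }

        public synchronized void invalidate() {
            // would be called when replicas move, if such a signal existed
            cached = null;
        }
    }

With the "smart caching" described above, the refresh would instead be driven 
by a failed request in the routing logic rather than by waiting out the TTL.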


was (Author: ichattopadhyaya):
bq. This approach seems fine to me. 
My only concern with that solution is that within those 2 seconds (cache expiry 
time), the support for cache invalidation (in case of replicas being moved away 
from a node for example) is not there. I'm not sure yet if that might lead to 
potentially failed queries/updates. With a smart caching, request re-tries (and 
subsequent cache refresh) can be built into the routing logic in case 
our-version-of-collection-state thinks the collection should be on a node, but 
it isn't.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297445#comment-16297445
 ] 

Ishan Chattopadhyaya commented on SOLR-8327:


bq. This approach seems fine to me. 
My only concern with that solution is that within those 2 seconds (cache expiry 
time), the support for cache invalidation (in case of replicas being moved away 
from a node for example) is not there. I'm not sure yet if that might lead to 
potentially failed queries/updates. With a smart caching, request re-tries (and 
subsequent cache refresh) can be built into the routing logic in case 
our-version-of-collection-state thinks the collection should be on a node, but 
it isn't.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from Zookeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For 

[JENKINS] Lucene-Solr-Tests-7.2 - Build # 16 - Still Unstable

2017-12-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/16/

6 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([66FE22CE21FF6F26:443EA33502954536]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:8443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamExpressionTest

Error Message:
11 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest: 1) 
Thread[id=292, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamExpressionTest] at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7063 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7063/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_B599383E1C973DC9-001\3.0.2-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_B599383E1C973DC9-001\3.0.2-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_B599383E1C973DC9-001\3.0.2-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_B599383E1C973DC9-001\3.0.2-nocfs-001

at __randomizedtesting.SeedInfo.seed([B599383E1C973DC9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001\testPostingsFormat.testExact-006:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001\testPostingsFormat.testExact-006

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001\testPostingsFormat.testExact-006:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001\testPostingsFormat.testExact-006
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_A1EF7255751A33E3-001

at __randomizedtesting.SeedInfo.seed([A1EF7255751A33E3]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297410#comment-16297410
 ] 

ASF GitHub Bot commented on SOLR-8327:
--

Github user elyograg commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
Java programs are migrating to nanoTime instead of currentTimeMillis for 
elapsed time because many people have found that the latter will go *backwards* 
on occasion.  It is not monotonic.

Using nanoTime should be far less likely to go backwards.  That undesirable 
behavior has been observed in the wild, but should be rare.  Supposedly 
nanoTime is monotonic if the OS properly supports a monotonic clock.  There's a 
lot of info out there about it:

https://www.google.com/search?q=java+nanotime+monotonic

The fact that nanoTime *might* produce elapsed times with greater accuracy 
than one millisecond is a bonus.
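
To make the distinction concrete, here is a minimal, self-contained sketch of 
the elapsed-time pattern being described (class and variable names are 
invented for illustration; this is not code from the PR):

    // Hedged sketch: measure elapsed time with the monotonic clock.
    // System.nanoTime() values are only meaningful as differences; the wall
    // clock (System.currentTimeMillis()) can be adjusted backwards, which
    // makes it unsafe for timeouts, retries, and cache expiry.
    import java.util.concurrent.TimeUnit;

    public class ElapsedTimeExample {
        public static void main(String[] args) throws InterruptedException {
            long start = System.nanoTime();   // opaque origin, compare only deltas
            Thread.sleep(150);                // stand-in for the work being timed
            long elapsedNanos = System.nanoTime() - start;
            System.out.println("elapsed ~" + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms");
        }
    }

The key point is that nanoTime readings are never compared to wall-clock 
timestamps, only to other nanoTime readings taken in the same JVM.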


> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from Zookeeper on 

[GitHub] lucene-solr issue #294: ZkStateReader: cache LazyCollectionRef (SOLR-8327)

2017-12-19 Thread elyograg
Github user elyograg commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
Java programs are migrating to nanoTime instead of currentTimeMillis for 
elapsed time because many people have found that the latter will go *backwards* 
on occasion.  It is not monotonic.

Using nanoTime should be far less likely to go backwards.  That undesirable 
behavior has been observed in the wild, but should be rare.  Supposedly 
nanoTime is monotonic if the OS properly supports a monotonic clock.  There's a 
lot of info out there about it:

https://www.google.com/search?q=java+nanotime+monotonic

The fact that nanoTime *might* produce elapsed times with greater accuracy 
than one millisecond is a bonus.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk1.8.0_144) - Build # 87 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/87/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MoveReplicaHDFSFailoverTest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([5B4FF3FA6A3ABDEC]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.setupClass(MoveReplicaHDFSFailoverTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([5B4FF3FA6A3ABDEC]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.setupClass(HdfsAutoAddReplicasIntegrationTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297372#comment-16297372
 ] 

ASF GitHub Bot commented on SOLR-8327:
--

Github user dragonsinth commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
This approach seems fine to me.  Remind me why we use nanoTime vs. the normal 
clock?  I'm sure you're right; I just want to refresh my brain.


> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from Zookeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #294: ZkStateReader: cache LazyCollectionRef (SOLR-8327)

2017-12-19 Thread dragonsinth
Github user dragonsinth commented on the issue:

https://github.com/apache/lucene-solr/pull/294
  
This approach seems fine to me.  Remind me why we use nanoTime vs. the normal 
clock?  I'm sure you're right; I just want to refresh my brain.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297371#comment-16297371
 ] 

Scott Blum commented on SOLR-8327:
--

https://github.com/apache/lucene-solr/pull/294

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with a non-SolrJ client (requests can be sent to any Solr 
> node), we noticed a huge amount of data from ZooKeeper in our tcpdump (~1 GB 
> for a 20-second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from ZooKeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10981) Allow update to load gzip files

2017-12-19 Thread Andrew Lundgren (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297263#comment-16297263
 ] 

Andrew Lundgren edited comment on SOLR-10981 at 12/19/17 7:02 PM:
--

Patch that includes Changes.txt updates and solr-ref-guide.

This patch should work for the 7 series as well as master.

[~janhoy] Sorry for the delay.  Sometimes RL gets busy.


was (Author: lundgren):
Patch that includes Changes.txt updates and solr-ref-guide.

This patch should work for the 7 series as well as master.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, 
> SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URLs or local files that are 
> gzipped.
> For URLs, to determine whether the file is gzipped, it will check whether the 
> content encoding == "gzip" or the file ends in ".gz".
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.
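
As a rough illustration of the detection logic described above (a sketch only; 
the class and method names here are invented, not the patch's actual code):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.zip.GZIPInputStream;

    public class GzipOpenSketch {
      // Local file: ungzip transparently when the name ends in ".gz".
      static InputStream openFile(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        return path.endsWith(".gz") ? new GZIPInputStream(in) : in;
      }

      // URL: ungzip when Content-Encoding is "gzip" or the path ends in ".gz".
      static InputStream openUrl(URL url) throws IOException {
        URLConnection conn = url.openConnection();
        boolean gzipped = "gzip".equalsIgnoreCase(conn.getContentEncoding())
            || url.getPath().endsWith(".gz");
        InputStream in = conn.getInputStream();
        return gzipped ? new GZIPInputStream(in) : in;
      }
    }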



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10981) Allow update to load gzip files

2017-12-19 Thread Andrew Lundgren (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lundgren updated SOLR-10981:
---
Attachment: SOLR-10981.patch

Patch that includes Changes.txt updates and solr-ref-guide.

This patch should work for the 7 series as well as master.

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>  Labels: patch
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, 
> SOLR-10981.patch
>
>
> We currently import large CSV files.  We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them.  After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URLs or local files that are 
> gzipped.
> For URLs, to determine whether the file is gzipped, it will check whether the 
> content encoding == "gzip" or the file ends in ".gz".
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0 and master from git.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2017-12-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297255#comment-16297255
 ] 

Ishan Chattopadhyaya commented on SOLR-8327:


I tested this today (dockerized Solr + tcpdump on the ZK container) and can confirm 
this is still an open issue; I think Noble's proposal for "smart caching" 
(as used in CloudSolrClient) is the right solution here.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with a non-SolrJ client (requests can be sent to any Solr 
> node), we noticed a huge amount of data from ZooKeeper in our tcpdump (~1 GB 
> for a 20-second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from ZooKeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #294: ZkStateReader: cache LazyCollectionRef

2017-12-19 Thread slackhappy
Github user slackhappy commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/294#discussion_r157842876
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -632,19 +636,24 @@ private void refreshCollectionList(Watcher watcher) 
throws KeeperException, Inte
   }
 
   private class LazyCollectionRef extends ClusterState.CollectionRef {
-
 private final String collName;
+private long lastUpdateTime;
+private DocCollection cachedDocCollection;
 
 public LazyCollectionRef(String collName) {
   super(null);
   this.collName = collName;
+  this.lastUpdateTime = -1;
 }
 
 @Override
-public DocCollection get() {
+public synchronized DocCollection get() {
--- End diff --

I thought synchronized here would provide a good enough performance 
increase without the complexity of other approaches.
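
The hunk above cuts off at the start of get(); a self-contained sketch of the 
time-based caching pattern the PR describes (generic names and assumed structure, 
not the actual ZkStateReader code) could look like this:

    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    /** Serve a cached value, re-fetching only once it is older than a fixed
     *  interval (e.g. the 2-second state-update delay from SOLR-10524). */
    public class TimedLazyRef<T> {
      private final Supplier<T> fetcher;   // e.g. a live fetch from ZooKeeper
      private final long maxAgeNanos;
      private long lastUpdateTime = -1;
      private T cached;

      public TimedLazyRef(Supplier<T> fetcher, long maxAgeMs) {
        this.fetcher = fetcher;
        this.maxAgeNanos = TimeUnit.MILLISECONDS.toNanos(maxAgeMs);
      }

      public synchronized T get() {
        long now = System.nanoTime();
        if (cached == null || now - lastUpdateTime > maxAgeNanos) {
          cached = fetcher.get();          // refresh at most once per interval
          lastUpdateTime = now;
        }
        return cached;
      }
    }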


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #294: ZkStateReader: cache LazyCollectionRef

2017-12-19 Thread slackhappy
Github user slackhappy commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/294#discussion_r157842171
  
--- Diff: solr/core/src/java/org/apache/solr/cloud/Overseer.java ---
@@ -68,7 +68,7 @@
   public static final String QUEUE_OPERATION = "operation";
 
   // System properties are used in tests to make them run fast
-  public static final int STATE_UPDATE_DELAY = 
Integer.getInteger("solr.OverseerStateUpdateDelay", 2000);  // delay between 
cloud state updates
+  public static final int STATE_UPDATE_DELAY = 
ZkStateReader.STATE_UPDATE_DELAY;
--- End diff --

Moved so that I could access this setting in ZkStateReader, but left an 
alias here for locality.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #294: ZkStateReader: cache LazyCollectionRef

2017-12-19 Thread slackhappy
GitHub user slackhappy opened a pull request:

https://github.com/apache/lucene-solr/pull/294

ZkStateReader: cache LazyCollectionRef

SOLR-10524 introduced zk state update batching, with
a default interval of 2 seconds.  That opens
the door for a simple, time-based cache on the read side
to address the issue described in SOLR-8327

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/slackhappy/lucene-solr 
cloud_cache_lazy_collection_ref

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/294.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #294


commit e7c6a6773f1d01d2ddb0dbce6cdbaeff52e78376
Author: John Gallagher 
Date:   2017-12-19T16:57:25Z

ZkStateReader: cache LazyCollectionRef

SOLR-10524 introduced zk state update batching, with
a default interval of 2 seconds.  That opens
the door for a simple, time-based cache on the read side
to address the issue described in SOLR-8327




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10524) Better ZkStateWriter batching

2017-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297192#comment-16297192
 ] 

ASF GitHub Bot commented on SOLR-10524:
---

GitHub user slackhappy opened a pull request:

https://github.com/apache/lucene-solr/pull/294

ZkStateReader: cache LazyCollectionRef

SOLR-10524 introduced zk state update batching, with
a default interval of 2 seconds.  That opens
the door for a simple, time-based cache on the read side
to address the issue described in SOLR-8327

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/slackhappy/lucene-solr 
cloud_cache_lazy_collection_ref

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/294.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #294


commit e7c6a6773f1d01d2ddb0dbce6cdbaeff52e78376
Author: John Gallagher 
Date:   2017-12-19T16:57:25Z

ZkStateReader: cache LazyCollectionRef

SOLR-10524 introduced zk state update batching, with
a default interval of 2 seconds.  That opens
the door for a simple, time-based cache on the read side
to address the issue described in SOLR-8327




> Better ZkStateWriter batching
> -
>
> Key: SOLR-10524
> URL: https://issues.apache.org/jira/browse/SOLR-10524
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Cao Manh Dat
> Fix For: 6.6, 7.0
>
> Attachments: SOLR-10524-NPE-fix.patch, SOLR-10524-dragonsinth.patch, 
> SOLR-10524.patch, SOLR-10524.patch, SOLR-10524.patch, SOLR-10524.patch
>
>
> There are several JIRAs (I'll link in a second) about trying to be more 
> efficient about processing overseer messages as the overseer can become a 
> bottleneck, especially with very large numbers of replicas in a cluster. One 
> of the approaches mentioned near the end of SOLR-5872 (15-Mar) was to "read 
> large no:of items say 1. put them into in memory buckets and feed them 
> into overseer".
> This JIRA is to break out that part of the discussion as it might be an easy 
> win whereas "eliminating the Overseer queue" would be quite an undertaking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21109 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21109/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ForceLeaderTest

Error Message:
128 threads leaked from SUITE scope at org.apache.solr.cloud.ForceLeaderTest:   
  1) Thread[id=22640, name=qtp14400945-22640, state=TIMED_WAITING, 
group=TGRP-ForceLeaderTest] at sun.misc.Unsafe.park(Native Method)  
   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=22599, 
name=qtp21864620-22599, state=RUNNABLE, group=TGRP-ForceLeaderTest] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
 at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=22624, 
name=Scheduler-9492849, state=WAITING, group=TGRP-ForceLeaderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=22718, 
name=zkCallback-6543-thread-3, state=TIMED_WAITING, group=TGRP-ForceLeaderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)5) Thread[id=22435, 
name=Scheduler-32456330, state=WAITING, group=TGRP-ForceLeaderTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 

[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk-9.0.1) - Build # 86 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/86/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:45581/grrb

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:45581/grrb
at 
__randomizedtesting.SeedInfo.seed([567C929E4498721A:DE28AD44EA641FE2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-SmokeRelease-7.2 - Build # 7 - Still Failing

2017-12-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.2/7/

No tests ran.

Build Log:
[...truncated 28043 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (13.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.2.0-src.tgz...
   [smoker] 31.2 MB in 0.09 sec (334.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.tgz...
   [smoker] 71.1 MB in 0.22 sec (319.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.zip...
   [smoker] 81.5 MB in 0.27 sec (297.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (9.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.2.0-src.tgz...
   [smoker] 53.3 MB in 1.00 sec (53.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.tgz...
   [smoker] 146.1 MB in 2.37 sec (61.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.zip...
   [smoker] 147.1 MB in 2.29 sec (64.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 ...

[jira] [Commented] (SOLR-11780) Upgrade httpclient to 5.0

2017-12-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296969#comment-16296969
 ] 

Shawn Heisey commented on SOLR-11780:
-

One thing Oleg has mentioned that we could do is have HC 4.5 and HC 5.0 
coexist.  This is possible because all of the packages have changed in HC 5.0.  
This does offer another option to us -- we could start writing new SolrClient 
implementations from scratch, supporting HTTP/2 and new code paradigms from the 
ground up.

This idea is very compelling.
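
As a concrete illustration of that package separation (fully-qualified names so 
both generations can sit in one source file; the 5.x class locations below follow 
the released 5.0 layout and are an assumption for the alpha being discussed):

    public class HttpClientCoexistSketch {
      public static void main(String[] args) throws Exception {
        // HC 4.x lives under org.apache.http.*
        org.apache.http.impl.client.CloseableHttpClient oldClient =
            org.apache.http.impl.client.HttpClients.createDefault();

        // HC 5.x lives under org.apache.hc.client5.*, so the two never collide
        org.apache.hc.client5.http.impl.classic.CloseableHttpClient newClient =
            org.apache.hc.client5.http.impl.classic.HttpClients.createDefault();

        oldClient.close();
        newClient.close();
      }
    }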


> Upgrade httpclient to 5.0
> -
>
> Key: SOLR-11780
> URL: https://issues.apache.org/jira/browse/SOLR-11780
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>
> The httpcomponents project is working on the 5.0 release.  There is an alpha 
> version of httpclient and a beta version of httpcore.  The httpmime 
> dependency that Solr currently uses has been folded into the main httpclient.
> I've enlisted the help of [~olegk] from httpcomponents and we'll be using 
> github to coordinate.
> This work will not be committed officially until the 5.0 version of 
> httpclient is actually released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5803) Add another AnalyzerWrapper class that does not have its own cache, so delegate-only wrappers don't create thread local resources several times

2017-12-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296959#comment-16296959
 ] 

ASF subversion and git services commented on LUCENE-5803:
-

Commit 9f7f76f267bd46b0069731ba1ae4990d31c33df8 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f7f76f ]

LUCENE-5803: Add a Solr test that we reuse analysis components across fields 
for the same field type


> Add another AnalyzerWrapper class that does not have its own cache, so 
> delegate-only wrappers don't create thread local resources several times
> ---
>
> Key: LUCENE-5803
> URL: https://issues.apache.org/jira/browse/LUCENE-5803
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.9
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 4.10, 6.0
>
> Attachments: LUCENE-5803.patch, LUCENE-5803.patch, LUCENE-5803.patch, 
> LUCENE-5803.patch, LUCENE-5803.patch, LUCENE-5803.patch
>
>
> This is a followup issue for the following Elasticsearch issue: 
> https://github.com/elasticsearch/elasticsearch/pull/6714
> Basically the problem is the following:
> - Elasticsearch has a pool of Analyzers that are used for analysis in several 
> indexes
> - Each index uses a different PerFieldAnalyzerWrapper
> PerFieldAnalyzerWrapper uses PER_FIELD_REUSE_STRATEGY. Because of this it 
> caches the tokenstreams for every field. If there are many fields, that is a 
> lot. In addition, the underlying analyzers may also cache tokenstreams, and 
> other PerFieldAnalyzerWrappers do the same, although the delegate Analyzer 
> can always return the same components.
> We should add code similar to Elasticsearch's directly to Lucene: if the 
> delegating Analyzer just delegates per field or just wraps CharFilters around 
> the Reader, there is no need to cache the TokenStreamComponents a second time 
> in the delegating Analyzer. This is only needed if the delegating Analyzer 
> adds additional TokenFilters (like ShingleAnalyzerWrapper).
> We should name this new class DelegatingAnalyzerWrapper extends 
> AnalyzerWrapper. The wrapComponents method must be final, because we are not 
> allowed to add additional TokenFilters, but unlike ES, we don't need to 
> disallow wrapping with CharFilters.
> Internally this class uses a private ReuseStrategy that just delegates to the 
> underlying analyzer. It does not matter here if the strategy of the delegate 
> is global or per field, this is private to the delegate.
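
For reference, a minimal sketch of how such a delegate-only wrapper ends up being 
subclassed (modelled on the DelegatingAnalyzerWrapper this issue added; details 
simplified):

    import java.util.Map;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.DelegatingAnalyzerWrapper;

    /** Per-field delegation with no second TokenStream cache in the wrapper;
     *  reuse is handled entirely by the wrapped analyzers. */
    public class PerFieldDelegatingWrapper extends DelegatingAnalyzerWrapper {
      private final Analyzer defaultAnalyzer;
      private final Map<String, Analyzer> perField;

      public PerFieldDelegatingWrapper(Analyzer defaultAnalyzer,
                                       Map<String, Analyzer> perField) {
        super(PER_FIELD_REUSE_STRATEGY);  // fallback strategy for the wrapper itself
        this.defaultAnalyzer = defaultAnalyzer;
        this.perField = perField;
      }

      @Override
      protected Analyzer getWrappedAnalyzer(String fieldName) {
        return perField.getOrDefault(fieldName, defaultAnalyzer);
      }
    }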



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11201) Implement trigger for arbitrary metrics

2017-12-19 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296960#comment-16296960
 ] 

Andrzej Bialecki  commented on SOLR-11201:
--

I think that the configuration should also specify the preferred action 
- in many cases this will be MOVEREPLICA, but if we plan to use this trigger 
for other metrics then other actions may be more appropriate depending on the 
metric (e.g. ADDREPLICA, REPLACENODE, SPLITSHARD, etc.).

> Implement trigger for arbitrary metrics
> ---
>
> Key: SOLR-11201
> URL: https://issues.apache.org/jira/browse/SOLR-11201
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.2
>
> Attachments: SOLR-11201.patch
>
>
> It should be possible to set a trigger on any metrics exposed by the Metrics 
> API using a threshold value. Supporting {{waitFor}} may not be possible or 
> useful for all metrics. For those (such as searchRate) we will implement proper 
> trigger support. However, a naive implementation might be to just poll 
> the value of the metric frequently and, if it is consistently above the 
> threshold, fire the trigger.
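
A bare-bones sketch of that naive polling approach (illustrative only; none of 
these names are part of Solr's autoscaling API):

    import java.util.function.DoubleSupplier;

    public class MetricThresholdPoller {
      /** Sample a metric at a fixed interval and fire once it has stayed above
       *  the threshold for a required number of consecutive samples. */
      public static void poll(DoubleSupplier metric, double threshold,
                              int requiredConsecutive, long intervalMs,
                              Runnable fireTrigger) throws InterruptedException {
        int consecutive = 0;
        while (true) {
          consecutive = metric.getAsDouble() > threshold ? consecutive + 1 : 0;
          if (consecutive >= requiredConsecutive) {
            fireTrigger.run();   // e.g. request MOVEREPLICA, ADDREPLICA, ...
            consecutive = 0;
          }
          Thread.sleep(intervalMs);
        }
      }
    }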



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11780) Upgrade httpclient to 5.0

2017-12-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296941#comment-16296941
 ] 

Shawn Heisey commented on SOLR-11780:
-

My fork of the github project:

https://github.com/elyograg/lucene-solr


> Upgrade httpclient to 5.0
> -
>
> Key: SOLR-11780
> URL: https://issues.apache.org/jira/browse/SOLR-11780
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>
> The httpcomponents project is working on the 5.0 release.  There is an alpha 
> version of httpclient and a beta version of httpcore.  The httpmime 
> dependency that Solr currently uses has been folded into the main httpclient.
> I've enlisted the help of [~olegk] from httpcomponents and we'll be using 
> github to coordinate.
> This work will not be committed officially until the 5.0 version of 
> httpclient is actually released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11780) Upgrade httpclient to 5.0

2017-12-19 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-11780:
---

 Summary: Upgrade httpclient to 5.0
 Key: SOLR-11780
 URL: https://issues.apache.org/jira/browse/SOLR-11780
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 7.1
Reporter: Shawn Heisey


The httpcomponents project is working on the 5.0 release.  There is an alpha 
version of httpclient and a beta version of httpcore.  The httpmime dependency 
that Solr currently uses has been folded into the main httpclient.

I've enlisted the help of [~olegk] from httpcomponents and we'll be using 
github to coordinate.

This work will not be committed officially until the 5.0 version of httpclient 
is actually released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 349 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/349/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([407D59BB4AFF4E03:C8296661E40323FB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.assertInvariants(TimeRoutedAliasUpdateProcessorTest.java:245)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test(TimeRoutedAliasUpdateProcessorTest.java:123)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296922#comment-16296922
 ] 

Adrien Grand commented on LUCENE-8104:
--

bq. FunctionScoreQuery is also in the queries module

Sorry I got confused by the fact that DoubleValuesSource is in core. +1 to the 
patch.



> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
> Attachments: LUCENE-8104.patch
>
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValueSource or LongValueSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296907#comment-16296907
 ] 

David Smiley commented on LUCENE-8104:
--

bq. Removing ValueSource entirely would be great, but it's heavily used within 
Solr. Would you want to just move the classes into the Solr subproject, or try 
and rework things so that Solr uses DoubleValuesSource instead?

Indeed those are the options.  The latter is certainly preferable to me but is 
of course a lot of work -- not sure how much.

RE facets module: if its tests only depend on the queries module then the 
build/dependencies should be modified as such (or, as Adrien suggested, its tests 
could switch to FunctionScoreQuery, thus moving _away_ from the ValueSource 
framework).  Right now the facets module artificially 
declares through its dependencies that a user needs the queries module when 
that's not true.  This ought to be another issue; sorry for distracting us here.

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
> Attachments: LUCENE-8104.patch
>
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValueSource or LongValueSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1015 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1015/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:43207

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:43207
at 
__randomizedtesting.SeedInfo.seed([A8EF970815D23401:20BBA8D2BB2E59F9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:315)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-11597) Implement RankNet.

2017-12-19 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296893#comment-16296893
 ] 

Michael A. Alcorn commented on SOLR-11597:
--

Hi, [~cpoerschke]. Is there anything else I can help with on this?

> Implement RankNet.
> --
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Michael A. Alcorn
>
> Implement RankNet as described in [this 
> tutorial|https://github.com/airalcorn2/Solr-LTR].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4334 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4334/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.DistributedVersionInfoTest.testReplicaVersionHandling

Error Message:
Error from server at 
http://127.0.0.1:54712/solr/c8n_vers_1x3_shard1_replica_n2: Expected mime type 
application/octet-stream but got text/html. HTTP ERROR 500: Problem accessing 
/solr/c8n_vers_1x3_shard1_replica_n2/update. Reason: 
java.lang.NoClassDefFoundError: 
org/apache/solr/update/processor/DistributedUpdateProcessor$RollupRequestReplicationTracker
(Powered by Jetty 9.3.20.v20170531)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:54712/solr/c8n_vers_1x3_shard1_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 500 


HTTP ERROR: 500
Problem accessing /solr/c8n_vers_1x3_shard1_replica_n2/update. Reason:
java.lang.NoClassDefFoundError: 
org/apache/solr/update/processor/DistributedUpdateProcessor$RollupRequestReplicationTracker
(Powered by Jetty 9.3.20.v20170531)



at 
__randomizedtesting.SeedInfo.seed([5A04E9E573539387:86FD3E1FD12859C6]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:796)
at 
org.apache.solr.cloud.DistributedVersionInfoTest.sendDoc(DistributedVersionInfoTest.java:331)
at 
org.apache.solr.cloud.DistributedVersionInfoTest.testReplicaVersionHandling(DistributedVersionInfoTest.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 

[jira] [Created] (SOLR-11779) Basic long-term collection of aggregated metrics

2017-12-19 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-11779:


 Summary: Basic long-term collection of aggregated metrics
 Key: SOLR-11779
 URL: https://issues.apache.org/jira/browse/SOLR-11779
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: master (8.0), 7.3
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


Tracking the key metrics over time is very helpful in understanding the cluster 
and user behavior.

Currently even basic metrics tracking requires setting up an external system 
and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
advantage of this setup is that these external tools usually provide a lot of 
sophisticated functionality. The downside is that they don't ship out of the 
box with Solr and require additional admin effort to set up.

Solr could collect some of the key metrics and keep their historical values in 
a round-robin database (e.g. using RRD4j) to keep the size of the historic data 
constant (e.g. ~64kB per metric), while still providing useful out-of-the-box 
insight into basic system behavior over time. This data could be persisted to 
the {{.system}} collection as blobs, and it could also be presented in the 
Admin UI as graphs.
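
As a rough illustration only (not the design proposed here), the following 
sketch shows how a fixed-size, round-robin history for a single metric could be 
populated with RRD4j; the data source name, step and archive settings are 
assumptions made up for the example.

{code:java}
import org.rrd4j.ConsolFun;
import org.rrd4j.DsType;
import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;
import org.rrd4j.core.Sample;

public class MetricHistorySketch {
  public static void main(String[] args) throws Exception {
    // Define a round-robin database with a 60-second step; its on-disk size is
    // fixed up front no matter how long samples keep being collected.
    RrdDef def = new RrdDef("queryRate.rrd", 60);
    def.addDatasource("queryRate", DsType.GAUGE, 120, Double.NaN, Double.NaN);
    // Two archives: ~4 hours of 1-minute averages, ~30 days of hourly averages.
    def.addArchive(ConsolFun.AVERAGE, 0.5, 1, 240);
    def.addArchive(ConsolFun.AVERAGE, 0.5, 60, 720);

    RrdDb db = new RrdDb(def);
    try {
      // Periodically push the current value; old samples are consolidated and
      // overwritten in place, which keeps the file size constant.
      Sample sample = db.createSample();
      sample.setTime(System.currentTimeMillis() / 1000); // RRD timestamps are in seconds
      sample.setValue("queryRate", 42.0);
      sample.update();
    } finally {
      db.close();
    }
  }
}
{code}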



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21108 - Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21108/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:44601/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34077/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:44601/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34077/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([39DAE57A28CC74A:A9507DA5155F129A]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:991)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-11201) Implement trigger for arbitrary metrics

2017-12-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11201:
-
Attachment: SOLR-11201.patch

This patch adds a MetricTrigger that creates MetricBreachedEvents. The 
intention is to watch container-level metrics and perform move-replica actions 
when the configured thresholds are breached, but any arbitrary metric (even 
core-level) can be used as well. It supports "below" and "above" threshold 
values and can limit operations to a specific collection, shard or a single node.

Here is an example set-trigger call that fires the trigger whenever the total 
usable space on a node hosting replicas of "mycollection" falls below 100GB. 
The computed plan will then move replicas of mycollection away from such nodes.
{code}
{
  "set-trigger": {
"name": "metric_trigger",
"event": "metric",
"waitFor": "5s",
"metric": "metric:solr.node:CONTAINER.fs.coreRoot.usableSpace"
"below": 107374182400,
"collection": "mycollection",
"shard": "shard1"
  }
}
{code}

> Implement trigger for arbitrary metrics
> ---
>
> Key: SOLR-11201
> URL: https://issues.apache.org/jira/browse/SOLR-11201
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.2
>
> Attachments: SOLR-11201.patch
>
>
> It should be possible to set a trigger on any metrics exposed by the Metrics 
> API using a threshold value. Supporting {{waitFor}} may not be possible or 
> useful for all metrics. For those we will implement proper trigger support 
> (such as searchRate). However, a naive implementation might be to just poll 
> the value of the metric frequently and, if it is consistently above the 
> threshold, fire the trigger.
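
As a generic illustration of that naive polling approach (this is not the 
MetricTrigger implementation from the attached patch), a minimal sketch could 
look like the following; the supplier, threshold and waitFor handling are 
assumptions for the example only.

{code:java}
import java.util.function.DoubleSupplier;

/** Sketch: fire once a sampled metric has stayed above a threshold for waitFor ms. */
class PollingThresholdTrigger {
  private final DoubleSupplier metric; // e.g. reads the current value from the Metrics API
  private final double above;          // threshold to watch
  private final long waitForMs;        // how long the breach must persist before firing
  private long breachedSinceMs = -1;

  PollingThresholdTrigger(DoubleSupplier metric, double above, long waitForMs) {
    this.metric = metric;
    this.above = above;
    this.waitForMs = waitForMs;
  }

  /** Called periodically by a scheduler; returns true when the trigger should fire. */
  boolean poll(long nowMs) {
    if (metric.getAsDouble() > above) {
      if (breachedSinceMs < 0) {
        breachedSinceMs = nowMs;                   // breach just started
      }
      return nowMs - breachedSinceMs >= waitForMs; // consistently above for waitFor
    }
    breachedSinceMs = -1;                          // value recovered, reset the window
    return false;
  }
}
{code}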



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11285) Support simulations at scale in the autoscaling framework

2017-12-19 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11285.
--
Resolution: Fixed

> Support simulations at scale in the autoscaling framework
> -
>
> Key: SOLR-11285
> URL: https://issues.apache.org/jira/browse/SOLR-11285
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11285.patch
>
>
> This is a spike to investigate how difficult it would be to modify the 
> autoscaling framework so that it's possible to run simulated large-scale 
> experiments and test its dynamic behavior without actually spinning up a 
> large cluster.
> Currently many components rely heavily on actual Solr, ZK and the behavior of 
> ZK watches, or insist on making actual HTTP calls. A notable exception is the 
> core Policy framework, where most of the ZK / Solr details are abstracted away.
> As the autoscaling algorithms we implement become more and more complex, the 
> ability to effectively run multiple large simulations will be crucial - it's 
> very easy to unknowingly introduce catastrophic instabilities that don't 
> manifest themselves in regular unit tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 350 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/350/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.clustering.ClusteringComponentTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering\custom:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering\custom

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf\clustering
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1\conf
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-clustering\test\J1\temp\solr.handler.clustering.ClusteringComponentTest_2E226C96CB0397E1-001\tempDir-001
   

[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296661#comment-16296661
 ] 

Alan Woodward commented on LUCENE-8104:
---

FunctionScoreQuery is also in the queries module

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
> Attachments: LUCENE-8104.patch
>
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValuesSource or LongValuesSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11774) langid.map.individual won't work with langid.map.keepOrig

2017-12-19 Thread Marco Remy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296643#comment-16296643
 ] 

Marco Remy commented on SOLR-11774:
---

I found a workaround for this:

{code:xml}


{code}

The workaround adds a copyField configuration to the schema that copies the 
mapped data back to the original field.


> langid.map.individual won't work with langid.map.keepOrig
> -
>
> Key: SOLR-11774
> URL: https://issues.apache.org/jira/browse/SOLR-11774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LangId
>Affects Versions: 6.5
>Reporter: Marco Remy
>Priority: Minor
>
> I tried to get language detection to work.
> *Setting:*
> {code:xml}
>  class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
>   title,author
>   detected_languages
>   de,en
>   txt
>   true
>   true
>   true
> 
> {code}
> Main purpose:
> * Map fields individually
> * Keep the original field
> But the fields won't be mapped individually. They are all mapped to a single 
> detected language. After some hours of investigation I finally found the 
> reason: *The option langid.map.keepOrig breaks the individual mapping 
> function.* The fields are only mapped as expected when it is disabled.
> - Regards



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 107 - Still unstable

2017-12-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/107/

16 tests failed.
FAILED:  
org.apache.lucene.analysis.opennlp.TestOpenNLPPOSFilterFactory.testBasic

Error Message:
term 0 expected:<[Senntennce]> but was:<[.]>

Stack Trace:
org.junit.ComparisonFailure: term 0 expected:<[Senntennce]> but was:<[.]>
at 
__randomizedtesting.SeedInfo.seed([48D3596BCDAE3476:E329447E1272B258]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:201)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:325)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:329)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:865)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:727)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:723)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:384)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertAnalyzesTo(BaseTokenStreamTestCase.java:420)
at 
org.apache.lucene.analysis.opennlp.TestOpenNLPPOSFilterFactory.testBasic(TestOpenNLPPOSFilterFactory.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
 

[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk1.8.0_144) - Build # 23 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/23/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001\t1987:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001\t1987

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001\t1987:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001\testThreadSafety-001\t1987
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_FA68C35E035EAD57-001

at __randomizedtesting.SeedInfo.seed([FA68C35E035EAD57]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.dataimport.TestSqlEntityProcessor.testCachedChildEntities

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSqlEntityProcessor_975F1DF6C795D55F-001\tempDir-001

Stack Trace:
java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSqlEntityProcessor_975F1DF6C795D55F-001\tempDir-001
at 
__randomizedtesting.SeedInfo.seed([975F1DF6C795D55F:64D933FB520141C2]:0)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at 
org.apache.solr.handler.dataimport.AbstractSqlEntityProcessorTestCase.afterSqlEntitiyProcessorTestCase(AbstractSqlEntityProcessorTestCase.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 

[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 85 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/85/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=18849, name=jetty-launcher-6461-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=18837, name=jetty-launcher-6461-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=18849, name=jetty-launcher-6461-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 105 - Still Failing

2017-12-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/105/

No tests ran.

Build Log:
[...truncated 28287 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.37 sec (0.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.4 MB in 0.06 sec (491.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 72.3 MB in 0.31 sec (231.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 82.8 MB in 0.09 sec (881.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6282 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6282 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (15.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker] 53.6 MB in 1.14 sec (47.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker] 147.3 MB in 2.21 sec (66.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker] 148.3 MB in 2.26 sec (65.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 ...

[jira] [Resolved] (LUCENE-4100) Maxscore - Efficient Scoring

2017-12-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-4100.
--
Resolution: Fixed

> Maxscore - Efficient Scoring
> 
>
> Key: LUCENE-4100
> URL: https://issues.apache.org/jira/browse/LUCENE-4100
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/query/scoring, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Stefan Pohl
>  Labels: api-change, gsoc2014, patch, performance
> Fix For: master (8.0)
>
> Attachments: LUCENE-4100.patch, LUCENE-4100.patch, LUCENE-4100.patch, 
> LUCENE-4100.patch, contrib_maxscore.tgz, maxscore.patch
>
>
> At Berlin Buzzwords 2012, I will be presenting 'maxscore', an efficient 
> algorithm first published in the IR domain in 1995 by H. Turtle & J. Flood, 
> which I find deserves more attention among Lucene users (and developers).
> I implemented a proof of concept and did some performance measurements with 
> example queries and lucenebench, Mike McCandless's benchmarking package, 
> resulting in very significant speedups.
> This ticket is to start the discussion on including the implementation in 
> Lucene's codebase. Because the technique requires awareness on the part of 
> the Lucene user/developer, it seems best to ship it as a contrib/module 
> package so that it can be chosen consciously.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7979) Move disjunctions to a radix heap

2017-12-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7979.
--
Resolution: Won't Fix

> Move disjunctions to a radix heap
> -
>
> Key: LUCENE-7979
> URL: https://issues.apache.org/jira/browse/LUCENE-7979
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Trivial
> Attachments: LUCENE-7979.patch, LUCENE-7979.patch, LUCENE-7979.patch
>
>
> An Elasticsearch user argued that we should look into using radix heaps in 
> order to run disjunctions, so I wanted to give it a try. I'm creating this 
> issue to share findings. Spoiler: so far it does not seem to help, but maybe 
> I'm just doing it wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11778) Add per-stage RequestHandler metrics

2017-12-19 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-11778:


 Summary: Add per-stage RequestHandler metrics
 Key: SOLR-11778
 URL: https://issues.apache.org/jira/browse/SOLR-11778
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


Currently {{RequestHandlerBase}} defines several metrics for keeping track of 
general-purpose requests. These metrics are also used in {{SearchHandler}}.

However, query processing in SolrCloud consists of several (varying) stages, 
and each of these stages may result in additional shard requests. These details 
are not captured by the metrics, because they treat all requests the same way.

In some applications it's important to know how many requests are user 
requests and how many are generated internally by SolrCloud - for example, 
when diagnosing uneven distribution of requests across shards, or when the 
autoscaling framework wants to detect "hot replicas". If we split the metrics 
at least into distributed vs. non-distributed requests, the numbers become 
more meaningful and representative of global load vs. local load.
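
As a minimal sketch of what such a split could look like with the Dropwizard 
Metrics library that Solr's metrics are built on (the registry usage is real 
Dropwizard API, but the metric names and the local/distributed split below are 
hypothetical, not the names this issue will introduce):

{code:java}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class PerOriginRequestMetrics {
  private final Timer localRequests;   // requests sent directly by users
  private final Timer distribRequests; // shard requests generated internally by SolrCloud

  public PerOriginRequestMetrics(MetricRegistry registry) {
    localRequests = registry.timer("QUERY./select.requestTimes.local");
    distribRequests = registry.timer("QUERY./select.requestTimes.distrib");
  }

  public void record(boolean isDistrib, Runnable handler) {
    Timer timer = isDistrib ? distribRequests : localRequests;
    try (Timer.Context ctx = timer.time()) { // counts the request and measures latency
      handler.run();
    }
  }
}
{code}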



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296577#comment-16296577
 ] 

Adrien Grand commented on LUCENE-8104:
--

bq. The facets module also uses a FunctionQuery in 
TestTaxonomyFacetSumValueSource, so it can't be removed entirely.

It looks like it can be easily replaced with a FunctionScoreQuery?
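
For reference, a minimal sketch of that kind of replacement using only the core 
DoubleValuesSource API (the field names here are made up for illustration, not 
taken from the test):

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class FunctionScoreQueryExample {
  // Instead of wrapping a ValueSource in a FunctionQuery, an ordinary query can
  // be scored by a per-document numeric value via the core DoubleValuesSource API.
  static Query scoredByField(String field, String value, String numericField) {
    Query base = new TermQuery(new Term(field, value));
    return new FunctionScoreQuery(base, DoubleValuesSource.fromLongField(numericField));
  }
}
{code}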

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
> Attachments: LUCENE-8104.patch
>
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValuesSource or LongValuesSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7062 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7062/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrXmlInZkTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1\data
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961\server1
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002\zookeeper1374581066732961
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.SolrXmlInZkTest_AF023941D9C78C36-001\tempDir-002

at __randomizedtesting.SeedInfo.seed([AF023941D9C78C36]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestConfigSetProperties

Error Message:
Could not remove the following files (in the order of 

[jira] [Updated] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8104:
--
Attachment: LUCENE-8104.patch

Here's a patch changing the facet dependency on queries to be test-only, and 
fixing the javadocs.

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
> Attachments: LUCENE-8104.patch
>
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValuesSource or LongValuesSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



7.2.0 release notes

2017-12-19 Thread Adrien Grand
I started working on the release notes of the upcoming 7.2.0 release.
 * https://wiki.apache.org/lucene-java/ReleaseNote72
 * https://wiki.apache.org/solr/ReleaseNote72

Feel free to edit, especially release highlights.

Thanks.


[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 1014 - Still Unstable!

2017-12-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1014/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
9 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=9166, 
name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=9171, name=zkCallback-1474-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=9167, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[3D012FB18D92CB07]-SendThread(127.0.0.1:45741),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:230)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1246)   
  at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)4) 
Thread[id=9365, name=zkCallback-1474-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=9350, name=zkCallback-1474-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)6) 
Thread[id=9326, name=zkCallback-1474-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 

[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296505#comment-16296505
 ] 

Alan Woodward commented on LUCENE-8104:
---

The facets module also uses a FunctionQuery in TestTaxonomyFacetSumValueSource, 
so it can't be removed entirely.  But we could make it a test-only dependency 
instead?

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValuesSource or LongValuesSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.
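
As a concrete picture of the dependency described above, here is a minimal sketch of grouping by a computed value through the queries module's ValueSource framework, assuming the 7.x GroupingSearch constructor that accepts a ValueSource (the entry point the description names); the field name "category" is illustrative only:

import java.io.IOException;
import java.util.HashMap;

import org.apache.lucene.queries.function.valuesource.BytesRefFieldSource;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.grouping.GroupingSearch;
import org.apache.lucene.search.grouping.TopGroups;

public class ValueSourceGroupingSketch {
  // Groups every document by the per-document value of "category"; the
  // ValueSource (BytesRefFieldSource) comes from the queries module, which is
  // exactly the cross-module dependency this issue wants to remove.
  public static TopGroups<?> groupByCategory(IndexSearcher searcher) throws IOException {
    GroupingSearch grouping =
        new GroupingSearch(new BytesRefFieldSource("category"), new HashMap<>());
    return grouping.search(searcher, new MatchAllDocsQuery(), 0, 10);
  }
}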



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8099) Deprecate CustomScoreQuery, BoostedQuery and BoostingQuery

2017-12-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296474#comment-16296474
 ] 

ASF subversion and git services commented on LUCENE-8099:
-

Commit 79073fafd34341afb575b4a045f2511d35d30d90 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=79073fa ]

LUCENE-8099: Fix IntelliJ dependencies


> Deprecate CustomScoreQuery, BoostedQuery and BoostingQuery
> --
>
> Key: LUCENE-8099
> URL: https://issues.apache.org/jira/browse/LUCENE-8099
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 7.3
>
> Attachments: LUCENE-8099.patch, LUCENE-8099.patch
>
>
> After LUCENE-7998, these three queries can all be replaced by a 
> FunctionScoreQuery.  Using lucene-expressions makes them much easier to use 
> as well.
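
For anyone tracking the migration, a minimal sketch of the replacement path described above, assuming only the existing 7.x FunctionScoreQuery (from LUCENE-7998) and the lucene-expressions JavaScript compiler; the wrapped query, the "popularity" field, and the boost expression are illustrative assumptions, not code from the attached patches:

import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;

public class FunctionScoreSketch {
  // Roughly the shape that a BoostedQuery/CustomScoreQuery use case takes after
  // the deprecation: wrap the original query and compute the new score from a
  // DoubleValuesSource produced by a compiled expression.
  public static Query boostedByPopularity() throws Exception {
    Query in = new TermQuery(new Term("body", "lucene"));
    Expression expr = JavascriptCompiler.compile("_score * ln(popularity + 1)");
    SimpleBindings bindings = new SimpleBindings();
    bindings.add(new SortField("_score", SortField.Type.SCORE));    // the wrapped query's score
    bindings.add(new SortField("popularity", SortField.Type.LONG)); // a numeric doc-values field
    return new FunctionScoreQuery(in, expr.getDoubleValuesSource(bindings));
  }
}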



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8099) Deprecate CustomScoreQuery, BoostedQuery and BoostingQuery

2017-12-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296473#comment-16296473
 ] 

ASF subversion and git services commented on LUCENE-8099:
-

Commit 493220ac3be6b42dea8b5a75d1ecf2063f6da92a in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=493220a ]

LUCENE-8099: Fix IntelliJ dependencies


> Deprecate CustomScoreQuery, BoostedQuery and BoostingQuery
> --
>
> Key: LUCENE-8099
> URL: https://issues.apache.org/jira/browse/LUCENE-8099
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 7.3
>
> Attachments: LUCENE-8099.patch, LUCENE-8099.patch
>
>
> After LUCENE-7998, these three queries can all be replaced by a 
> FunctionScoreQuery.  Using lucene-expressions makes them much easier to use 
> as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 7.2.0 RC1

2017-12-19 Thread Adrien Grand
This vote has passed, thanks all for voting. I will proceed with the next
steps.

On Tue, 19 Dec 2017 at 07:34, Shalin Shekhar Mangar 
wrote:

> +1
>
> SUCCESS! [1:12:05.185679]
>
> On Fri, Dec 15, 2017 at 1:52 PM, Adrien Grand  wrote:
> > Please vote for release candidate 1 for Lucene/Solr 7.2.0
> >
> > The artifacts can be downloaded from:
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.2.0-RC1-revbca54cad5a9f6a80800944fd5bd585b68acde8c8/
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >   https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.2.0-RC1-revbca54cad5a9f6a80800944fd5bd585b68acde8c8/
> >
> > Here's my +1
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-8104) Grouping module should no longer depend on Queries module (ValueSource)

2017-12-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296426#comment-16296426
 ] 

Alan Woodward commented on LUCENE-8104:
---

LUCENE-7889 is a start at implementing this.

Removing ValueSource entirely would be great, but it's heavily used within 
Solr.  Would you want to just move the classes into the Solr subproject, or try 
and rework things so that Solr uses DoubleValuesSource instead?
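
To make the second option concrete, a small hedged sketch of what swapping the queries-module ValueSource for the core DoubleValuesSource looks like for a simple per-document numeric value; both classes shown exist in 7.x, and the "popularity" field name is only an example (this is not a proposed grouping API):

import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.DoubleFieldSource;
import org.apache.lucene.search.DoubleValues;
import org.apache.lucene.search.DoubleValuesSource;

public class ValuesMigrationSketch {
  // queries-module style: the framework Solr (and the grouping module) use today.
  static final ValueSource LEGACY = new DoubleFieldSource("popularity");

  // core style: the framework this issue proposes depending on instead.
  static final DoubleValuesSource MODERN = DoubleValuesSource.fromDoubleField("popularity");

  // Reading a per-document value through the core API:
  static double valueFor(LeafReaderContext ctx, int doc) throws IOException {
    DoubleValues values = MODERN.getValues(ctx, null); // null: no scores needed here
    return values.advanceExact(doc) ? values.doubleValue() : 0d;
  }
}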

> Grouping module should no longer depend on Queries module (ValueSource)
> ---
>
> Key: LUCENE-8104
> URL: https://issues.apache.org/jira/browse/LUCENE-8104
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/grouping
>Reporter: David Smiley
>
> The Grouping module depends on the Queries module in GroupingSearch / 
> ValueSourceGroupSelector to use the ValueSource framework.  It should instead 
> use the newer DoubleValuesSource or LongValuesSource framework in Core.  As I 
> write this, this appears to be the last part of Lucene to refer to the 
> ValueSource framework, and I think we should then remove it -- for another 
> issue of course.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org