[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1171 - Still Unstable!

2018-01-12 Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1171/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=35873, name=jetty-launcher-9486-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=35869, name=jetty-launcher-9486-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=35873, name=jetty-launcher-9486-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at 
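Both leaked threads are Curator event threads parked inside CountDownLatch.await, which is exactly why the leak checker reports them as TIMED_WAITING. A minimal, self-contained sketch (thread name and timings are hypothetical, not from the test) of how a timed latch wait produces that thread state:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchParkDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            try {
                // Parks via LockSupport.parkNanos, like the leaked Curator
                // event threads waiting in internalBlockUntilConnectedOrTimedOut.
                latch.await(10, TimeUnit.SECONDS);
            } catch (InterruptedException ignored) {
            }
        }, "event-thread-demo");
        t.start();
        // Spin until the thread has actually parked with a deadline.
        while (t.getState() != Thread.State.TIMED_WAITING) {
            Thread.sleep(10);
        }
        System.out.println(t.getName() + " state=" + t.getState()); // TIMED_WAITING
        latch.countDown(); // release it so the demo exits cleanly
        t.join();
    }
}
```

A suite that starts such a thread and never counts the latch down (or never shuts the client down) leaves it parked past teardown, which is what randomizedtesting's thread-leak control flags.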

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7110 - Still Unstable!

2018-01-12 Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7110/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMmapDirectory

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_B3478A9E46382E62-001\testDirectoryFilter-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_B3478A9E46382E62-001\testDirectoryFilter-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_B3478A9E46382E62-001\testDirectoryFilter-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_B3478A9E46382E62-001\testDirectoryFilter-001
    at __randomizedtesting.SeedInfo.seed([B3478A9E46382E62]:0)
    at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
    at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
    at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)
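The `DirectoryNotEmptyException` above is standard NIO behavior: `Files.delete` on a directory fails while any entry remains (and on Windows a file that is still memory-mapped cannot be removed, so the cleanup rule can find the directory still occupied). A minimal sketch of the exception itself, using a throwaway temp directory rather than the Jenkins paths:

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirRemoveDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("mmap-demo");
        Path leftover = Files.createFile(dir.resolve("still-mapped.bin"));
        try {
            Files.delete(dir); // fails: the directory still has an entry
        } catch (DirectoryNotEmptyException e) {
            System.out.println("caught DirectoryNotEmptyException: " + e.getFile());
        }
        // Clean up in the right order: file first, then the directory.
        Files.delete(leftover);
        Files.delete(dir);
    }
}
```

Lucene's `IOUtils.rm` wraps exactly this kind of failure into the "Could not remove the following files" IOException seen in the report.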


FAILED:  org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData

Error Message:
Node watch should have fired!

Stack Trace:
java.lang.AssertionError: Node watch should have fired!
    at __randomizedtesting.SeedInfo.seed([B3B8CFCB7ED9E0F4:952820992A5726FE]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData(TestDistribStateManager.java:256)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 313 - Failure

2018-01-12 Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/313/

17 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:49012/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:49012/solr
    at __randomizedtesting.SeedInfo.seed([88E3C60A7721C224:53E96B6669D4FE9B]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
    at org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard(CollectionsAPISolrJTest.java:243)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+37) - Build # 21266 - Still Unstable!

2018-01-12 Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21266/
Java: 64bit/jdk-10-ea+37 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
    at __randomizedtesting.SeedInfo.seed([B2396574FC9AA482:3A6D5AAE5266C97A]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertNotNull(Assert.java:526)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:913)
    at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
    at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 390 - Still Unstable!

2018-01-12 Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/390/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testAddNode

Error Message:
no IGNORED events

Stack Trace:
java.lang.AssertionError: no IGNORED events
    at __randomizedtesting.SeedInfo.seed([A9E6FA464D570416:E09E7E5821A8B0E]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testAddNode(TestLargeCluster.java:267)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:


Stack Trace:
java.lang.AssertionError
    at __randomizedtesting.SeedInfo.seed([A9E6FA464D570416:71ABD711BA8AA1B6]:0)
    at org.junit.Assert.fail(Assert.java:92)
    at 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+37) - Build # 1170 - Still Unstable!

2018-01-12 Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1170/
Java: 64bit/jdk-10-ea+37 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testMultiVariateNormalDistribution

Error Message:


Stack Trace:
java.lang.AssertionError
    at __randomizedtesting.SeedInfo.seed([5E9A2644272FE770:C461AF331006A57B]:0)
    at org.junit.Assert.fail(Assert.java:92)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertTrue(Assert.java:54)
    at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testMultiVariateNormalDistribution(StreamExpressionTest.java:7549)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig

Error Message:
expected: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{ "cluster-preferences":[{"maximize":"freedisk"}], 

[JENKINS] Lucene-Solr-SmokeRelease-7.2 - Build # 11 - Still Failing

2018-01-12 Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.2/11/

No tests ran.

Build Log:
[...truncated 28043 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL "file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (43.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.2.1-src.tgz...
   [smoker] 32.0 MB in 0.02 sec (1296.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.1.tgz...
   [smoker] 71.0 MB in 0.06 sec (1252.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.1.zip...
   [smoker] 81.5 MB in 0.07 sec (1250.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.2.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6227 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (64.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.2.1-src.tgz...
   [smoker] 54.1 MB in 0.35 sec (154.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.1.tgz...
   [smoker] 146.1 MB in 0.83 sec (175.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.1.zip...
   [smoker] 147.1 MB in 0.60 sec (243.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.2.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.2.1.tgz...
   [smoker]   **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.1/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.1/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.1-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.1-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.2/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.1-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=25523). Happy searching!
   [smoker] 
   

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1622 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1622/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig

Error Message:
expected: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}>

Stack Trace:
java.lang.AssertionError: expected: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}>
at 
__randomizedtesting.SeedInfo.seed([D1679C3BF6DD9A4C:EEEF9F93E18B6AAB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig(TestClusterStateProvider.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[jira] [Updated] (SOLR-11810) Upgrade Jetty to 9.4.x

2018-01-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11810:
-
Attachment: SOLR-11810.patch

precommit passes with the latest patch.

I'll run the tests a few more times and then commit it in the next 2-3 days if 
everything runs smoothly.

Or wait, Erick beat me to it, so I'll let him commit it.

> Upgrade Jetty to 9.4.x 
> ---
>
> Key: SOLR-11810
> URL: https://issues.apache.org/jira/browse/SOLR-11810
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Attachments: SOLR-11810.patch, SOLR-11810.patch, SOLR-11810.patch
>
>
> Jetty 9.4.x was released over a year ago: 
> https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00097.html . Solr 
> doesn't use any of the major improvements listed on the announce thread, but 
> it's the series that's in active development. 
> We should upgrade from the Jetty 9.3.x series to 9.4.x.
> The latest version right now is 9.4.8.v20171121. Upgrading it locally only 
> required a few compile-time changes. 
> Under "Default Sessions" in 
> https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html#_upgrading_from_jetty_9_3_x_to_jetty_9_4_0
> it states that "In previous versions of Jetty this was referred to as 
> "hash" session management."
> The patch fixes all the compile-time issues.
> Currently two tests are failing:
> TestRestManager
> TestManagedSynonymGraphFilterFactory
> The steps to upgrade the Jetty version were:
> 1. Modify {{ivy-versions.properties}} to reflect the new version number
> 2. Run {{ant jar-checksums}} to generate new JAR checksums
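The two steps above can be sketched as a shell session. This is a hedged illustration, not the exact workflow: it writes a throwaway ivy-versions.properties so the edit can be shown end to end, and the property name `org.eclipse.jetty.version` is an assumption; check the real file in your checkout for the actual key.

```shell
# Throwaway copy of the properties file, so the edit below is demonstrable.
# (In a real checkout you would edit lucene/ivy-versions.properties directly.)
printf 'org.eclipse.jetty.version=9.3.20.v20170531\n' > ivy-versions.properties

# Step 1: bump the version property to the new release.
sed -i 's/^org.eclipse.jetty.version=.*/org.eclipse.jetty.version=9.4.8.v20171121/' \
    ivy-versions.properties
grep jetty ivy-versions.properties

# Step 2: regenerate the JAR checksums (run from the checkout root):
# ant jar-checksums
```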



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21265 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21265/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([AC5770E25B384D37]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:507)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:251)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.analytics.legacy.LegacyAbstractAnalyticsCloudTest.setupCollection(LegacyAbstractAnalyticsCloudTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.AssertionError
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.getUpperBoundASTs(WildcardTypeImpl.java:86)
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.getUpperBounds(WildcardTypeImpl.java:122)
at 
sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.toString(WildcardTypeImpl.java:190)
at java.lang.reflect.Type.getTypeName(Type.java:46)
at 
sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.toString(ParameterizedTypeImpl.java:234)
at java.lang.reflect.Type.getTypeName(Type.java:46)
at 
java.lang.reflect.Method.specificToGenericStringHeader(Method.java:421)
at 
java.lang.reflect.Executable.sharedToGenericString(Executable.java:163)
at java.lang.reflect.Method.toGenericString(Method.java:415)
at java.beans.MethodRef.set(MethodRef.java:46)
at 
java.beans.MethodDescriptor.setMethod(MethodDescriptor.java:117)
at java.beans.MethodDescriptor.(MethodDescriptor.java:72)
at java.beans.MethodDescriptor.(MethodDescriptor.java:56)
at 
java.beans.Introspector.getTargetMethodInfo(Introspector.java:1205)
at java.beans.Introspector.getBeanInfo(Introspector.java:426)
at java.beans.Introspector.getBeanInfo(Introspector.java:173)
at java.beans.Introspector.getBeanInfo(Introspector.java:260)
at java.beans.Introspector.(Introspector.java:407)
at java.beans.Introspector.getBeanInfo(Introspector.java:173)
   

[jira] [Commented] (SOLR-11810) Upgrade Jetty to 9.4.x

2018-01-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324768#comment-16324768
 ] 

Erick Erickson commented on SOLR-11810:
---

All tests pass. I'll pound on this over the weekend with some of the slower 
options turned on and commit probably Monday unless:

1> there are objections
2> I find some issues

It seems like a good time to commit, as we'll get some mileage on this before 
7.3.

Do note that I was testing a couple of issues and could reliably reproduce 
bogus update failures/recoveries; with this patch they no longer reproduce, so 
there are practical reasons to upgrade.

> Upgrade Jetty to 9.4.x 
> ---
>
> Key: SOLR-11810
> URL: https://issues.apache.org/jira/browse/SOLR-11810
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Attachments: SOLR-11810.patch, SOLR-11810.patch
>
>
> Jetty 9.4.x was released over a year ago: 
> https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00097.html . Solr 
> doesn't use any of the major improvements listed on the announce thread, but 
> it's the series that's in active development. 
> We should upgrade from the Jetty 9.3.x series to 9.4.x.
> The latest version right now is 9.4.8.v20171121. Upgrading it locally only 
> required a few compile-time changes. 
> Under "Default Sessions" in 
> https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html#_upgrading_from_jetty_9_3_x_to_jetty_9_4_0
> it states that "In previous versions of Jetty this was referred to as 
> "hash" session management."
> The patch fixes all the compile-time issues.
> Currently two tests are failing:
> TestRestManager
> TestManagedSynonymGraphFilterFactory
> The steps to upgrade the Jetty version were:
> 1. Modify {{ivy-versions.properties}} to reflect the new version number
> 2. Run {{ant jar-checksums}} to generate new JAR checksums






[jira] [Updated] (SOLR-11810) Upgrade Jetty to 9.4.x

2018-01-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11810:
-
Attachment: SOLR-11810.patch

Fixes a missing change to JettyWebappTest.

> Upgrade Jetty to 9.4.x 
> ---
>
> Key: SOLR-11810
> URL: https://issues.apache.org/jira/browse/SOLR-11810
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
> Attachments: SOLR-11810.patch, SOLR-11810.patch
>
>
> Jetty 9.4.x was released over a year ago: 
> https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00097.html . Solr 
> doesn't use any of the major improvements listed on the announce thread, but 
> it's the series that's in active development. 
> We should upgrade from the Jetty 9.3.x series to 9.4.x.
> The latest version right now is 9.4.8.v20171121. Upgrading it locally only 
> required a few compile-time changes. 
> Under "Default Sessions" in 
> https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html#_upgrading_from_jetty_9_3_x_to_jetty_9_4_0
> it states that "In previous versions of Jetty this was referred to as 
> "hash" session management."
> The patch fixes all the compile-time issues.
> Currently two tests are failing:
> TestRestManager
> TestManagedSynonymGraphFilterFactory
> The steps to upgrade the Jetty version were:
> 1. Modify {{ivy-versions.properties}} to reflect the new version number
> 2. Run {{ant jar-checksums}} to generate new JAR checksums






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 396 - Failure!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/396/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D91A3384D123341E:18EA4A28FC73FEB9]:0)
at 
org.apache.solr.cloud.TestTlogReplica.getSolrRunner(TestTlogReplica.java:898)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:534)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D91A3384D123341E:C51B4E09A4864A8D]:0)
at 
org.apache.solr.cloud.TestTlogReplica.getSolrCore(TestTlogReplica.java:863)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 920 - Still Failing

2018-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/920/

No tests ran.

Build Log:
[...truncated 28243 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (37.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.1 MB in 0.02 sec (1266.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 72.9 MB in 0.07 sec (1026.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.3 MB in 0.08 sec (994.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6231 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (26.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.3 MB in 0.05 sec (1014.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 150.2 MB in 0.14 sec (1079.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 151.2 MB in 0.15 sec (1021.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[jira] [Updated] (SOLR-11746) numeric fields need better error handling for prefix/wildcard syntax -- consider uniform support for "foo:* == foo:[* TO *]"

2018-01-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-11746:

Attachment: SOLR-11746.patch

Kai: Thank you for your patch.

I really don't think we should be special-casing points fields in the parser 
like that -- if for no other reason than that it does nothing to fix the bug 
with docValues-only trie fields, or to address the concerns about ensuring 
these two syntaxes are functionally equivalent for all types.

What surprised me the most about your patch was realizing that 
{{SolrQueryParserBase.getWildcardQuery}} is the method getting triggered by the 
grammar when {{foo:\*}} is parsed -- I assumed it was smart enough to use 
{{SolrQueryParserBase.getPrefixQuery}} with an empty prefix in this case.

I'm attaching a new patch that:
* makes {{getWildcardQuery}} delegate to {{getPrefixQuery(...,"")}} when the 
wildcard pattern is {{\*}}
* makes {{FieldType.getPrefixQuery}} smart enough to delegate to 
{{getRangeQuery(parser, sf, null, null, true, true)}} when the prefix is the 
empty string
* beefs up the query equality testing to cover more field types
* adds new testing to TestSolrQueryParser to ensure that both syntaxes do what 
is intended: match all docs that contain the specified field.

I feel like this solution is more robust, and IIUC it should even improve the 
performance of things like StrField and TextField by avoiding the need for an 
Automaton that matches all terms.

What do folks think of this approach?
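The delegation chain described above can be sketched in plain Java. This is an illustrative stand-in, not the real {{SolrQueryParserBase}}/{{FieldType}} API: the real callbacks build Lucene Query objects, while the methods and string representations here are hypothetical and exist only to show the routing of {{foo:\*}} down to a match-all range query.

```java
public class MatchAllDelegationSketch {

    // Hypothetical stand-in for SolrQueryParserBase.getWildcardQuery:
    // a bare "*" pattern is routed to the prefix-query path with an empty prefix.
    static String getWildcardQuery(String field, String pattern) {
        if ("*".equals(pattern)) {
            return getPrefixQuery(field, "");
        }
        return "WildcardQuery(" + field + ":" + pattern + ")";
    }

    // Hypothetical stand-in for FieldType.getPrefixQuery:
    // an empty prefix means "field has any value", i.e. an open-ended range.
    static String getPrefixQuery(String field, String prefix) {
        if (prefix.isEmpty()) {
            return getRangeQuery(field, null, null, true, true);
        }
        return "PrefixQuery(" + field + ":" + prefix + "*)";
    }

    // Hypothetical stand-in for getRangeQuery; null bounds render as "*".
    static String getRangeQuery(String field, String lower, String upper,
                                boolean includeLower, boolean includeUpper) {
        String lo = lower == null ? "*" : lower;
        String hi = upper == null ? "*" : upper;
        return field + ":" + (includeLower ? "[" : "{") + lo + " TO " + hi
                + (includeUpper ? "]" : "}");
    }

    public static void main(String[] args) {
        // foo:* and foo:[* TO *] end up as the same match-all-values query.
        System.out.println(getWildcardQuery("foo", "*"));   // foo:[* TO *]
        // A non-empty prefix still takes the ordinary prefix-query path.
        System.out.println(getPrefixQuery("foo", "ba"));    // PrefixQuery(foo:ba*)
    }
}
```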


> numeric fields need better error handling for prefix/wildcard syntax -- 
> consider uniform support for "foo:* == foo:[* TO *]"
> 
>
> Key: SOLR-11746
> URL: https://issues.apache.org/jira/browse/SOLR-11746
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-11746.patch, SOLR-11746.patch
>
>
> On the solr-user mailing list, Torsten Krah pointed out that with Trie 
> numeric fields, query syntax such as {{foo_d:\*}} has been functionally 
> equivalent to {{foo_d:\[\* TO \*]}}, and asked why this was not also supported 
> for Point-based numeric fields.
> The fact that this type of syntax works (for {{indexed="true"}} Trie fields) 
> appears to have been an (untested, undocumented) fluke of Trie fields, given 
> that they use indexed terms for the (encoded) numeric values and inherit the 
> default implementation of {{FieldType.getPrefixQuery}}, which produces a 
> prefix query against the {{""}} (empty string) term.  
> (Note that this syntax has apparently _*never*_ worked for Trie fields with 
> {{indexed="false" docValues="true"}}.)
> In general, we should assess the behavior when users attempt a prefix/wildcard 
> syntax query against numeric fields, as currently the behavior is largely 
> nonsensical: prefix/wildcard syntax frequently matches no docs without any 
> sort of error, and the aforementioned {{numeric_field:*}} behaves 
> inconsistently between Points/Trie fields and between indexed/docValued Trie 
> fields.






[jira] [Resolved] (SOLR-11851) Issue After adding another node in Apache Solr Cluster

2018-01-12 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11851.
---
Resolution: Not A Problem

Please raise this question on the user's list at solr-u...@lucene.apache.org 
(see http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are 
a _lot_ more people watching that list who may be able to help.

If it's determined that this really is a code issue in Solr and not a 
configuration/usage problem, we can raise a new JIRA or reopen this one.

When you do post to the user's list, you need to include the error from the 
Solr log on node3; that should give a better explanation of why that node 
didn't come up.


> Issue After adding another node in Apache Solr Cluster 
> ---
>
> Key: SOLR-11851
> URL: https://issues.apache.org/jira/browse/SOLR-11851
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 7.2
> Environment: Red Hat Enterprise Linux Server release 7.4 
> VM1 configured with-
> 1. Zookeeper1, 2 and 3 on different port
> 2. Solr 7.2 configured with 2 node and 2 shard and 2 replica
> VM2 - new server we are trying to add to the existing cluster. We followed 
> the instructions from the Apache Solr Reference Guide for 7.2, as below:
> unzip the Solr-7.2.0.tar.gz and-
> mkdir -p example/cloud/node3/solr
> cp server/solr/solr.xml example/cloud/node3/solr
> bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z  VM1>:
>  
>Reporter: Sushil Tripathi
>
> Environment Detail-
> 
> Red Hat Enterprise Linux Server release 7.4 
> VM1 configured with-
> 1. Zookeeper1, 2 and 3 on different port
> 2. Solr 7.2 configured with 2 node and 2 shard and 2 replica
> VM2 - new server we are trying to add to the existing cluster. We followed 
> the instructions from the Apache Solr Reference Guide for 7.2, as below:
> unzip the Solr-7.2.0.tar.gz and-
> mkdir -p example/cloud/node3/solr
> cp server/solr/solr.xml example/cloud/node3/solr
> bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z  VM1>:
> Issue-
> =
> while calling URL http://10.0.12.57:8983/solr/
> It seems the new node is still not part of the cluster and does not have any 
> cores or indexes. Thanks for help in advance.
> Error -
> =
> HTTP ERROR 404
> Problem accessing /solr/. Reason:
> Not Found
> Caused by:
> javax.servlet.UnavailableException: Error processing the request. 
> CoreContainer is either not initialized or shutting down.
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:342)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> 

[jira] [Commented] (SOLR-11597) Implement RankNet.

2018-01-12 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324712#comment-16324712
 ] 

Michael A. Alcorn commented on SOLR-11597:
--

[~cpoerschke] - yes, they are separate. RankNet is specifically a learning to 
rank model whereas the other architectures being discussed are more for 
modeling language. I describe how to train a RankNet model using Keras 
[here|https://github.com/airalcorn2/Solr-LTR#RankNet]. The weights can be 
exported from Keras in any format the user wants. The original Solr 
representation I suggested made it easy to export the weights from Keras since 
the weights are contained in matrices.
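For readers following along, the heart of RankNet is a pairwise cross-entropy loss over score differences; whatever network produces the document scores (e.g. one trained in Keras as linked above), the pairwise objective looks like this. A minimal pure-Python sketch, illustrative only and not the weight representation being discussed for Solr:

```python
import math

def ranknet_loss(s_i, s_j, p_target=1.0):
    """RankNet pairwise loss: cross-entropy between sigmoid(s_i - s_j),
    the modeled probability that doc i should outrank doc j, and the
    target probability (1.0 when i is truly the more relevant doc)."""
    p_model = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    # Clamp to avoid log(0) for extreme score gaps.
    p_model = min(max(p_model, 1e-12), 1.0 - 1e-12)
    return -(p_target * math.log(p_model)
             + (1.0 - p_target) * math.log(1.0 - p_model))
```

When s_i already exceeds s_j for a correctly ordered pair the loss approaches zero, while a mis-ordered pair is penalized roughly linearly in the score gap, which is what makes the objective trainable by gradient descent.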

> Implement RankNet.
> --
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Michael A. Alcorn
>
> Implement RankNet as described in [this 
> tutorial|https://github.com/airalcorn2/Solr-LTR].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11597) Implement RankNet.

2018-01-12 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324703#comment-16324703
 ] 

Christine Poerschke commented on SOLR-11597:


Returning to this after Deeplearning4J SOLR-11838 and Streaming Expressions 
SOLR-11852 diversions ...

So, would it be fair to assume that there is a use case for this type of neural 
network separate from the more complex neural networks that might in future be 
supported separately e.g. via Deeplearning4J integration?

Assuming there is a use case, how would folks typically train such models? I'm 
wondering if that would help us move forward and decide on the weights 
representation question.

> Implement RankNet.
> --
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Michael A. Alcorn
>
> Implement RankNet as described in [this 
> tutorial|https://github.com/airalcorn2/Solr-LTR].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-01-12 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324700#comment-16324700
 ] 

Christine Poerschke commented on SOLR-11838:


bq. ... using a streaming expression to pull data from  ...

I have not had an opportunity before to try out streaming expressions ... so 
had a go and hacked up a proof-of-concept this evening (SOLR-11852) - the 
connection to Deeplearning4j here could be for the pre-trained DL4J model 
definitions to be stored  and for contrib/ltr to be able to access 
them from there via a streaming expression.

The pre-trained models would of course still have to be hydrated, and for 
contrib/ltr purposes the 
[LTRScoringModel.java|https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/src/java/org/apache/solr/ltr/model/LTRScoringModel.java]
 base class would still need to be implemented.



> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
> Attachments: SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11852) retrieval of contrib/ltr model definitions via a streaming expression

2018-01-12 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11852:
---
Attachment: SOLR-11852.patch

Attaching a rough proof-of-concept where a new StreamingExpressionWrapperModel 
class extends the abstract WrapperModel class added by SOLR-11250 --  the 
{{WrapperModel.fetchModelMap()}} implementation uses a streaming expression.

A shell script {{ltr-model-from-streaming-expression-poc.sh}} demonstrates how 
a streaming expression can be used to retrieve a model stored as a document 
field in a different collection.
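As an illustration of the retrieval step (a sketch only, not code from the attached patch): Solr's /stream endpoint returns JSON of the shape {"result-set": {"docs": [tuple..., {"EOF": true}]}}, so pulling a model serialized into a stored field of the first tuple could look like the following. The field name "model_t" is a hypothetical choice, not one taken from the patch:

```python
import json

def model_from_stream_response(body, field="model_t"):
    """Extract a serialized LTR model definition from a streaming-expression
    response body. Tuples precede a final {"EOF": true} marker; the model
    JSON is assumed to live in a stored field of the first tuple."""
    docs = json.loads(body)["result-set"]["docs"]
    tuples = [d for d in docs if "EOF" not in d]
    if not tuples:
        raise ValueError("streaming expression returned no tuples")
    return json.loads(tuples[0][field])
```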


> retrieval of contrib/ltr model definitions via a streaming expression
> -
>
> Key: SOLR-11852
> URL: https://issues.apache.org/jira/browse/SOLR-11852
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR, streaming expressions
>Reporter: Christine Poerschke
> Attachments: SOLR-11852.patch
>
>
> This ticket is to explore retrieval of Learning-To-Rank model definitions via 
> a streaming expression, as an alternative to full model definitions stored in 
> ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.2 - Build # 22 - Unstable

2018-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/22/

9 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
Shard split did not succeed after a previous failed split attempt left 
sub-shards in construction state

Stack Trace:
java.lang.AssertionError: Shard split did not succeed after a previous failed 
split attempt left sub-shards in construction state
at 
__randomizedtesting.SeedInfo.seed([740384DFD5402904:8D4E1770E935648E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Created] (SOLR-11852) retrieval of contrib/ltr model definitions via a streaming expression

2018-01-12 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-11852:
--

 Summary: retrieval of contrib/ltr model definitions via a 
streaming expression
 Key: SOLR-11852
 URL: https://issues.apache.org/jira/browse/SOLR-11852
 Project: Solr
  Issue Type: New Feature
  Components: contrib - LTR, streaming expressions
Reporter: Christine Poerschke


This ticket is to explore retrieval of Learning-To-Rank model definitions via a 
streaming expression, as an alternative to full model definitions stored in 
ZooKeeper.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4382 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4382/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig

Error Message:
expected: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}>

Stack Trace:
java.lang.AssertionError: expected: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}>
at 
__randomizedtesting.SeedInfo.seed([D34589F863A532EA:ECCD8A5074F3C20D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig(TestClusterStateProvider.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2254 - Still Unstable

2018-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2254/

2 tests failed.
FAILED:  
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud

Error Message:
Could not find collection : legacyFalse

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : legacyFalse
at 
__randomizedtesting.SeedInfo.seed([C50FA1E1108AC19A:14085364B4854AA8]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.checkMandatoryProps(LegacyCloudClusterPropTest.java:153)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:90)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 397 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/397/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

11 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestStressIndexing2

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_CF38C8C5F1351689-001

at __randomizedtesting.SeedInfo.seed([CF38C8C5F1351689]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestBoolean2

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001\tempDir-003:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001\tempDir-003

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001\tempDir-003:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001\tempDir-003
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_CF38C8C5F1351689-001

at __randomizedtesting.SeedInfo.seed([CF38C8C5F1351689]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 

[jira] [Updated] (SOLR-6811) TestLBHttpSolrServer.testSimple stall: 2 CloserThreads waiting for same lock?

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6811:

Component/s: Tests

> TestLBHttpSolrServer.testSimple stall: 2 CloserThreads waiting for same lock? 
> --
>
> Key: SOLR-6811
> URL: https://issues.apache.org/jira/browse/SOLR-6811
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Hoss Man
> Attachments: td.1.txt, td.2.txt, td.3.txt
>
>
> got a stall today in TestLBHttpSolrServer.testSimple on 5x branch
> looking at the stack dumps, it seems like there are 2 instances of 
> CloserThread wait()ing to be notified on the same "lock" Object?
> 2 things seem suspicious:
> a) what are they waiting for? what thread is expected to be notifying? 
> (because i don't see anything else running that might do the job)
> b) why are there 2 instances of CloserThread?  from a quick skim it seems 
> like there should only be one.
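The suspicion in (a) can be reproduced in miniature: threads wait()ing on a monitor stay parked until something calls notify/notifyAll on that same object, and a single notify wakes at most one waiter. A small Python sketch of the same monitor semantics (illustrative only, not Solr code; timeouts keep it from stalling the way the test did):

```python
import threading
import time

cond = threading.Condition()
waiting = 0        # how many threads have reached wait()
woken = []         # which threads were actually notified

def waiter(name):
    global waiting
    with cond:
        waiting += 1
        # Like CloserThread's lock.wait(): parks until notified or timeout.
        # wait() returns True only if a notify arrived before the timeout.
        if cond.wait(timeout=2.0):
            woken.append(name)

t1 = threading.Thread(target=waiter, args=("closer-1",))
t2 = threading.Thread(target=waiter, args=("closer-2",))
t1.start(); t2.start()

# A waiter only releases the lock by entering wait(), so once we observe
# waiting == 2 while holding the lock, both threads are parked inside wait().
while True:
    with cond:
        if waiting == 2:
            cond.notify()  # wakes at most ONE of the two waiters
            break
    time.sleep(0.01)

t1.join(); t2.join()
# One thread was notified; the other timed out un-notified, mirroring why a
# second CloserThread on the same lock could be left waiting forever.
```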



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6809) SOLR failed to create index for Cassandra 2.1.1 list which have user defined data type

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6809.
---

> SOLR failed to create index for Cassandra 2.1.1 list which have user defined 
> data type
> --
>
> Key: SOLR-6809
> URL: https://issues.apache.org/jira/browse/SOLR-6809
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.10.2
>Reporter: madheswaran
>  Labels: patch
>
> 16767 [qtp297774990-12] INFO  org.apache.solr.handler.dataimport.DataImporter 
>  – Loading DIH Configuration: dataconfigCassandra.xml
> 16779 [qtp297774990-12] INFO  org.apache.solr.handler.dataimport.DataImporter 
>  – Data Configuration loaded successfully
> 16788 [Thread-15] INFO  org.apache.solr.handler.dataimport.DataImporter  – 
> Starting Full Import
> 16789 [qtp297774990-12] INFO  org.apache.solr.core.SolrCore  – [Entity_dev] 
> webapp=/solr path=/dataimport 
> params={optimize=false=true=true=true=false=full-import=false=json}
>  status=0 QTime=27
> 16810 [qtp297774990-12] INFO  org.apache.solr.core.SolrCore  – [Entity_dev] 
> webapp=/solr path=/dataimport 
> params={indent=true=status&_=1416042006354=json} status=0 QTime=0
> 16831 [Thread-15] INFO  
> org.apache.solr.handler.dataimport.SimplePropertiesWriter  – Read 
> dataimport.properties
> 16917 [Thread-15] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening 
> Searcher@6214b0dc[Entity_dev] realtime
> 16945 [Thread-15] INFO  org.apache.solr.handler.dataimport.JdbcDataSource  – 
> Creating a connection for entity Entity with URL: 
> jdbc:cassandra://10.234.31.153:9160/galaxy_dev
> 17082 [Thread-15] INFO  org.apache.solr.handler.dataimport.JdbcDataSource  – 
> Time taken for getConnection(): 136
> 17429 [Thread-15] ERROR org.apache.solr.handler.dataimport.DocBuilder  – 
> Exception while processing: Entity document : SolrInputDocument(fields: 
> []):org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
> execute query: select * from entity Processing Document # 1
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:283)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:240)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:44)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
> Caused by: java.lang.NullPointerException
> at org.apache.cassandra.cql.jdbc.ListMaker.compose(ListMaker.java:61)
> at 
> org.apache.cassandra.cql.jdbc.TypedColumn.(TypedColumn.java:68)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.createColumn(CassandraResultSet.java:1174)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.populateColumns(CassandraResultSet.java:240)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.(CassandraResultSet.java:200)
> at 
> org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:169)
> at 
> org.apache.cassandra.cql.jdbc.CassandraStatement.execute(CassandraStatement.java:205)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:276)
> ... 12 more
> Cassandra Table:
> CREATE TABLE dev.entity (
> id uuid PRIMARY KEY,
> begining int, 
>
> domain text,
> domain_type text, 
>
> template_name text,   
> 
> field_values 

[jira] [Resolved] (SOLR-6809) SOLR failed to create index for Cassandra 2.1.1 list which have user defined data type

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-6809.
-
Resolution: Invalid

Issue reported was in the driver, which isn't part of the Solr project.

> SOLR failed to create index for Cassandra 2.1.1 list which have user defined 
> data type
> --
>
> Key: SOLR-6809
> URL: https://issues.apache.org/jira/browse/SOLR-6809
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.10.2
>Reporter: madheswaran
>  Labels: patch
>
> 16767 [qtp297774990-12] INFO  org.apache.solr.handler.dataimport.DataImporter 
>  – Loading DIH Configuration: dataconfigCassandra.xml
> 16779 [qtp297774990-12] INFO  org.apache.solr.handler.dataimport.DataImporter 
>  – Data Configuration loaded successfully
> 16788 [Thread-15] INFO  org.apache.solr.handler.dataimport.DataImporter  – 
> Starting Full Import
> 16789 [qtp297774990-12] INFO  org.apache.solr.core.SolrCore  – [Entity_dev] 
> webapp=/solr path=/dataimport 
> params={optimize=false=true=true=true=false=full-import=false=json}
>  status=0 QTime=27
> 16810 [qtp297774990-12] INFO  org.apache.solr.core.SolrCore  – [Entity_dev] 
> webapp=/solr path=/dataimport 
> params={indent=true=status&_=1416042006354=json} status=0 QTime=0
> 16831 [Thread-15] INFO  
> org.apache.solr.handler.dataimport.SimplePropertiesWriter  – Read 
> dataimport.properties
> 16917 [Thread-15] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening 
> Searcher@6214b0dc[Entity_dev] realtime
> 16945 [Thread-15] INFO  org.apache.solr.handler.dataimport.JdbcDataSource  – 
> Creating a connection for entity Entity with URL: 
> jdbc:cassandra://10.234.31.153:9160/galaxy_dev
> 17082 [Thread-15] INFO  org.apache.solr.handler.dataimport.JdbcDataSource  – 
> Time taken for getConnection(): 136
> 17429 [Thread-15] ERROR org.apache.solr.handler.dataimport.DocBuilder  – 
> Exception while processing: Entity document : SolrInputDocument(fields: 
> []):org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
> execute query: select * from entity Processing Document # 1
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:283)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:240)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:44)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
> Caused by: java.lang.NullPointerException
> at org.apache.cassandra.cql.jdbc.ListMaker.compose(ListMaker.java:61)
> at 
> org.apache.cassandra.cql.jdbc.TypedColumn.<init>(TypedColumn.java:68)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.createColumn(CassandraResultSet.java:1174)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.populateColumns(CassandraResultSet.java:240)
> at 
> org.apache.cassandra.cql.jdbc.CassandraResultSet.<init>(CassandraResultSet.java:200)
> at 
> org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:169)
> at 
> org.apache.cassandra.cql.jdbc.CassandraStatement.execute(CassandraStatement.java:205)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:276)
> ... 12 more
> Cassandra Table:
> CREATE TABLE dev.entity (
> id uuid PRIMARY KEY,
> begining int, 
>
> domain text,
> domain_type text, 
>
> template_name text,  

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21264 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21264/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig

Error Message:
expected: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{   
"cluster-preferences":[{"maximize":"freedisk"}],   
"triggers":{".auto_add_replicas":{   "name":".auto_add_replicas",   
"event":"nodeLost",   "waitFor":30,   "actions":[ {   
"name":"auto_add_replicas_plan",   
"class":"solr.AutoAddReplicasPlanAction"}, {   
"name":"execute_plan",   "class":"solr.ExecutePlanAction"}],   
"enabled":true}},   "listeners":{".auto_add_replicas.system":{   
"trigger":".auto_add_replicas",   "afterAction":[],   "stage":[ 
"STARTED", "ABORTED", "SUCCEEDED", "FAILED", 
"BEFORE_ACTION", "AFTER_ACTION", "IGNORED"],   
"class":"org.apache.solr.cloud.autoscaling.SystemLogListener",   
"beforeAction":[]}},   "properties":{}}>

Stack Trace:
java.lang.AssertionError: expected: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}>
at 
__randomizedtesting.SeedInfo.seed([E10BB4F5A3CD3B52:DE83B75DB49BCBB5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig(TestClusterStateProvider.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[jira] [Updated] (SOLR-6803) Pivot Performance

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6803:

Component/s: faceting

> Pivot Performance
> -
>
> Key: SOLR-6803
> URL: https://issues.apache.org/jira/browse/SOLR-6803
> Project: Solr
>  Issue Type: Bug
>  Components: faceting
>Affects Versions: 5.1
>Reporter: Neil Ireson
>Priority: Minor
> Attachments: PivotPerformanceTest.java
>
>
> I found that my pivot search for terms per day was taking an age so I knocked 
> up a quick test, using a collection of 1 million documents with a different 
> number of random terms and times, to compare different ways of getting the 
> counts.
> 1) Combined = combining the term and time in a single field.
> 2) Facet = for each term set the query to the term and then get the time 
> facet 
> 3) Pivot = use the term/time pivot facet.
> The following two tables present the results for version 4.9.1 vs 4.10.1, as 
> an average of five runs.
> 4.9.1 (Processing time in ms)
> |Values (#)   |  Combined (ms)| Facet (ms)| Pivot (ms)|
> |100   |22|21|52|
> |1000  |   178|57|   115|
> |1 |  1363|   211|   310|
> |10|  2592|  1009|   978|
> |50|  3125|  3753|  2476|
> |100   |  3957|  6789|  3725|
> 4.10.1 (Processing time in ms)
> |Values (#)   |  Combined (ms)| Facet (ms)| Pivot (ms)|
> |100   |21|21|75|
> |1000  |   188|60|   265|
> |1 |  1438|   215|  1826|
> |10|  2768|  1073| 16594|
> |50|  3266|  3686| 99682|
> |100   |  4080|  6777|208873|
> The results show that, as the number of pivot values increases (i.e. number 
> of terms * number of times), pivot performance in 4.10.1 gets progressively 
> worse.
> I tried to look at the code, but there were a lot of changes in pivoting 
> between 4.9 and 4.10, so it is not clear to me what has caused the 
> performance issues. However, the results seem to indicate that if the pivot 
> were simply a combined facet search, it could potentially produce better and 
> more robust performance.
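The "Combined" strategy benchmarked above amounts to flattening the two pivot levels into one concatenated key; a minimal Python sketch of that equivalence (illustrative only, not the attached benchmark code):

```python
from collections import Counter

# Toy documents: (term, day) pairs standing in for the indexed fields.
docs = [("solr", "2014-11-01"), ("solr", "2014-11-02"),
        ("lucene", "2014-11-01"), ("solr", "2014-11-01")]

# Pivot-style: nested counts, first by term, then by time.
pivot = Counter(docs)

# Combined-style: a single flat facet over one concatenated field.
combined = Counter(f"{term}|{day}" for term, day in docs)

# Both strategies yield the same count for every (term, day) cell.
assert combined["solr|2014-11-01"] == pivot[("solr", "2014-11-01")] == 2
```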



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6785) polyfields are encapsulated and escaped in csv

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6785:

Component/s: update

> polyfields are encapsulated and escaped in csv
> --
>
> Key: SOLR-6785
> URL: https://issues.apache.org/jira/browse/SOLR-6785
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Michael Lawrence
>
> The wiki for UpdateCSV (which also presumably applies to the CSV response 
> writer) states: "If an escape is specified, the encapsulator is not used 
> unless also explicitly specified since most formats use either encapsulation 
> or escaping, not both." That makes a lot of sense. However, the fix for 
> SOLR-3959 makes it so that polyfields are always escaped, even when 
> encapsulation is enabled. Are we sure that SOLR-3959 was even a bug in the 
> first place? Why should commas in the output of polyfields be treated any 
> differently than commas elsewhere? One possible enhancement to the interface 
> would be to use "\" as the separator by default when encapsulation is 
> disabled, but I propose starting by reverting the "fix" for SOLR-3959.
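The "encapsulation or escaping, not both" rule can be demonstrated with any CSV library; for example with Python's csv module (illustrative only, not Solr's CSV writer):

```python
import csv
import io

row = ["point", "45.17614,-93.87341"]  # a value containing the separator

# Encapsulation: the comma-bearing value is quoted; nothing is escaped.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
print(buf.getvalue().strip())  # point,"45.17614,-93.87341"

# Escaping: no quotes; the embedded comma is backslash-escaped instead.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_NONE, escapechar="\\").writerow(row)
print(buf.getvalue().strip())  # point,45.17614\,-93.87341
```

Each mode alone is unambiguous to a reader of the output, which is why mixing them (escaping inside an encapsulated field) is surprising.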



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6772) Support regex based atomic remove

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6772:

  Priority: Minor  (was: Major)
Issue Type: New Feature  (was: Bug)

> Support regex based atomic remove
> -
>
> Key: SOLR-6772
> URL: https://issues.apache.org/jira/browse/SOLR-6772
> Project: Solr
>  Issue Type: New Feature
>Reporter: Steven Bower
>Priority: Minor
>
> This is a follow-on ticket from SOLR-3862 ... The goal here is to support 
> regex based field value removal for the following use cases:
> 1. You may not know the values you would like to remove; imagine a permissioning 
> case: [ user-u1, user-u2, group-g1 ] where you want to remove all users (i.e. 
> user-.*).
> 2. You may have a large number of values and it would be expensive to list 
> them all, but you could capture them with a regex.
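The proposed semantics are easy to state precisely; a small illustrative Python model (the function name here is made up, this is not Solr API):

```python
import re

def remove_matching(values, pattern):
    """Drop every field value that fully matches the given regex."""
    rx = re.compile(pattern)
    return [v for v in values if not rx.fullmatch(v)]

perms = ["user-u1", "user-u2", "group-g1"]
# Remove all user entries without knowing the concrete values up front.
print(remove_matching(perms, "user-.*"))  # ['group-g1']
```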



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6771) Sending DIH request to non-leader can result in different number of successful documents

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6771:

Component/s: contrib - DataImportHandler

> Sending DIH request to non-leader can result in different number of 
> successful documents
> 
>
> Key: SOLR-6771
> URL: https://issues.apache.org/jira/browse/SOLR-6771
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.10
>Reporter: Greg Harris
>
> Basically if you send a DIH request to the non-leader the following set of 
> circumstances can occur:
> 1) If there are errors in some of the documents the request itself is 
> rejected by the leader (try making a required field null with some documents 
> to make sure there are rejections). 
> 2) This causes all documents on that request to appear to fail. The number of 
> documents that a follower is able to update via DIH appears to vary.
> 3) It appears you need to use a large number of documents to see the anomaly.
> This results in the following error on the follower:
> 2014-11-20 12:06:16.470; 34054 [Thread-18] WARN  
> org.apache.solr.update.processor.DistributedUpdateProcessor  – Error sending 
> update
> org.apache.solr.common.SolrException: Bad Request
> request: 
> http://10.0.2.15:8983/solr/collection1/update?update.distrib=TOLEADER=http%3A%2F%2F10.0.2.15%3A8982%2Fsolr%2Fcollection1%2F=javabin=2
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:240)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6759) ExpandComponent does not call finish() on DelegatingCollectors

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6759:

Component/s: query parsers

> ExpandComponent does not call finish() on DelegatingCollectors
> --
>
> Key: SOLR-6759
> URL: https://issues.apache.org/jira/browse/SOLR-6759
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Reporter: Simon Endele
>Assignee: Joel Bernstein
> Fix For: 4.10.5
>
> Attachments: ExpandComponent.java.patch
>
>
> We have a PostFilter for ACL filtering in action that has a similar structure 
> to CollapsingQParserPlugin, i.e. its DelegatingCollector gathers all 
> documents and finally calls delegate.collect() for all docs in its finish() 
> method.
> In contrast to CollapsingQParserPlugin, our PostFilter is also called by the 
> ExpandComponent (on purpose).
> But as the finish method is never called by the ExpandComponent, the "expand" 
> section in the result is always empty.
> Tested with Solr 4.10.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6756) The cloud-dev scripts do not seem to work with the new example layout.

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6756:

Component/s: SolrCloud

> The cloud-dev scripts do not seem to work with the new example layout.
> --
>
> Key: SOLR-6756
> URL: https://issues.apache.org/jira/browse/SOLR-6756
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6720) Update of a multivalued property with an empty list deletes all properties

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-6720.
-
Resolution: Cannot Reproduce

From comments, it seems this does work as expected.

> Update of a multivalued property with an empty list deletes all properties
> --
>
> Key: SOLR-6720
> URL: https://issues.apache.org/jira/browse/SOLR-6720
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10
> Environment: Windows 7 64 bit
>Reporter: Günther Ruck
>
> I tried to update a multivalued property of a Solr document with an empty 
> list. In SolrJ I used the 'set' option to replace the current value.
> After the update operation, all properties of the document are deleted.
> The operation works as expected if update is called with "null" instead of 
> the empty list.
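For reference, the two update payloads being compared look like this in Solr's JSON atomic-update format; a small Python sketch just to render them (not the reporter's SolrJ code):

```python
import json

# 'set' to an empty list -- reported to wipe the document's other fields:
empty_list = {"id": "doc1", "prop": {"set": []}}

# 'set' to null -- reported to behave as expected (clears just this field):
null_value = {"id": "doc1", "prop": {"set": None}}

print(json.dumps(empty_list))  # {"id": "doc1", "prop": {"set": []}}
print(json.dumps(null_value))  # {"id": "doc1", "prop": {"set": null}}
```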



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6720) Update of a multivalued property with an empty list deletes all properties

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6720.
---

> Update of a multivalued property with an empty list deletes all properties
> --
>
> Key: SOLR-6720
> URL: https://issues.apache.org/jira/browse/SOLR-6720
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.10
> Environment: Windows 7 64 bit
>Reporter: Günther Ruck
>
> I tried to update a multivalued property of a Solr document with an empty 
> list. In SolrJ I used the 'set' option to replace the current value.
> After the update operation, all properties of the document are deleted.
> The operation works as expected if update is called with "null" instead of 
> the empty list.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6719) Collection API: CREATE ignores 'property.name' when creating individual cores

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6719:

Component/s: SolrCloud

> Collection API: CREATE ignores 'property.name' when creating individual cores
> -
>
> Key: SOLR-6719
> URL: https://issues.apache.org/jira/browse/SOLR-6719
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Hoss Man
>
> Yashveer Rana pointed this out in the ref guide comments...
> https://cwiki.apache.org/confluence/display/solr/Collections+API?focusedCommentId=47382851#comment-47382851
> * Collection CREATE is documented to support "property._name_=_value_" (where 
> 'name' and 'property' are italics placeholders for user supplied key=val) as 
> "Set core property _name_ to _value_. See core.properties file contents."
> * The [docs for 
> core.properties|https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml#Formatofsolr.xml-core.properties_files]
>  include a list of supported property values, including "name" (literal) as 
> "The name of the SolrCore. You'll use this name to reference the SolrCore 
> when running commands with the CoreAdminHandler."
> From these docs, it's reasonable to assume that using a URL like this...
> http://localhost:8983/solr/admin/collections?action=CREATE=my_collection=2=data_driven_schema_configs=my_corename
> ...should cause "my_collection" to be created, with the core name used for 
> every replica being "my_corename" ... but that doesn't happen.  instead the 
> replicas get core names like "my_collection_shard1_replica1"
> 
> This is either a bug, or (my suspicion) it's intentional that the 
> user-specified core name is not being used -- if it's intentional, then the 
> Collection CREATE command should fail with a clear error if a user does try 
> to use "property.name" rather than silently ignoring it, and the Collection 
> CREATE docs should be updated to make it clear that "name" is an exception to 
> the general property.foo -> foo in core.properties support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6714) Collection RELOAD returns 200 even when some shards fail to reload -- other APIs with similar problems?

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6714:

Component/s: SolrCloud

> Collection RELOAD returns 200 even when some shards fail to reload -- other 
> APIs with similar problems?
> ---
>
> Key: SOLR-6714
> URL: https://issues.apache.org/jira/browse/SOLR-6714
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Hoss Man
>
> Using 4.10.2, if you startup a simple 2 node cloud with...
> {noformat}
> ./bin/solr start -e cloud -noprompt
> {noformat}
> And then try to force a situation where a replica is hosed like this...
> {noformat}
> rm -rf node1/solr/gettingstarted_shard1_replica1/*
> chmod a-rw node1/solr/gettingstarted_shard1_replica1
> {noformat}
> The result of a Collection RELOAD command is still a success...
> {noformat}
> curl -sS -D - 
> 'http://localhost:8983/solr/admin/collections?action=RELOAD=gettingstarted'
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> <response>
> <lst name="responseHeader"><int name="status">0</int><int name="QTime">1866</int></lst>
> <lst name="failure"><str name="127.0.1.1:8983_solr">org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error handling 'reload' action</str></lst>
> <lst name="success">
> <lst name="127.0.1.1:8983_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1631</int></lst></lst>
> <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1710</int></lst></lst>
> <lst name="127.0.1.1:7574_solr"><lst name="responseHeader"><int name="status">0</int><int name="QTime">1795</int></lst></lst>
> </lst>
> </response>
> 
> {noformat}
> The HTTP status code of collection-level APIs should not be 200 if any of the 
> underlying requests that it depends on result in 4xx or 5xx errors.
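The fix being asked for amounts to folding the per-node results into one overall status instead of always returning 200; a hedged sketch of that aggregation in Python (the response structure here is hypothetical, not Solr's actual parser):

```python
def overall_status(node_results):
    """Derive one HTTP status from per-node shard results: 200 only if
    every underlying request succeeded (Solr reports 0 for success)."""
    statuses = [r.get("status", 500) for r in node_results.values()]
    if any(s >= 500 for s in statuses):
        return 500
    if any(s >= 400 for s in statuses):
        return 400
    return 200

nodes = {
    "127.0.1.1:8983_solr": {"status": 500},  # the failed 'reload' action
    "127.0.1.1:7574_solr": {"status": 0},    # a successful core reload
}
assert overall_status(nodes) == 500
```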



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6710) EarlyTerminatingCollectorException thrown during auto-warming

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6710:

Component/s: multicore

> EarlyTerminatingCollectorException thrown during auto-warming
> -
>
> Key: SOLR-6710
> URL: https://issues.apache.org/jira/browse/SOLR-6710
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.10.1
> Environment: Solaris, Solr in multicore-setup
>Reporter: Dirk Högemann
>Priority: Minor
>
> Our production Solr slave cores (we have about 40 cores, each of a moderate 
> size, about 10K to 90K documents) produce many exceptions of this type:
> 2014-11-05 15:06:06.247 [searcherExecutor-158-thread-1] ERROR 
> org.apache.solr.search.SolrCache: Error during auto-warming of 
> key:org.apache.solr.search.QueryResultKey@62340b01:org.apache.solr.search.EarlyTerminatingCollectorException
> Our relevant solrconfig is
>   
> 
>   18
> 
>   
>   
> 2
>class="solr.FastLRUCache"
>   size="8192"
>   initialSize="8192"
>   autowarmCount="4096"/>
>
>class="solr.FastLRUCache"
>   size="8192"
>   initialSize="8192"
>   autowarmCount="4096"/>
>   
>class="solr.FastLRUCache"
>   size="8192"
>   initialSize="8192"
>   autowarmCount="4096"/>
>   
> Answer from List (Mikhail Khludnev):
> https://github.com/apache/lucene-solr/blob/20f9303f5e2378e2238a5381291414881ddb8172/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L522
> at least this ERRORs broke nothing  see
> https://github.com/apache/lucene-solr/blob/20f9303f5e2378e2238a5381291414881ddb8172/solr/core/src/java/org/apache/solr/search/FastLRUCache.java#L165
> anyway, here are two usability issues:
>  - "of key:org.apache.solr.search.QueryResultKey@62340b01" shows the lack of a 
> readable toString()
>  - I don't think regeneration exceptions are ERRORs; they seem like WARNs to me,
> or even lower. Also, as a courtesy,
> EarlyTerminatingCollectorExceptions in particular can be recognized, and even ignored,
> providing SolrIndexSearcher.java#L522
> -> Maybe the log-level could be set to info/warn, if there are no 
> implications on the functionality?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6700) ChildDocTransformer doesn't return correct children after updating and optimising solr index

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6700:

Component/s: update

> ChildDocTransformer doesn't return correct children after updating and 
> optimising solr index
> 
>
> Key: SOLR-6700
> URL: https://issues.apache.org/jira/browse/SOLR-6700
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Bogdan Marinescu
>Priority: Critical
> Fix For: 4.10.5
>
>
> I have an index with nested documents. 
> {code:title=schema.xml snippet|borderStyle=solid}
>   multiValued="false" />
>  required="true"/>
> 
> 
> 
> 
> 
> {code}
> Afterwards I add the following documents:
> {code}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName">Test Artist 1</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">11</field>
>       <field name="cAlbum">Test Album 1</field>
>       <field name="cSong">Test Song 1</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="pName">Test Artist 2</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">22</field>
>       <field name="cAlbum">Test Album 2</field>
>       <field name="cSong">Test Song 2</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
> </add>
> {code}
> After performing the following query 
> {quote}
> http://localhost:8983/solr/collection1/select?q=%7B!parent+which%3DentityType%3A1%7D=*%2Cscore%2C%5Bchild+parentFilter%3DentityType%3A1%5D=json=true
> {quote}
> I get a correct answer (child matches parent, check _root_ field)
> {code:title=add docs|borderStyle=solid}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"1",
> "pName":"Test Artist 1",
> "entityType":1,
> "_version_":1483832661048819712,
> "_root_":"1",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"}]},
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]}]
>   }}
> {code}
> Afterwards I try to update one document:
> {code:title=update doc|borderStyle=solid}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName" update="set">INIT</field>
>   </doc>
> </add>
> {code}
> After performing the previous query I get the right result (like the previous 
> one but with the pName field updated).
> The problem only comes after performing an *optimize*. 
> Now, the same query yields the following result:
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"},
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]},
>   {
> "id":"1",
> "pName":"INIT",
> "entityType":1,
> "_root_":"1",
> "_version_":1483832916867809280,
> "score":1.0}]
>   }}
> {code}
> As can be seen, the document with id:2 now contains the child with id:11 that 
> belongs to the document with id:1. 
> I haven't found any references on the web about this except 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html
> Similar issue: SOLR-6096
> Is this problem known? Is there a workaround for this? 
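The usual workaround (consistent with SOLR-6096) is to reindex the whole block — the parent together with all of its children — whenever any part of it changes, rather than re-adding the parent alone, so that block contiguity survives segment merges. A sketch of such a full-block update, reusing the field names from the example above:

{code:title=full-block reindex (workaround sketch)|borderStyle=solid}
<add>
<doc>
<field name="id">1</field>
<field name="pName">INIT</field>
<field name="entityType">1</field>
<doc>
<field name="id">11</field>
<field name="cAlbum">Test Album 1</field>
<field name="cSong">Test Song 1</field>
<field name="entityType">2</field>
</doc>
</doc>
</add>
{code}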



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324427#comment-16324427
 ] 

Robert Muir commented on LUCENE-4198:
-

Yeah, it's good to split it up into sizable chunks: we can always mark the new 
API as experimental to be safe.

I like the idea of the more general solution to speed up boolean and maybe 
proximity queries in the future. Will look over the API of the new enum.

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198-BMW.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the Postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.






[jira] [Comment Edited] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324415#comment-16324415
 ] 

Adrien Grand edited comment on LUCENE-4198 at 1/12/18 7:16 PM:
---

To give some insight into future work on scorers, here is an untested patch 
(the only tests for now are that luceneutil gives the same hits back) that 
implements some ideas from the BMW (Block-Max WAND) paper.

The new {{BlockMaxConjunctionScorer}} skips blocks whose sum of max scores is 
less than the max competitive score, and also skips hits when the score of the 
max scoring clause is less than the minimum required score minus max scores of 
other clauses.

{{WANDScorer}} uses the block max scores to get an upper bound of the score of 
the current candidate, which already helps {{OrHighLow}}. It could also skip 
over blocks when the sum of the max scores is not competitive, but the impl 
needs a bit more work than for conjunctions.
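As a rough, untested illustration of the pruning idea described above (not the patch's actual code; the names and two-clause structure here are invented), a conjunction can skip a whole block when the sum of the clauses' block max scores is below the minimum competitive score, and skip an individual doc when one clause's actual score plus the other clause's block max cannot reach it:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of block-max pruning for a two-clause conjunction (illustration
// only; names and structure are invented, not taken from the patch).
public class BlockMaxSketch {

  // docs[i] is a doc id matched by both clauses; scoresA/scoresB are the
  // clauses' actual scores for that doc; maxA/maxB are the clauses'
  // per-block maximum scores.
  static List<Integer> competitiveDocs(int[] docs,
                                       double[] scoresA, double maxA,
                                       double[] scoresB, double maxB,
                                       double minCompetitiveScore) {
    List<Integer> hits = new ArrayList<>();
    // Block-level skip: even the best possible score in this block
    // (maxA + maxB) cannot beat the current threshold.
    if (maxA + maxB < minCompetitiveScore) {
      return hits;
    }
    for (int i = 0; i < docs.length; i++) {
      // Doc-level skip: clause A's actual score plus clause B's block max
      // is an upper bound on this doc's score.
      if (scoresA[i] + maxB < minCompetitiveScore) {
        continue;
      }
      if (scoresA[i] + scoresB[i] >= minCompetitiveScore) {
        hits.add(docs[i]);
      }
    }
    return hits;
  }

  public static void main(String[] args) {
    int[] docs = {1, 2, 3};
    double[] a = {0.2, 1.5, 0.9};
    double[] b = {0.3, 1.0, 0.1};
    // Only doc 2 (1.5 + 1.0 = 2.5) reaches the threshold of 2.0.
    System.out.println(competitiveDocs(docs, a, 1.5, b, 1.0, 2.0)); // prints [2]
  }
}
```

The real scorers additionally have to realign block boundaries across clauses and recompute the threshold as the top-k heap fills; this sketch only shows the two bound checks.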

Baseline is LUCENE-4198.patch, patch is LUCENE-4198.patch and 
LUCENE-4198-BMW.patch combined.

{noformat}
Task                   QPS baseline  StdDev   QPS patch  StdDev   Pct diff
LowTerm                     2365.07  (2.8%)    2313.92  (2.5%)   -2.2% (  -7% -    3%)
OrHighMed                     73.78  (2.9%)      72.70  (2.5%)   -1.5% (  -6% -    4%)
HighTermDayOfYearSort         88.44 (11.4%)      87.15 (13.0%)   -1.5% ( -23% -   25%)
HighTerm                     650.28  (5.8%)     646.81  (5.7%)   -0.5% ( -11% -   11%)
Respell                      228.08  (2.5%)     227.84  (2.4%)   -0.1% (  -4% -    4%)
MedTerm                     1189.63  (4.2%)    1189.27  (4.6%)   -0.0% (  -8% -    9%)
MedSpanNear                   12.21  (5.0%)      12.24  (5.5%)    0.2% (  -9% -   11%)
HighSpanNear                   7.26  (5.5%)       7.28  (5.8%)    0.2% ( -10% -   12%)
Wildcard                     108.43  (7.0%)     108.95  (6.8%)    0.5% ( -12% -   15%)
Prefix3                      128.80  (8.1%)     129.46  (7.8%)    0.5% ( -14% -   17%)
HighTermMonthSort            172.27  (8.0%)     173.28  (8.0%)    0.6% ( -14% -   18%)
Fuzzy2                       104.86  (5.7%)     105.79  (6.5%)    0.9% ( -10% -   13%)
LowSloppyPhrase               14.80  (5.6%)      14.93  (6.1%)    0.9% ( -10% -   13%)
LowSpanNear                   95.06  (3.4%)      96.07  (4.2%)    1.1% (  -6% -    8%)
HighSloppyPhrase               3.96  (8.6%)       4.02  (9.7%)    1.6% ( -15% -   21%)
IntNRQ                        29.80  (7.0%)      30.50  (6.9%)    2.4% ( -10% -   17%)
Fuzzy1                       281.25  (4.8%)     288.77  (9.5%)    2.7% ( -11% -   17%)
MedSloppyPhrase               53.95  (8.0%)      55.43  (9.0%)    2.7% ( -13% -   21%)
OrHighHigh                    23.86  (4.1%)      24.70  (2.7%)    3.5% (  -3% -   10%)
MedPhrase                     42.45  (2.2%)      44.10  (3.2%)    3.9% (  -1% -    9%)
LowPhrase                     19.57  (2.7%)      20.47  (3.6%)    4.6% (  -1% -   11%)
HighPhrase                    15.76  (4.1%)      16.91  (5.3%)    7.3% (  -1% -   17%)
OrHighLow                    209.91  (2.3%)     261.10  (3.5%)   24.4% (  18% -   30%)
AndHighHigh                   27.22  (2.1%)      47.66  (5.1%)   75.1% (  66% -   84%)
AndHighLow                   514.84  (3.5%)     920.46  (6.0%)   78.8% (  66% -   91%)
AndHighMed                    56.15  (2.0%)     107.60  (5.4%)   91.6% (  82% -  101%)
{noformat}





[jira] [Updated] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4198:
-
Attachment: LUCENE-4198-BMW.patch

To give some insight into future work on scorers, here is an untested patch 
(the only tests for now are that luceneutil gives the same hits back) that 
implements some ideas from the BMW paper.

The new {{BlockMaxConjunctionScorer}} skips blocks whose sum of max scores is 
less than the max competitive score, and also skips hits when the score of the 
max scoring clause is less than the minimum required score minus max scores of 
other clauses.

{{WANDScorer}} uses the block max scores to get an upper bound of the score of 
the current candidate, which already helps {{OrHighLow}}. It could also skip 
over blocks when the sum of the max scores is not competitive, but the impl 
needs a bit more work than for conjunctions.

{noformat}
Task                   QPS baseline  StdDev   QPS patch  StdDev   Pct diff
LowTerm                     2365.07  (2.8%)    2313.92  (2.5%)   -2.2% (  -7% -    3%)
OrHighMed                     73.78  (2.9%)      72.70  (2.5%)   -1.5% (  -6% -    4%)
HighTermDayOfYearSort         88.44 (11.4%)      87.15 (13.0%)   -1.5% ( -23% -   25%)
HighTerm                     650.28  (5.8%)     646.81  (5.7%)   -0.5% ( -11% -   11%)
Respell                      228.08  (2.5%)     227.84  (2.4%)   -0.1% (  -4% -    4%)
MedTerm                     1189.63  (4.2%)    1189.27  (4.6%)   -0.0% (  -8% -    9%)
MedSpanNear                   12.21  (5.0%)      12.24  (5.5%)    0.2% (  -9% -   11%)
HighSpanNear                   7.26  (5.5%)       7.28  (5.8%)    0.2% ( -10% -   12%)
Wildcard                     108.43  (7.0%)     108.95  (6.8%)    0.5% ( -12% -   15%)
Prefix3                      128.80  (8.1%)     129.46  (7.8%)    0.5% ( -14% -   17%)
HighTermMonthSort            172.27  (8.0%)     173.28  (8.0%)    0.6% ( -14% -   18%)
Fuzzy2                       104.86  (5.7%)     105.79  (6.5%)    0.9% ( -10% -   13%)
LowSloppyPhrase               14.80  (5.6%)      14.93  (6.1%)    0.9% ( -10% -   13%)
LowSpanNear                   95.06  (3.4%)      96.07  (4.2%)    1.1% (  -6% -    8%)
HighSloppyPhrase               3.96  (8.6%)       4.02  (9.7%)    1.6% ( -15% -   21%)
IntNRQ                        29.80  (7.0%)      30.50  (6.9%)    2.4% ( -10% -   17%)
Fuzzy1                       281.25  (4.8%)     288.77  (9.5%)    2.7% ( -11% -   17%)
MedSloppyPhrase               53.95  (8.0%)      55.43  (9.0%)    2.7% ( -13% -   21%)
OrHighHigh                    23.86  (4.1%)      24.70  (2.7%)    3.5% (  -3% -   10%)
MedPhrase                     42.45  (2.2%)      44.10  (3.2%)    3.9% (  -1% -    9%)
LowPhrase                     19.57  (2.7%)      20.47  (3.6%)    4.6% (  -1% -   11%)
HighPhrase                    15.76  (4.1%)      16.91  (5.3%)    7.3% (  -1% -   17%)
OrHighLow                    209.91  (2.3%)     261.10  (3.5%)   24.4% (  18% -   30%)
AndHighHigh                   27.22  (2.1%)      47.66  (5.1%)   75.1% (  66% -   84%)
AndHighLow                   514.84  (3.5%)     920.46  (6.0%)   78.8% (  66% -   91%)
AndHighMed                    56.15  (2.0%)     107.60  (5.4%)   91.6% (  82% -  101%)
{noformat}



> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198-BMW.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the Postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.




[jira] [Commented] (SOLR-6672) function results' names should not include trailing whitespace

2018-01-12 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324404#comment-16324404
 ] 

Cassandra Targett commented on SOLR-6672:
-

I can still reproduce this with 7.2. Index the example docs and do something 
like 
{{http://localhost:8983/solr/techproducts/select?fl=id%20add(1,2)%20score&q=name:game}},
 and you get output like:

{code}
"response":{"numFound":2,"start":0,"maxScore":3.5124934,"docs":[
  {
"id":"0812550706",
"add(1,2) ":3.0,
"score":3.5124934},
  {
"id":"0553573403",
"add(1,2) ":3.0,
"score":2.9619396}]
  }
{code}
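Until the parser is fixed, a client-side workaround (besides switching to comma separators) is to trim the keys of each returned document. A minimal sketch in plain Java, with a hypothetical response map standing in for a parsed Solr document:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Client-side workaround sketch: normalize document keys such as
// "add(1,2) " (note the trailing space) that come back when "fl" uses
// whitespace-separated entries. The map is a stand-in for a parsed doc.
public class TrimKeys {
  static Map<String, Object> trimKeys(Map<String, Object> doc) {
    Map<String, Object> clean = new LinkedHashMap<>();
    for (Map.Entry<String, Object> e : doc.entrySet()) {
      clean.put(e.getKey().trim(), e.getValue());
    }
    return clean;
  }

  public static void main(String[] args) {
    Map<String, Object> doc = new LinkedHashMap<>();
    doc.put("id", "0812550706");
    doc.put("add(1,2) ", 3.0);  // key as returned, with trailing space
    System.out.println(trimKeys(doc).containsKey("add(1,2)")); // prints true
  }
}
```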

> function results' names should not include trailing whitespace
> --
>
> Key: SOLR-6672
> URL: https://issues.apache.org/jira/browse/SOLR-6672
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Mike Sokolov
>Priority: Minor
>
> If you include a function as a result field in a list of multiple fields 
> separated by white space, the corresponding key in the result markup includes 
> trailing whitespace; Example:
> {code}
> fl="id field(units_used) archive_id"
> {code}
> ends up returning results like this:
> {code}
>   {
> "id": "nest.epubarchive.1",
> "archive_id": "urn:isbn:97849D42C5A01",
> "field(units_used) ": 123
>   ^
>   }
> {code}
> A workaround is to use comma separators instead of whitespace
> {code} 
> fl="id,field(units_used),archive_id"
> {code}






[jira] [Updated] (SOLR-6652) Expand Component should search across collections like Collapse does

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6652:

Issue Type: Improvement  (was: Bug)
   Summary: Expand Component should search across collections like Collapse 
does  (was: Expand Component does not search across collections like Collapse 
does)

> Expand Component should search across collections like Collapse does
> 
>
> Key: SOLR-6652
> URL: https://issues.apache.org/jira/browse/SOLR-6652
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.10
>Reporter: Greg Harris
>
> It seems the Collapse query parser supports searching multiple collections 
> via parameter: collection=xx,yy,zz. However, expand=true does not support 
> this and all documents are returned from a single collection. Kind of 
> confusing since expand is used with Collapse. 






[jira] [Resolved] (SOLR-6632) "Error CREATEing SolrCore" .. "Caused by: null"

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-6632.
-
   Resolution: Fixed
Fix Version/s: 5.0

From comments, I'm going to guess this was fixed and just never closed. 

> "Error CREATEing SolrCore" .. "Caused by: null"
> ---
>
> Key: SOLR-6632
> URL: https://issues.apache.org/jira/browse/SOLR-6632
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Noble Paul
> Fix For: 5.0
>
> Attachments: 
> SOLR-6632_jenkins_policeman_Lucene-Solr-5.x-MacOSX_1849.txt
>
>
> We've seen 6 nearly identical non-reproducible jenkins failures with errors 
> stemming from an NPE in ClusterState since r1624556 (SOLR-5473, SOLR-5474, 
> SOLR-5810) was committed (Sep 12th)
> Example...
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/consoleText
> {noformat}
>[junit4] ERROR111s | CollectionsAPIDistributedZkTest.testDistribSearch 
> <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
> CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
> [halfcollection_shard1_replica1] Caused by: null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>[junit4]>  at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
>[junit4]>  at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Closed] (SOLR-6632) "Error CREATEing SolrCore" .. "Caused by: null"

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6632.
---

> "Error CREATEing SolrCore" .. "Caused by: null"
> ---
>
> Key: SOLR-6632
> URL: https://issues.apache.org/jira/browse/SOLR-6632
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Noble Paul
> Fix For: 5.0
>
> Attachments: 
> SOLR-6632_jenkins_policeman_Lucene-Solr-5.x-MacOSX_1849.txt
>
>
> We've seen 6 nearly identical non-reproducible jenkins failures with errors 
> stemming from an NPE in ClusterState since r1624556 (SOLR-5473, SOLR-5474, 
> SOLR-5810) was committed (Sep 12th)
> Example...
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/consoleText
> {noformat}
>[junit4] ERROR111s | CollectionsAPIDistributedZkTest.testDistribSearch 
> <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
> CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
> [halfcollection_shard1_replica1] Caused by: null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>[junit4]>  at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
>[junit4]>  at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (SOLR-6619) Improperly handle the InterruptedException in ConcurrentUpdateSolrServer

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6619:

Component/s: clients - java

> Improperly handle the InterruptedException in ConcurrentUpdateSolrServer 
> 
>
> Key: SOLR-6619
> URL: https://issues.apache.org/jira/browse/SOLR-6619
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Reporter: Junhao Li
>Priority: Minor
> Attachments: SOLR-6619.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In ConcurrentUpdateSolrServer, 
> """
> if (isXml) {  
> out.write("</stream>".getBytes(StandardCharsets.UTF_8)); 
> }
> """
> should be moved into the finally block.
> If InterruptedException is raised, the client will otherwise send a malformed 
> XML document, without the closing "</stream>", to the server.
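The proposed fix can be sketched like this (a simplified stand-alone model, not the actual ConcurrentUpdateSolrServer code; a StringBuilder stands in for the client's HTTP output stream): moving the closing wrapper tag into a finally block guarantees it is written on every exit path.

```java
// Simplified model of the reported bug and its fix: the closing wrapper
// tag moves into a finally block so it is emitted even when the update
// loop is aborted, and the server never sees a truncated document.
public class StreamClose {
  static String writeUpdates(String[] updates, boolean interruptMidway) {
    StringBuilder out = new StringBuilder("<stream>");
    try {
      for (String u : updates) {
        out.append(u);
        if (interruptMidway) {
          // stands in for the InterruptedException in the real client
          throw new RuntimeException("interrupted");
        }
      }
    } catch (RuntimeException e) {
      // swallowed here only so the demo can inspect the buffer
    } finally {
      out.append("</stream>"); // the fix: always close the wrapper
    }
    return out.toString();
  }

  public static void main(String[] args) {
    // Even the interrupted stream ends with the closing tag.
    System.out.println(writeUpdates(new String[]{"<doc/>"}, true)); // prints <stream><doc/></stream>
  }
}
```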






[jira] [Resolved] (SOLR-6618) SolrCore initialization failures when Solr is restarted, unable to initialize a collection

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-6618.
-
Resolution: Cannot Reproduce

A problem like this is likely tied to either the configuration files or the 
startup params used, but neither is supplied here, so there's nothing to go on 
to troubleshoot.

> SolrCore initialization failures when Solr is restarted, unable to 
> initialize a collection
> --
>
> Key: SOLR-6618
> URL: https://issues.apache.org/jira/browse/SOLR-6618
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8
>Reporter: Vijaya Jonnakuti
>
> I have uploaded one config, "default", and I specify 
> collection.configName=default when I create the collection.
> When Solr is restarted I get this error: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Could not find configName for collection overnighttest found:[default, 
> collection1, collection2 and so on]
> These empty collection1 and collection2 configs are created when I run 
> DataImportHandler using ZKPropertiesWriter.






[jira] [Closed] (SOLR-6618) SolrCore initialization failures when Solr is restarted, unable to initialize a collection

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6618.
---

> SolrCore initialization failures when Solr is restarted, unable to 
> initialize a collection
> --
>
> Key: SOLR-6618
> URL: https://issues.apache.org/jira/browse/SOLR-6618
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8
>Reporter: Vijaya Jonnakuti
>
> I have uploaded one config, "default", and I specify 
> collection.configName=default when I create the collection.
> When Solr is restarted I get this error: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Could not find configName for collection overnighttest found:[default, 
> collection1, collection2 and so on]
> These empty collection1 and collection2 configs are created when I run 
> DataImportHandler using ZKPropertiesWriter.






[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk1.8.0_144) - Build # 37 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/37/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestRAMDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001\testString-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001\testString-001

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001\testString-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001\testString-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J0\temp\lucene.store.TestRAMDirectory_5F571002FA895695-001

at __randomizedtesting.SeedInfo.seed([5F571002FA895695]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=16, 
name=ReplicationThread-index, state=RUNNABLE, 
group=TGRP-IndexReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=16, name=ReplicationThread-index, 
state=RUNNABLE, group=TGRP-IndexReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([C566DEADD9938176:4AE8390DCBFF7289]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([C566DEADD9938176]:0)
at 
org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.JettyWebappTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-solrj\test\J0\temp\solr.client.solrj.embedded.JettyWebappTest_5B89757CE7D1C7FD-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-solrj\test\J0\temp\solr.client.solrj.embedded.JettyWebappTest_5B89757CE7D1C7FD-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-solrj\test\J0\temp\solr.client.solrj.embedded.JettyWebappTest_5B89757CE7D1C7FD-001\tempDir-001\solr.xml:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-solrj\test\J0\temp\solr.client.solrj.embedded.JettyWebappTest_5B89757CE7D1C7FD-001\tempDir-001\solr.xml


[jira] [Commented] (SOLR-11722) API to create a Time Routed Alias and first collection

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324348#comment-16324348
 ] 

ASF GitHub Bot commented on SOLR-11722:
---

Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/304#discussion_r161294218
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ---
@@ -476,6 +451,31 @@ private static void 
addStatusToResponse(NamedList results, RequestStatus
   SolrIdentifierValidator.validateAliasName(req.getParams().get(NAME));
   return req.getParams().required().getAll(null, NAME, "collections");
 }),
+CREATEROUTEDALIAS_OP(CREATEROUTEDALIAS, (req, rsp, h) -> {
+  String alias = req.getParams().get(NAME);
+  SolrIdentifierValidator.validateAliasName(alias);
+  Map params = req.getParams().required()
+  .getAll(null, REQUIRED_ROUTING_PARAMS.toArray(new 
String[REQUIRED_ROUTING_PARAMS.size()]));
+  req.getParams().getAll(params, NONREQUIRED_ROUTING_PARAMS);
+  // subset the params to reuse the collection creation/parsing code
+  ModifiableSolrParams collectionParams = 
extractPrefixedParams("create-collection.", req.getParams());
+  if (collectionParams.get(NAME) != null) {
+SolrException solrException = new SolrException(BAD_REQUEST, 
"routed aliases calculate names for their " +
+"dependent collections, you cannot specify the name.");
+log.error("Could not create routed alias",solrException);
+throw solrException;
+  }
+  SolrParams v1Params = convertToV1WhenRequired(req, collectionParams);
+
+  // We need to add this temporary name just to pass validation.
--- End diff --

ah, that's actually checked here: 
https://github.com/apache/lucene-solr/pull/304/files#diff-3fe6a8aeb14a57e63507fa17f8346771R207,
 but it could be moved to this class (or done in both places)


> API to create a Time Routed Alias and first collection
> --
>
> Key: SOLR-11722
> URL: https://issues.apache.org/jira/browse/SOLR-11722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR-11722.patch, SOLR-11722.patch
>
>
> This issue is about creating a single API command to create a "Time Routed 
> Alias" along with its first collection.  Need to decide what endpoint URL it 
> is and parameters.
> Perhaps in v2 it'd be {{/api/collections?command=create-routed-alias}} or 
> alternatively piggy-back off of command=create-alias but we add more options, 
> perhaps with a prefix like "router"?
> Inputs:
> * alias name
> * misc collection creation metadata (e.g. config, numShards, ...) perhaps in 
> this context with a prefix like "collection."
> * metadata for TimeRoutedAliasUpdateProcessor, currently: router.field
> * date specifier for first collection; can include "date math".
> We'll certainly add more options as future features unfold.
> I believe the collection needs to be created first (referring to the alias 
> name via a core property), and then the alias pointing to it, since creating 
> an alias demands that its collections exist first.  When figuring the 
> collection name, you'll need to reference the format in 
> TimeRoutedAliasUpdateProcessor.
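The "collection." / "create-collection." prefix convention discussed here (visible as extractPrefixedParams in the pull request) can be sketched as follows. This is an illustrative model, not Solr's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model of the prefix convention: parameters intended for
// the nested collection-create call are split out by stripping the prefix,
// so the collection-creation code can be reused unchanged.
public class PrefixParams {
  static Map<String, String> extractPrefixed(String prefix, Map<String, String> params) {
    Map<String, String> out = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : params.entrySet()) {
      if (e.getKey().startsWith(prefix)) {
        out.put(e.getKey().substring(prefix.length()), e.getValue());
      }
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> params = new LinkedHashMap<>();
    params.put("name", "myAlias");                   // alias-level param
    params.put("create-collection.numShards", "2");  // collection-level param
    System.out.println(extractPrefixed("create-collection.", params)); // prints {numShards=2}
  }
}
```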






[GitHub] lucene-solr pull request #304: SOLR-11722

2018-01-12 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/304#discussion_r161294218
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ---
@@ -476,6 +451,31 @@ private static void 
addStatusToResponse(NamedList results, RequestStatus
   SolrIdentifierValidator.validateAliasName(req.getParams().get(NAME));
   return req.getParams().required().getAll(null, NAME, "collections");
 }),
+CREATEROUTEDALIAS_OP(CREATEROUTEDALIAS, (req, rsp, h) -> {
+  String alias = req.getParams().get(NAME);
+  SolrIdentifierValidator.validateAliasName(alias);
+  Map params = req.getParams().required()
+  .getAll(null, REQUIRED_ROUTING_PARAMS.toArray(new 
String[REQUIRED_ROUTING_PARAMS.size()]));
+  req.getParams().getAll(params, NONREQUIRED_ROUTING_PARAMS);
+  // subset the params to reuse the collection creation/parsing code
+  ModifiableSolrParams collectionParams = 
extractPrefixedParams("create-collection.", req.getParams());
+  if (collectionParams.get(NAME) != null) {
+SolrException solrException = new SolrException(BAD_REQUEST, 
"routed aliases calculate names for their " +
+"dependent collections, you cannot specify the name.");
+log.error("Could not create routed alias",solrException);
+throw solrException;
+  }
+  SolrParams v1Params = convertToV1WhenRequired(req, collectionParams);
+
+  // We need to add this temporary name just to pass validation.
--- End diff --

ah that's actually checked here: 
https://github.com/apache/lucene-solr/pull/304/files#diff-3fe6a8aeb14a57e63507fa17f8346771R207,
 but It could be moved to this class (or done both places)
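The `extractPrefixedParams("create-collection.", ...)` call in the diff subsets the request parameters so the normal collection-creation code can be reused. The idea can be sketched with plain maps; this stand-in is not Solr's `ModifiableSolrParams` API:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixParamsSketch {
    // Sketch of "subset the params": pull out every parameter whose key
    // starts with the given prefix, stripping the prefix so the remainder
    // can be fed to the ordinary collection-creation path.
    static Map<String, String> extractPrefixed(String prefix, Map<String, String> params) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> in = new HashMap<>();
        in.put("name", "myAlias");                    // alias-level param, not copied
        in.put("create-collection.numShards", "2");   // copied with prefix stripped
        Map<String, String> sub = extractPrefixed("create-collection.", in);
        System.out.println(sub); // {numShards=2}
    }
}
```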


---




[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1447 - Failure

2018-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1447/

7 tests failed.
FAILED:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, and 
offsets must not go backwards startOffset=0,endOffset=11,lastStartOffset=7 for 
field 'dummy'

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, and offsets must not go backwards 
startOffset=0,endOffset=11,lastStartOffset=7 for field 'dummy'
at 
__randomizedtesting.SeedInfo.seed([7CA2061763145E4A:41432F762406438A]:0)
at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:767)
at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:240)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:497)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1727)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1462)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:672)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:562)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:856)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Updated] (SOLR-11851) Issue After adding another node in Apache Solr Cluster

2018-01-12 Thread Sushil Tripathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushil Tripathi updated SOLR-11851:
---
Description: 
Environment details:

Red Hat Enterprise Linux Server release 7.4
VM1 configured with:
1. ZooKeeper instances 1, 2, and 3 on different ports
2. Solr 7.2 configured with 2 nodes, 2 shards, and 2 replicas

VM2: a new server we are trying to add to the existing cluster. We followed the 
instructions from the Apache Solr Reference Guide for 7.2, as below:

Unpack Solr-7.2.0.tar.gz, then:
mkdir -p example/cloud/node3/solr
cp server/solr/solr.xml example/cloud/node3/solr
bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z :


Issue:
=
While calling the URL http://10.0.12.57:8983/solr/, it seems the new node is 
still not part of the cluster and does not have any cores or indexes. Thanks in 
advance for your help.

Error -
=
HTTP ERROR 404

Problem accessing /solr/. Reason:

Not Found

Caused by:

javax.servlet.UnavailableException: Error processing the request. CoreContainer 
is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:342)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)


  was:
while calling URL- http://10.0.12.57:8983/solr/

It seems new node still not part of cluster also not having any core and 
indexes. Thanks for help in advance.

Error -

HTTP ERROR 404

Problem accessing /solr/. Reason:

Not Found

Caused by:

javax.servlet.UnavailableException: Error processing the request. CoreContainer 
is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:342)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 

[jira] [Created] (SOLR-11851) Issue After adding another node in Apache Solr Cluster

2018-01-12 Thread Sushil Tripathi (JIRA)
Sushil Tripathi created SOLR-11851:
--

 Summary: Issue After adding another node in Apache Solr Cluster 
 Key: SOLR-11851
 URL: https://issues.apache.org/jira/browse/SOLR-11851
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Affects Versions: 7.2
 Environment: Red Hat Enterprise Linux Server release 7.4
VM1 configured with:
1. ZooKeeper instances 1, 2, and 3 on different ports
2. Solr 7.2 configured with 2 nodes, 2 shards, and 2 replicas

VM2: a new server we are trying to add to the existing cluster. We followed the 
instructions from the Apache Solr Reference Guide for 7.2, as below:

Unpack Solr-7.2.0.tar.gz, then:
mkdir -p example/cloud/node3/solr
cp server/solr/solr.xml example/cloud/node3/solr
bin/solr start -cloud -s example/cloud/node3/solr -p 8987 -z :
 

Reporter: Sushil Tripathi


While calling the URL http://10.0.12.57:8983/solr/, it seems the new node is 
still not part of the cluster and does not have any cores or indexes. Thanks in 
advance for your help.

Error -

HTTP ERROR 404

Problem accessing /solr/. Reason:

Not Found

Caused by:

javax.servlet.UnavailableException: Error processing the request. CoreContainer 
is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:342)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)







[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1169 - Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1169/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([E3DBAC311B0AE8DF:6B8F93EBB5F68527]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:913)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Closed] (SOLR-11850) Seeing lot of ERROR servlet.SolrDispatchFilter - org.mortbay.jetty.EofException

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-11850.


> Seeing lot of ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
> ---
>
> Key: SOLR-11850
> URL: https://issues.apache.org/jira/browse/SOLR-11850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: Production
>Reporter: Shashi
>
> Hi, we have an application on Apache Solr 4.0.0.2011.11.15.18.03.14.
> Recently we have been seeing a lot of errors in the logs.
> 2018-01-12 16:22:59,013 ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at 
> org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 
> org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
>   at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
>   at sun.nio.cs.StreamEncoder.flush(Unknown Source)
>   at java.io.OutputStreamWriter.flush(Unknown Source)
>   at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:59)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:122)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:110)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: java.net.SocketException: Connection reset by peer: socket write 
> error
>   at java.net.SocketOutputStream.socketWrite0(Native Method)
>   at java.net.SocketOutputStream.socketWrite(Unknown Source)
>   at java.net.SocketOutputStream.write(Unknown Source)
>   at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:149)
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
>   ... 29 more
> 2018-01-12 16:22:59,013 WARN mortbay.log - Committed before 500 null
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at 
> org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 
> org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
>   at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
>   at sun.nio.cs.StreamEncoder.flush(Unknown Source)
>   at java.io.OutputStreamWriter.flush(Unknown Source)
>   at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
>   at 
> 

[jira] [Resolved] (SOLR-11850) Seeing lot of ERROR servlet.SolrDispatchFilter - org.mortbay.jetty.EofException

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11850.
--
Resolution: Invalid

> Seeing lot of ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
> ---
>
> Key: SOLR-11850
> URL: https://issues.apache.org/jira/browse/SOLR-11850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: Production
>Reporter: Shashi
>
> Hi, we have an application on Apache Solr 4.0.0.2011.11.15.18.03.14.
> Recently we have been seeing a lot of errors in the logs.
> 2018-01-12 16:22:59,013 ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at 
> org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 
> org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
>   at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
>   at sun.nio.cs.StreamEncoder.flush(Unknown Source)
>   at java.io.OutputStreamWriter.flush(Unknown Source)
>   at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:59)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:122)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:110)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: java.net.SocketException: Connection reset by peer: socket write 
> error
>   at java.net.SocketOutputStream.socketWrite0(Native Method)
>   at java.net.SocketOutputStream.socketWrite(Unknown Source)
>   at java.net.SocketOutputStream.write(Unknown Source)
>   at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:149)
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
>   ... 29 more
> 2018-01-12 16:22:59,013 WARN mortbay.log - Committed before 500 null
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at 
> org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 
> org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
>   at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
>   at sun.nio.cs.StreamEncoder.flush(Unknown Source)
>   at java.io.OutputStreamWriter.flush(Unknown Source)
>   at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
>   at 
> 

[jira] [Commented] (SOLR-11850) Seeing lot of ERROR servlet.SolrDispatchFilter - org.mortbay.jetty.EofException

2018-01-12 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324276#comment-16324276
 ] 

Cassandra Targett commented on SOLR-11850:
--

Have you discussed this on the Solr-User mailing list yet? That's a better 
place for diagnosis than the JIRA project, which we reserve for suspected and 
confirmed bugs. More information is available at: 
https://lucene.apache.org/solr/community.html#mailing-lists-irc.

When you mail the list, please provide additional information, such as what you 
or your users are doing when this occurs and any configurations associated with 
those actions.

This issue will be closed as Invalid since it is not a confirmed bug and not 
enough information has been supplied to begin any diagnosis.

> Seeing lot of ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
> ---
>
> Key: SOLR-11850
> URL: https://issues.apache.org/jira/browse/SOLR-11850
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: Production
>Reporter: Shashi
>
> Hi, we have an application on Apache Solr 4.0.0.2011.11.15.18.03.14.
> Recently we have been seeing a lot of errors in the logs.
> 2018-01-12 16:22:59,013 ERROR servlet.SolrDispatchFilter - 
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at 
> org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 
> org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
>   at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
>   at sun.nio.cs.StreamEncoder.flush(Unknown Source)
>   at java.io.OutputStreamWriter.flush(Unknown Source)
>   at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
>   at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
>   at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:59)
>   at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:122)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:110)
>   at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>   at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>   at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: java.net.SocketException: Connection reset by peer: socket write error
>   at java.net.SocketOutputStream.socketWrite0(Native Method)
>   at java.net.SocketOutputStream.socketWrite(Unknown Source)
>   at java.net.SocketOutputStream.write(Unknown Source)
>   at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
>   at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:149)
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
>   ... 29 more
> 2018-01-12 16:22:59,013 WARN mortbay.log - Committed before 500 null
> org.mortbay.jetty.EofException
>   at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
>   at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
>   at 

[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324250#comment-16324250
 ] 

Robert Muir commented on LUCENE-4198:
-

There are a lot of approaches for getting the same benefits with facets, such 
as returning fast top-N search results immediately and then running the exact 
facet search as a separate query. This can be asynchronous from a user-interface 
perspective, so it still reduces latency for the user. There is also the idea of 
approximate facet counts, which I think has been explored a bit in the Lucene 
facets module. But we have to start somewhere.

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324247#comment-16324247
 ] 

Adrien Grand commented on LUCENE-4198:
--

Correct, it is only good at computing the top-scoring hits.
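A toy sketch of why that is (illustrative Python, not Lucene's actual implementation; the block structure and names here are assumptions): per-block "impact" metadata lets the scorer skip whole postings blocks that cannot beat the current top-k threshold, but the skipped documents are then never visited, so exact total hit counts and facet tallies cannot come from the same traversal.

```python
import heapq

def top_k_with_impacts(blocks, k):
    """Score only blocks that can still beat the current top-k threshold.

    `blocks` is a list of (block_max_score, postings) pairs, where the
    per-block max is the kind of impact metadata this issue proposes
    storing in the index. Purely illustrative, not Lucene code.
    """
    heap = []  # min-heap holding the current top-k scores
    for block_max, postings in blocks:
        # If even the best score in this block cannot enter the top-k,
        # skip the whole block. The skipped documents are never visited,
        # which is exactly why total hit counts and facet tallies would
        # be wrong under this optimization.
        if len(heap) == k and block_max <= heap[0]:
            continue
        for doc_id, score in postings:
            if len(heap) < k:
                heapq.heappush(heap, score)
            elif score > heap[0]:
                heapq.heapreplace(heap, score)
    return sorted(heap, reverse=True)
```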

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.






[jira] [Commented] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324246#comment-16324246
 ] 

Robert Muir commented on LUCENE-8129:
-

Otherwise the patch looks great to me. Thanks!

> Support for defining a Unicode set filter when using ICUFoldingFilter
> -
>
> Key: LUCENE-8129
> URL: https://issues.apache.org/jira/browse/LUCENE-8129
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: ICUFoldingFilterFactory, patch-available, patch-with-test
> Attachments: LUCENE-8129.patch
>
>
> While ICUNormalizer2FilterFactory supports a filter attribute to define a 
> Unicode set filter, ICUFoldingFilterFactory does not support it. A filter 
> allows one to, e.g., exclude a set of characters from being folded. For 
> Finnish and Swedish, for example, the filter could be defined like this:
>   
> Note: An additional MappingCharFilterFactory or solr.LowerCaseFilterFactory 
> would be needed for lowercasing the characters excluded from folding. This is 
> similar to what Elasticsearch provides (see 
> https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html).
> I'll add a patch that does this similarly to ICUNormalizer2FilterFactory. 
> Applies at least to master and branch_7x.
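For illustration only, a rough Python approximation of what such a Unicode set filter does (the real ICUFoldingFilter applies full ICU folding; the function and the excluded character set below are hypothetical examples, not part of the patch):

```python
import unicodedata

def fold(text, exclude=frozenset("åäöÅÄÖ")):
    """Rough approximation of ICU folding with a Unicode set filter:
    characters in `exclude` pass through unchanged, everything else is
    lowercased and stripped of diacritics. Real ICU folding does far
    more; this only illustrates the filtering behaviour."""
    out = []
    for ch in text:
        if ch in exclude:
            out.append(ch)  # excluded from folding, kept as-is
        else:
            decomposed = unicodedata.normalize("NFKD", ch)
            out.append("".join(c for c in decomposed
                               if not unicodedata.combining(c)).lower())
    return "".join(out)
```

With the Finnish/Swedish-style exclusion set, "Ålesund café" keeps its "Å" while "é" still folds to "e"; with an empty set everything folds.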






[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324234#comment-16324234
 ] 

David Smiley commented on LUCENE-4198:
--

Just so I'm clear on something: does this performance improvement only apply 
when the search doesn't track the total hits, and generally wouldn't work with 
a faceted search as well?

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7109 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7109/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestTrackingDirectoryWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestTrackingDirectoryWrapper_7952F459458FF956-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestTrackingDirectoryWrapper_7952F459458FF956-001\tempDir-006
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestTrackingDirectoryWrapper_7952F459458FF956-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestTrackingDirectoryWrapper_7952F459458FF956-001\tempDir-006

at __randomizedtesting.SeedInfo.seed([7952F459458FF956]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008\dir_2:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008\dir_2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008\dir_2:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001\tempDir-008\dir_2
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_FF33BE19797822FB-001

at __randomizedtesting.SeedInfo.seed([FF33BE19797822FB]:0)
at 

[jira] [Created] (SOLR-11850) Seeing lot of ERROR servlet.SolrDispatchFilter - org.mortbay.jetty.EofException

2018-01-12 Thread Shashi (JIRA)
Shashi created SOLR-11850:
-

 Summary: Seeing lot of ERROR servlet.SolrDispatchFilter - 
org.mortbay.jetty.EofException
 Key: SOLR-11850
 URL: https://issues.apache.org/jira/browse/SOLR-11850
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: Production
Reporter: Shashi


Hi, we have an application running on Apache Solr 4.0.0.2011.11.15.18.03.14.

Recently we have been seeing a lot of these errors in the logs:

2018-01-12 16:22:59,013 ERROR servlet.SolrDispatchFilter - org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
at sun.nio.cs.StreamEncoder.flush(Unknown Source)
at java.io.OutputStreamWriter.flush(Unknown Source)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:59)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:122)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:110)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:149)
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
... 29 more

2018-01-12 16:22:59,013 WARN mortbay.log - Committed before 500 null

org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
at sun.nio.cs.StreamEncoder.flush(Unknown Source)
at java.io.OutputStreamWriter.flush(Unknown Source)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:341)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:129)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:59)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:122)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:110)
at 

[jira] [Updated] (SOLR-6616) Make shards.tolerant and timeAllowed work together

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6616:

Component/s: search

> Make shards.tolerant and timeAllowed work together
> --
>
> Key: SOLR-6616
> URL: https://issues.apache.org/jira/browse/SOLR-6616
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-6616.patch
>
>
> From SOLR-5986:
> {quote}
> As of now, when timeAllowed is set, we never get back an exception but just 
> partialResults in the response header is set to true in case of a shard 
> failure. This translates to shards.tolerant being ignored in that case.
> On the code level, the TimeExceededException never reaches ShardHandler and 
> so the Exception is never set (similarly for ExitingReaderException) and/or 
> returned to the client.
> {quote}






[jira] [Updated] (SOLR-6612) maxScore included in distributed search results even if score not requested

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6612:

Component/s: multicore

> maxScore included in distributed search results even if score not requested
> ---
>
> Key: SOLR-6612
> URL: https://issues.apache.org/jira/browse/SOLR-6612
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.10.1
>Reporter: Steve Molloy
>Priority: Minor
> Attachments: SOLR-6612.patch
>
>
> When performing a search on a single core, maxScore is only included in the 
> response if scores were specifically requested (fl=*,score). In distributed 
> searches, however, maxScore is always part of the results whether or not the 
> scores were requested. The behaviour should be the same whether the search is 
> distributed or not.
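The expected behaviour can be sketched as follows (hypothetical merge logic for illustration only, not Solr's actual distributed-merge code):

```python
def merge_shard_responses(shard_results, score_requested):
    """Sketch of single-core-consistent merging: only include maxScore
    in the merged response when the client asked for scores
    (fl=*,score). Names and response shape are illustrative."""
    merged = {"docs": [], "numFound": 0}
    max_score = 0.0
    for shard in shard_results:
        merged["docs"].extend(shard["docs"])
        merged["numFound"] += shard["numFound"]
        max_score = max(max_score, shard.get("maxScore", 0.0))
    if score_requested:
        merged["maxScore"] = max_score  # matches single-core behaviour
    return merged
```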






[jira] [Commented] (SOLR-6612) maxScore included in distributed search results even if score not requested

2018-01-12 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324209#comment-16324209
 ] 

Cassandra Targett commented on SOLR-6612:
-

I'm still able to reproduce this inconsistency with Solr 7.2.

> maxScore included in distributed search results even if score not requested
> ---
>
> Key: SOLR-6612
> URL: https://issues.apache.org/jira/browse/SOLR-6612
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.10.1
>Reporter: Steve Molloy
>Priority: Minor
> Attachments: SOLR-6612.patch
>
>
> When performing a search on a single core, maxScore is only included in the 
> response if scores were specifically requested (fl=*,score). In distributed 
> searches, however, maxScore is always part of the results whether or not the 
> scores were requested. The behaviour should be the same whether the search is 
> distributed or not.






[jira] [Updated] (SOLR-6593) DELETE*REPLICA Tests fail frequently on jenkins

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6593:

Component/s: Tests

> DELETE*REPLICA Tests fail frequently on jenkins
> ---
>
> Key: SOLR-6593
> URL: https://issues.apache.org/jira/browse/SOLR-6593
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Anshum Gupta
>
> DeleteReplicaTest and DeleteLastCustomShardedReplica tests have been failing 
> on Jenkins since around the time the 4x->5x changes happened.
> It might very well be another commit around that time, and not be related to 
> the svn move/back-ports etc.






[jira] [Updated] (SOLR-6588) Combination of nested documents and incremental partial update on int field does not work

2018-01-12 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-6588:

Component/s: (was: SolrJ)
 clients - java

> Combination of nested documents and incremental partial update on int field 
> does not work
> -
>
> Key: SOLR-6588
> URL: https://issues.apache.org/jira/browse/SOLR-6588
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.9, 4.10
>Reporter: Ali Nzm
>  Labels: solrJ
>
> When adding nested documents combined with an incremental partial update 
> (for an int field) on the same Solr document, the nested part will not work. 
> This problem exists on both the 4.9 and 4.10 versions.






Re: What are the expectations for cleanup SolrCloud tests when tests.iters is specified?

2018-01-12 Thread Erick Erickson
Well, since I'm in there anyway I'll include the note in the patch. At
least that'll alert people to dig deeper.

On Thu, Jan 11, 2018 at 8:34 PM, David Smiley  wrote:
> Yeah thanks guys -- beast it is.
>
> I wonder if we should not document tests.iters (a bit more expert), or add a
> warning to it in the help output saying something like: NOTE: some tests are
> incompatible because BeforeClass/AfterClass isn't performed in between. Try
> beast.iters instead.
>
> On Thu, Jan 11, 2018 at 8:39 PM Erick Erickson 
> wrote:
>>
>> Ok, thanks both. That makes a lot of sense. I'll just use  beasting for
>> most anything SolrCloud related.
>>
>>
>> On Thu, Jan 11, 2018 at 4:56 PM, Chris Hostetter
>>  wrote:
>>>
>>> : (I had left the comment in question)
>>> : I think a test shouldn't have to explicitly clean up after itself,
>>> except
>>> : perhaps intra-method as-needed; test-infrastructure should do the class
>>> : (test suite).
>>>
>>> All test code should always be expected to clean up its messes at
>>> whatever "ending" stage corresponds with the stage where the mess was
>>> made.
>>>
>>> How the mess is cleaned up, and whether infrastructure/scaffolding code
>>> helps do that, depends on the specifics of the infrastructure/scaffolding in
>>> question -- if you make a mess in a test method that the general-purpose
>>> infrastructure doesn't expect, then the burden is on you
>>> to add the level of infrastructure (either in your specific test class, or
>>> in a new abstract base class, depending on how you think it might be
>>> re-usable) to do so.
>>>
>>> In the abstract: Assume AbstractParentTest class that creates some
>>> "parentMess" in @BeforeClass, and deletes "parentMess" in an
>>> @AfterClass
>>>
>>> 1) if you want all of your tests methods to have access to a
>>> shiny new/unique instance of "childMess" in every test method, then
>>> burden
>>> is on you to create/destroy childMess in your own @Before and @After
>>> methods
>>>
>>> 2) If you want test methods that are going to mutate "parentMess" then
>>> the
>>> burden is on you to ensure (ideally via @After methods that "do the right
>>> thing" even if the test method fails) that "parentMess" is correctly
>>> reset
>>> so that all the test methods depending on "parentMess" can run in any
>>> order (or run multiple times in a single instance) ... either that, or
>>> you
>>> shouldn't use AbstractParentTest -- you should create/destroy
>>> a "parentMess" instance yourself in your @Before & @After methods
>>>
>>> Concretely...
>>>
>>> : > The assumption was that everything would be cleaned up between runs
>>> : > doesn't appear to be true for SolrCloud tests. I think one of two
>>> things is
>>> : > happening:
>>> : >
>>> : > 1> collections (and perhaps aliases) are simply not cleaned up
>>> : >
>>> : > 2> there is a timing issue, we have waitForCollectionToDisappear in
>>> test
>>> : > code after all.
>>>
>>> ...these are vague statements ("everything", "SolrCloud tests", "not
>>> cleaned up") and, not being intimately familiar with the test class in
>>> question, it's not clear exactly what is happening or what expectations various
>>> people have -- BUT -- assuming this is in regard to
>>> SolrCloudTestCase, that base class has very explicit docs about
>>> how it's intended to be used: you are expected to configure & init a
>>> MiniSolrCloudCluster instance in an @BeforeClass method -- it has helper
>>> code for this -- and that cluster lives for the lifespan of the class, at
>>> which point an @AfterClass in SolrCloudTestCase will ensure it gets torn
>>> down.
>>>
>>> Tests which subclass SolrCloudTestCase should be initializing the cluster
>>> only in @BeforeClass.  Most tests should only be creating collections in
>>> @BeforeClass -- although you are certainly free to do things like
>>> create/destroy collections on a per-test-method basis in @Before/@After
>>> methods if you have a need for that sort of thing.
>>>
>>> If that's not the lifecycle you want -- if you want a lifecycle where every
>>> individual test method gets its own pristine new MiniSolrCloudCluster
>>> instance w/o any pre-existing collections -- then you shouldn't use
>>> SolrCloudTestCase; you should just create/destroy
>>> unique MiniSolrCloudCluster instances in your own @Before/@After methods.
>>>
>>>
>>> Bottom Line: there is no one-size-fits-all test scaffolding -- not when we
>>> have some test classes where we want to create a collection once, fill it
>>> with lots of docs, and then re-use it in 100s of test methods, but other
>>> classes want to test the very operation of creating/deleting collections.
>>>
>>> Use the tools that make sense for the test you're writing.
>>>
>>>
>>>
>>>
>>> -Hoss
>>> http://www.lucidworks.com/
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For 

[jira] [Commented] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324172#comment-16324172
 ] 

Robert Muir commented on LUCENE-8129:
-

Minor nitpick: can we rename it from {{normalizer}} to {{NORMALIZER}} too, 
since it acts as a constant?

> Support for defining a Unicode set filter when using ICUFoldingFilter
> -
>
> Key: LUCENE-8129
> URL: https://issues.apache.org/jira/browse/LUCENE-8129
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: ICUFoldingFilterFactory, patch-available, patch-with-test
> Attachments: LUCENE-8129.patch
>
>
> While ICUNormalizer2FilterFactory supports a filter attribute to define a 
> Unicode set filter, ICUFoldingFilterFactory does not support it. A filter 
> allows one to, e.g., exclude a set of characters from being folded. For 
> Finnish and Swedish, for example, the filter could be defined like this:
>   
> Note: An additional MappingCharFilterFactory or solr.LowerCaseFilterFactory 
> would be needed for lowercasing the characters excluded from folding. This is 
> similar to what Elasticsearch provides (see 
> https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html).
> I'll add a patch that does this similarly to ICUNormalizer2FilterFactory. 
> Applies at least to master and branch_7x.






[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 128 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/128/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=22217, name=jetty-launcher-4382-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=22213, name=jetty-launcher-4382-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=22217, name=jetty-launcher-4382-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at 
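The leaked threads are Curator's ZooKeeper event threads, which are still waiting on a CountDownLatch when the suite-scope leak checker runs. The generic remedy is for teardown to signal and join any background threads it started; a minimal stand-alone sketch of that pattern (plain Python threads standing in for the Curator/ZooKeeper client, not Solr's actual code):

```python
import threading

def run_client(stop_event):
    # stand-in for Curator's ZooKeeper EventThread: waits until told to stop
    while not stop_event.is_set():
        stop_event.wait(0.05)

stop = threading.Event()
t = threading.Thread(target=run_client, args=(stop,), name="event-thread")
t.start()

# suite teardown must signal and join background threads;
# otherwise a suite-scope leak checker (as above) flags them
stop.set()
t.join(timeout=5)
assert not t.is_alive()
```

In the Solr test this corresponds to closing the Curator client (and thus its event thread) before the suite finishes.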

[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324101#comment-16324101
 ] 

Adrien Grand commented on LUCENE-4198:
--

I tested wikibigall as well, which has the benefit of not having artificially 
truncated lengths like wikimedium:

{noformat}
                    Task    QPS baseline      StdDev     QPS patch      StdDev                Pct diff
              AndHighLow         1440.24      (3.0%)        794.43      (2.9%)  -44.8% ( -49% -  -40%)
              AndHighMed          121.80      (1.4%)         94.75      (1.5%)  -22.2% ( -24% -  -19%)
             AndHighHigh           56.62      (1.2%)         45.26      (1.4%)  -20.1% ( -22% -  -17%)
               OrHighMed           93.16      (3.3%)         78.18      (3.1%)  -16.1% ( -21% -   -9%)
               OrHighLow          827.62      (2.6%)        748.49      (3.5%)   -9.6% ( -15% -   -3%)
              OrHighHigh           35.14      (4.4%)         32.25      (4.6%)   -8.2% ( -16% -    0%)
                  Fuzzy1          265.67      (4.7%)        246.12      (5.0%)   -7.4% ( -16% -    2%)
               LowPhrase          166.32      (1.3%)        157.61      (1.6%)   -5.2% (  -8% -   -2%)
                  Fuzzy2          184.41      (4.3%)        176.40      (3.5%)   -4.3% ( -11% -    3%)
             LowSpanNear          749.77      (2.1%)        726.14      (2.2%)   -3.2% (  -7% -    1%)
               MedPhrase           23.77      (2.0%)         23.14      (1.9%)   -2.6% (  -6% -    1%)
              HighPhrase           18.73      (3.0%)         18.24      (3.0%)   -2.6% (  -8% -    3%)
             MedSpanNear          113.11      (2.3%)        110.17      (2.0%)   -2.6% (  -6% -    1%)
         MedSloppyPhrase           10.28      (6.5%)         10.07      (6.9%)   -2.0% ( -14% -   12%)
         LowSloppyPhrase           12.68      (6.6%)         12.43      (7.1%)   -2.0% ( -14% -   12%)
        HighSloppyPhrase            9.47      (7.0%)          9.29      (7.5%)   -1.9% ( -15% -   13%)
                  IntNRQ           27.89      (7.0%)         27.58      (8.7%)   -1.1% ( -15% -   15%)
            HighSpanNear            9.05      (4.9%)          8.98      (4.7%)   -0.8% (  -9% -    9%)
                 Respell          273.80      (2.3%)        273.79      (2.2%)   -0.0% (  -4% -    4%)
       HighTermMonthSort           68.77      (7.1%)         69.60      (7.8%)    1.2% ( -12% -   17%)
                Wildcard           92.81      (5.8%)         94.67      (6.2%)    2.0% (  -9% -   14%)
   HighTermDayOfYearSort           61.99     (10.3%)         64.18     (10.9%)    3.5% ( -16% -   27%)
                 Prefix3           41.42      (8.3%)         42.96      (8.2%)    3.7% ( -11% -   22%)
                 LowTerm          694.99      (2.5%)       3126.69     (17.7%)  349.9% ( 321% -  379%)
                HighTerm           58.04      (2.7%)        490.60     (58.6%)  745.3% ( 666% -  828%)
                 MedTerm          120.80      (2.6%)       1053.44     (55.1%)  772.1% ( 695% -  852%)
{noformat}

The {{.doc}} file is 5.2% larger and the index is 1.5% larger overall.
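For readers unfamiliar with the luceneutil output above: the Pct diff column appears to be the relative change of patched QPS against baseline QPS, which can be spot-checked (this formula is an inference from the numbers, not taken from the benchmark source):

```python
def pct_diff(baseline_qps, patch_qps):
    # percentage change of patched QPS relative to baseline
    return (patch_qps - baseline_qps) / baseline_qps * 100.0

# spot-check against two rows of the table above
assert round(pct_diff(694.99, 3126.69), 1) == 349.9   # LowTerm
assert round(pct_diff(1440.24, 794.43), 1) == -44.8   # AndHighLow
```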

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
> Attachments: LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100. That issue shows an example of something similar to 
> impact indexing (though his implementation currently stores a max for the 
> entire term, the problem is the same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the Postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking to experiment 
> in a branch with these changes and see if we can make it work well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 389 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/389/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig

Error Message:
expected: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{"cluster-preferences":[{"maximize":"freedisk"}],"triggers":{".auto_add_replicas":{"name":".auto_add_replicas","event":"nodeLost","waitFor":30,"actions":[{"name":"auto_add_replicas_plan","class":"solr.AutoAddReplicasPlanAction"},{"name":"execute_plan","class":"solr.ExecutePlanAction"}],"enabled":true}},"listeners":{".auto_add_replicas.system":{"trigger":".auto_add_replicas","afterAction":[],"stage":["STARTED","ABORTED","SUCCEEDED","FAILED","BEFORE_ACTION","AFTER_ACTION","IGNORED"],"class":"org.apache.solr.cloud.autoscaling.SystemLogListener","beforeAction":[]}},"properties":{}}> 
but was: org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{"cluster-preferences":[{"maximize":"freedisk"}],"triggers":{".auto_add_replicas":{"name":".auto_add_replicas","event":"nodeLost","waitFor":30,"actions":[{"name":"auto_add_replicas_plan","class":"solr.AutoAddReplicasPlanAction"},{"name":"execute_plan","class":"solr.ExecutePlanAction"}],"enabled":true}},"listeners":{".auto_add_replicas.system":{"trigger":".auto_add_replicas","afterAction":[],"stage":["STARTED","ABORTED","SUCCEEDED","FAILED","BEFORE_ACTION","AFTER_ACTION","IGNORED"],"class":"org.apache.solr.cloud.autoscaling.SystemLogListener","beforeAction":[]}},"properties":{}}>

Stack Trace:
java.lang.AssertionError: expected: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}> but was: 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig<{
  "cluster-preferences":[{"maximize":"freedisk"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":30,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}>
        at __randomizedtesting.SeedInfo.seed([1DBE2AAE8CF9A1CD:223629069BAF512A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at org.apache.solr.cloud.autoscaling.sim.TestClusterStateProvider.testAutoScalingConfig(TestClusterStateProvider.java:214)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
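Notably, the "expected" and "actual" AutoScalingConfig values render identically in the assertion message, which usually means equality depends on state that toString() omits (for example a ZooKeeper version field; that specific field is an assumption, not verified against the Solr source). A toy illustration of two objects that print the same yet compare unequal:

```python
class Config:
    """Toy stand-in: equality covers a field that repr() omits."""
    def __init__(self, data, zk_version):
        self.data = data
        self.zk_version = zk_version  # hypothetical hidden field
    def __repr__(self):
        return f"AutoScalingConfig<{self.data}>"
    def __eq__(self, other):
        return (self.data, self.zk_version) == (other.data, other.zk_version)

a = Config({"waitFor": 30}, zk_version=0)
b = Config({"waitFor": 30}, zk_version=1)
assert repr(a) == repr(b)   # the assertion message shows identical text...
assert a != b               # ...yet the objects compare unequal
```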

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21263 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21263/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
        at __randomizedtesting.SeedInfo.seed([1867B0CC74B2506A]:0)
        at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
        at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
        at org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
        at __randomizedtesting.SeedInfo.seed([1867B0CC74B2506A]:0)
        at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
        at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
        at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288)
        at jdk.internal.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 

[jira] [Updated] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated LUCENE-8129:

Attachment: (was: SOLR-11811.patch)

> Support for defining a Unicode set filter when using ICUFoldingFilter
> -
>
> Key: LUCENE-8129
> URL: https://issues.apache.org/jira/browse/LUCENE-8129
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: ICUFoldingFilterFactory, patch-available, patch-with-test
> Attachments: LUCENE-8129.patch
>
>
> While ICUNormalizer2FilterFactory supports a filter attribute to define a 
> Unicode set filter, ICUFoldingFilterFactory does not support it. A filter 
> allows one to e.g. exclude a set of characters from being folded. E.g. for 
> Finnish and Swedish the filter could be defined like this:
>   
> Note: An additional MappingCharFilterFactory or solr.LowerCaseFilterFactory 
> would be needed for lowercasing the characters excluded from folding. This is 
> similar to what Elasticsearch provides (see 
> https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html).
> I'll add a patch that does this similarly to ICUNormalizer2FilterFactory. 
> Applies at least to master and branch_7x.
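(The archive stripped the inline config example from the quoted description above.) The idea of a Unicode set filter is to exempt a chosen set of characters from folding while everything else is folded. A rough stand-alone approximation of that behavior, using Python's unicodedata rather than the actual ICU4J UnicodeSet/FilteredNormalizer2 machinery, with a hypothetical exclusion set for Finnish/Swedish:

```python
import unicodedata

EXCLUDE = set("åäöÅÄÖ")  # hypothetical set of letters to keep unfolded

def fold(text):
    out = []
    for ch in text:
        if ch in EXCLUDE:
            out.append(ch.lower())  # lowercase only; keep the diacritic
        else:
            # rough approximation of folding: lowercase, then strip combining marks
            decomposed = unicodedata.normalize("NFKD", ch.lower())
            out.append("".join(c for c in decomposed if not unicodedata.combining(c)))
    return "".join(out)

assert fold("Åbo Café") == "åbo cafe"  # Å survives folding, é does not
```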






[jira] [Updated] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated LUCENE-8129:

Attachment: LUCENE-8129.patch

Thanks, it was indeed bad. 

I checked that Normalizer2.getInstance calls Norm2AllModes.getInstance which 
returns a cached instance if available, so I believe you're right about it 
being immutable. An improved patch is attached.

> Support for defining a Unicode set filter when using ICUFoldingFilter
> -
>
> Key: LUCENE-8129
> URL: https://issues.apache.org/jira/browse/LUCENE-8129
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: ICUFoldingFilterFactory, patch-available, patch-with-test
> Attachments: LUCENE-8129.patch
>
>
> While ICUNormalizer2FilterFactory supports a filter attribute to define a 
> Unicode set filter, ICUFoldingFilterFactory does not support it. A filter 
> allows one to e.g. exclude a set of characters from being folded. E.g. for 
> Finnish and Swedish the filter could be defined like this:
>   
> Note: An additional MappingCharFilterFactory or solr.LowerCaseFilterFactory 
> would be needed for lowercasing the characters excluded from folding. This is 
> similar to what Elasticsearch provides (see 
> https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html).
> I'll add a patch that does this similarly to ICUNormalizer2FilterFactory. 
> Applies at least to master and branch_7x.






[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2018-01-12 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323981#comment-16323981
 ] 

Amrit Sarkar commented on SOLR-7964:


Checking in, is this still an issue in Solr 7.x versions?

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 115 - Still Failing

2018-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/115/

No tests ran.

Build Log:
[...truncated 28287 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.42 sec (0.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.6 MB in 0.11 sec (279.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 73.0 MB in 0.17 sec (421.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 83.5 MB in 0.25 sec (338.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6284 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6284 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (33.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker] 53.9 MB in 0.81 sec (66.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker] 150.2 MB in 2.28 sec (66.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker] 151.2 MB in 1.64 sec (92.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] Creating Solr home directory 

[jira] [Commented] (SOLR-11722) API to create a Time Routed Alias and first collection

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323976#comment-16323976
 ] 

ASF GitHub Bot commented on SOLR-11722:
---

Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/304#discussion_r161226045
  
--- Diff: solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ---
@@ -476,6 +451,31 @@ private static void addStatusToResponse(NamedList results, RequestStatus
       SolrIdentifierValidator.validateAliasName(req.getParams().get(NAME));
       return req.getParams().required().getAll(null, NAME, "collections");
     }),
+    CREATEROUTEDALIAS_OP(CREATEROUTEDALIAS, (req, rsp, h) -> {
+      String alias = req.getParams().get(NAME);
+      SolrIdentifierValidator.validateAliasName(alias);
+      Map params = req.getParams().required()
+          .getAll(null, REQUIRED_ROUTING_PARAMS.toArray(new String[REQUIRED_ROUTING_PARAMS.size()]));
+      req.getParams().getAll(params, NONREQUIRED_ROUTING_PARAMS);
+      // subset the params to reuse the collection creation/parsing code
+      ModifiableSolrParams collectionParams = extractPrefixedParams("create-collection.", req.getParams());
+      if (collectionParams.get(NAME) != null) {
+        SolrException solrException = new SolrException(BAD_REQUEST, "routed aliases calculate names for their " +
+            "dependent collections, you cannot specify the name.");
+        log.error("Could not create routed alias", solrException);
+        throw solrException;
+      }
+      SolrParams v1Params = convertToV1WhenRequired(req, collectionParams);
+
+      // We need to add this temporary name just to pass validation.
--- End diff --

I wasn't clear; I don't have any issue with this part of the patch. I'm 
suggesting _adding_ a check that we require the configSet.  I needed to click 
on some line to insert my comment; maybe I should have put my review comment on 
the blank line.
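The `extractPrefixedParams("create-collection.", ...)` call in the diff subsets the request parameters so the existing collection-creation parsing can be reused on the stripped keys. A hypothetical sketch of that subsetting in Python (the real Java implementation in CollectionsHandler may differ in details):

```python
def extract_prefixed_params(prefix, params):
    # keep only keys starting with the prefix, and strip the prefix off
    return {k[len(prefix):]: v for k, v in params.items() if k.startswith(prefix)}

params = {"name": "myAlias",
          "create-collection.config": "myConfig",
          "create-collection.numShards": "2"}
assert extract_prefixed_params("create-collection.", params) == {
    "config": "myConfig", "numShards": "2"}
```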


> API to create a Time Routed Alias and first collection
> --
>
> Key: SOLR-11722
> URL: https://issues.apache.org/jira/browse/SOLR-11722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR-11722.patch, SOLR-11722.patch
>
>
> This issue is about creating a single API command to create a "Time Routed 
> Alias" along with its first collection.  Need to decide what endpoint URL it 
> is and parameters.
> Perhaps in v2 it'd be {{/api/collections?command=create-routed-alias}} or 
> alternatively piggy-back off of command=create-alias but we add more options, 
> perhaps with a prefix like "router"?
> Inputs:
> * alias name
> * misc collection creation metadata (e.g. config, numShards, ...) perhaps in 
> this context with a prefix like "collection."
> * metadata for TimeRoutedAliasUpdateProcessor, currently: router.field
> * date specifier for first collection; can include "date math".
> We'll certainly add more options as future features unfold.
> I believe the collection needs to be created first (referring to the alias 
> name via a core property), and then the alias pointing to it which demands 
> collections exist first.  When figuring the collection name, you'll need to 
> reference the format in TimeRoutedAliasUpdateProcessor.
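For illustration, the alias-plus-timestamp collection naming described above might look like the sketch below; this exact scheme is an assumption for the example, since the real format is defined in TimeRoutedAliasUpdateProcessor:

```python
from datetime import datetime, timezone

def tra_collection_name(alias, dt):
    # hypothetical naming scheme: alias name plus the window's start date
    return f"{alias}_{dt.strftime('%Y-%m-%d')}"

name = tra_collection_name("timestream", datetime(2018, 1, 12, tzinfo=timezone.utc))
assert name == "timestream_2018-01-12"
```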






[GitHub] lucene-solr pull request #304: SOLR-11722

2018-01-12 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/304#discussion_r161226045
  
--- Diff: solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ---
@@ -476,6 +451,31 @@ private static void addStatusToResponse(NamedList results, RequestStatus
       SolrIdentifierValidator.validateAliasName(req.getParams().get(NAME));
       return req.getParams().required().getAll(null, NAME, "collections");
     }),
+    CREATEROUTEDALIAS_OP(CREATEROUTEDALIAS, (req, rsp, h) -> {
+      String alias = req.getParams().get(NAME);
+      SolrIdentifierValidator.validateAliasName(alias);
+      Map params = req.getParams().required()
+          .getAll(null, REQUIRED_ROUTING_PARAMS.toArray(new String[REQUIRED_ROUTING_PARAMS.size()]));
+      req.getParams().getAll(params, NONREQUIRED_ROUTING_PARAMS);
+      // subset the params to reuse the collection creation/parsing code
+      ModifiableSolrParams collectionParams = extractPrefixedParams("create-collection.", req.getParams());
+      if (collectionParams.get(NAME) != null) {
+        SolrException solrException = new SolrException(BAD_REQUEST, "routed aliases calculate names for their " +
+            "dependent collections, you cannot specify the name.");
+        log.error("Could not create routed alias", solrException);
+        throw solrException;
+      }
+      SolrParams v1Params = convertToV1WhenRequired(req, collectionParams);
+
+      // We need to add this temporary name just to pass validation.
--- End diff --

I wasn't clear; I don't have any issue with this part of the patch. I'm 
suggesting _adding_ a check that we require the configSet.  I needed to click 
on some line to insert my comment; maybe I should have put my review comment on 
the blank line.


---




[GitHub] lucene-solr pull request #304: SOLR-11722

2018-01-12 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/304#discussion_r161223342
  
--- Diff: solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java ---
@@ -30,13 +44,101 @@
 import org.apache.solr.common.cloud.ZkStateReader;
 import org.apache.solr.common.util.NamedList;
 import org.apache.solr.common.util.StrUtils;
+import org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor;
+import org.apache.solr.util.DateMathParser;
+import org.apache.solr.util.TimeZoneUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
+import static java.time.format.DateTimeFormatter.ISO_INSTANT;
+import static org.apache.solr.cloud.OverseerCollectionMessageHandler.COLL_CONF;
+import static org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST;
 import static org.apache.solr.common.params.CommonParams.NAME;
+import static org.apache.solr.common.params.CommonParams.TZ;
+import static org.apache.solr.handler.admin.CollectionsHandler.ROUTED_ALIAS_COLLECTION_PROP_PFX;
+import static org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.DATE_TIME_FORMATTER;
 
 
 public class CreateAliasCmd implements Cmd {
+
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  public static final String ROUTING_TYPE = "router.name";
+  public static final String ROUTING_FIELD = "router.field";
+  public static final String ROUTING_INCREMENT = "router.interval";
+  public static final String ROUTING_MAX_FUTURE = "router.max-future-ms";
+  public static final String START = "start";
+  // Collection constants should all reflect names in the v2 structured input for this command, not v1
+  // names used for CREATE
+  public static final String CREATE_COLLECTION_CONFIG = "create-collection.config";
--- End diff --

My concern isn't just about the duplication of the Strings/names, it's 
about the "set" of them here.  Even if we could refer to a constant in some 
other class accessible to SolrJ, it would still be a maintenance burden to 
refer to the set of those that exist, since it's redundant with the collection 
creation code.  So can we eliminate this entirely (I'd love that), or failing 
that, have a single place where the set of these strings is defined so we don't 
even need to list them in CreateAliasCmd.
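To illustrate the "single place" idea being asked for (all names here are hypothetical, not the actual patch or Solr API): define the set of create-collection property names once and iterate it, so nothing like CreateAliasCmd ever has to enumerate each constant.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: one source of truth for the v2 property names.
// Any property added to the set later is copied automatically, instead of
// being listed again wherever the alias command builds the CREATE message.
public class RoutedAliasParams {
    static final String PREFIX = "create-collection.";

    // Illustrative subset of v2-style property names, defined exactly once.
    static final Set<String> CREATE_COLLECTION_PROPS = Set.of(
        PREFIX + "config",
        PREFIX + "numShards",
        PREFIX + "replicationFactor");

    // Copy whichever of the known properties are present in the alias request,
    // stripping the prefix so they map onto plain collection-create params.
    static Map<String, String> extractCollectionParams(Map<String, String> request) {
        Map<String, String> out = new HashMap<>();
        for (String prop : CREATE_COLLECTION_PROPS) {
            String v = request.get(prop);
            if (v != null) {
                out.put(prop.substring(PREFIX.length()), v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> req = Map.of(
            PREFIX + "config", "myConfig",
            "router.name", "time"); // unrelated param, ignored by the copy
        System.out.println(extractCollectionParams(req).get("config"));
    }
}
```

The maintenance win is that the set is the only place that needs editing when a new collection-creation property appears.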


---




[jira] [Commented] (SOLR-11722) API to create a Time Routed Alias and first collection

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323968#comment-16323968
 ] 

ASF GitHub Bot commented on SOLR-11722:
---


> API to create a Time Routed Alias and first collection
> --
>
> Key: SOLR-11722
> URL: https://issues.apache.org/jira/browse/SOLR-11722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR-11722.patch, SOLR-11722.patch
>
>
> This issue is about creating a single API command to create a "Time Routed 
> Alias" along with its first collection.  Need to decide what endpoint URL it 
> is and parameters.
> Perhaps in v2 it'd be {{/api/collections?command=create-routed-alias}} or 
> alternatively piggy-back off of command=create-alias but we add more options, 
> perhaps with a prefix like "router"?
> Inputs:
> * alias name
> * misc collection creation metadata (e.g. config, numShards, ...) perhaps in 
> this context with a prefix like "collection."
> * metadata for TimeRoutedAliasUpdateProcessor, currently: router.field
> * date specifier for first collection; can include "date math".
> We'll certainly add more options as future features unfold.
> I believe the collection needs to be created first (referring to the alias 
> name via a core property), and then the alias pointing to it which demands 
> collections exist first.  When figuring the collection name, you'll need to 
> reference the format in TimeRoutedAliasUpdateProcessor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323940#comment-16323940
 ] 

Robert Muir commented on LUCENE-8129:
-

Need to double-check, but I'm pretty sure these things are completely immutable 
in ICU. So even better may be to just change it from private static to public 
static and give a little javadoc blurb about what it is.

Then any code can use it for purposes such as this.

> Support for defining a Unicode set filter when using ICUFoldingFilter
> -
>
> Key: LUCENE-8129
> URL: https://issues.apache.org/jira/browse/LUCENE-8129
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: ICUFoldingFilterFactory, patch-available, patch-with-test
> Attachments: SOLR-11811.patch
>
>
> While ICUNormalizer2FilterFactory supports a filter attribute to define a 
> Unicode set filter, ICUFoldingFilterFactory does not support it. A filter 
> allows one to e.g. exclude a set of characters from being folded. E.g. for 
> Finnish and Swedish the filter could be defined like this:
>   
> Note: An additional MappingCharFilterFactory or solr.LowerCaseFilterFactory 
> would be needed for lowercasing the characters excluded from folding. This is 
> similar to what ElasticSearch provides (see 
> https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu-folding.html).
> I'll add a patch that does this similar to ICUNormalizer2FilterFactory. 
> Applies at least to master and branch_7x.






[jira] [Commented] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323919#comment-16323919
 ] 

Robert Muir commented on LUCENE-8129:
-

Thanks for the patch.

I don't like the change to the way the rules are loaded at all. It's now 
duplicated across the factory and the filter. It also changes the constructor to 
load the rules from a file on every invocation, whereas before that happened only 
once; this is too slow for any real use.

I think it is enough to just remove the keyword 'private' on the existing 
static instance, so that it's package-private instead. Then the factory is able 
to access it, wrap it with a filter, and pass it to Normalizer2Filter.
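The effect of such a Unicode set filter can be sketched with plain JDK classes. Everything here is hypothetical (FilteredFoldDemo and foldExcept are made-up names, and java.text.Normalizer is only a crude stand-in for ICU folding, not the ICU or Lucene API), but it shows the behavior the filter attribute is after: characters in the exclusion set bypass folding entirely.

```java
import java.text.Normalizer;
import java.util.Set;

// Hypothetical sketch, NOT the ICU/Lucene API: shows what a Unicode set
// filter like [^åäöÅÄÖ] achieves. Excluded characters are passed through
// untouched; everything else is decomposed (NFD), has its combining marks
// stripped, and is lowercased -- a crude stand-in for ICU folding.
public class FilteredFoldDemo {
    static String foldExcept(String input, Set<Integer> excluded) {
        StringBuilder out = new StringBuilder();
        input.codePoints().forEach(cp -> {
            if (excluded.contains(cp)) {
                out.appendCodePoint(cp); // excluded: not folded, not even lowercased
            } else {
                String s = new String(Character.toChars(cp));
                out.append(Normalizer.normalize(s, Normalizer.Form.NFD)
                        .replaceAll("\\p{M}", "") // drop combining marks
                        .toLowerCase());
            }
        });
        return out.toString();
    }

    public static void main(String[] args) {
        Set<Integer> finnish = Set.of((int) 'å', (int) 'ä', (int) 'ö',
                                      (int) 'Å', (int) 'Ä', (int) 'Ö');
        // 'Ä' survives unfolded while 'é' is folded to plain 'e'.
        System.out.println(foldExcept("Äiti café", finnish));
    }
}
```

Note how the excluded characters come out still uppercase, which matches the issue's observation that an additional LowerCaseFilterFactory would be needed for them.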







[jira] [Moved] (LUCENE-8129) Support for defining a Unicode set filter when using ICUFoldingFilter

2018-01-12 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir moved SOLR-11811 to LUCENE-8129:


 Security: (was: Public)
  Component/s: (was: Schema and Analysis)
   modules/analysis
Lucene Fields: New
  Key: LUCENE-8129  (was: SOLR-11811)
  Project: Lucene - Core  (was: Solr)







[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1621 - Still Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1621/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:36329/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:36329/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:36329/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:36329/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([74CA753206874A5E:3CBF018600B465CB]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11063) Policy should accept disk space as a hint

2018-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323908#comment-16323908
 ] 

ASF subversion and git services commented on SOLR-11063:


Commit 37e2faa8a947b51ac5308c2f15f6c23d7ec87ae0 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=37e2faa ]

SOLR-11063: Suggesters should accept required freedisk as a hint


> Policy should accept disk space as a hint
> -
>
> Key: SOLR-11063
> URL: https://issues.apache.org/jira/browse/SOLR-11063
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: master (8.0), 7.3
>
>
> The policy should accept minimum disk space required as a hint and therefore 
> refuse to suggest operations if the hint cannot be satisfied.






[jira] [Resolved] (SOLR-11063) Policy should accept disk space as a hint

2018-01-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11063.
---
Resolution: Fixed







[jira] [Commented] (SOLR-11063) Policy should accept disk space as a hint

2018-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323903#comment-16323903
 ] 

ASF subversion and git services commented on SOLR-11063:


Commit fe86ab982d14b02d5fc9842259f9d0ae1a949757 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe86ab9 ]

SOLR-11063: Suggesters should accept required freedisk as a hint








[jira] [Updated] (LUCENE-4198) Allow codecs to index term impacts

2018-01-12 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4198:
-
Attachment: LUCENE-4198.patch

I have taken another approach. The issue with {{setMinCompetitiveScore}} is that 
it usually cannot be efficiently leveraged to speed up e.g. conjunctions. So I 
went with implementing ideas from the block-max WAND (BMW) paper 
(http://engineering.nyu.edu/~suel/papers/bmw.pdf): the patch introduces a new 
{{ImpactsEnum}} which extends {{PostingsEnum}} and introduces two APIs instead 
of {{setMinCompetitiveScore}}:
 - {{int advanceShallow(int target)}} to get scoring information for documents 
that start at {{target}}. The benefit compared to {{advance}} is that it only 
advances the skip list reader, which is much cheaper: no decoding is happening.
 - {{float getMaxScore(int upTo)}} which gives information about scores for doc 
ids between the last target passed to {{advanceShallow}} and {{upTo}}, both included.
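A toy sketch of the block-max skipping these two APIs enable (the class and fields below are illustrative only, not the Lucene implementation): block-level metadata alone decides whether a block can contain a competitive document, so postings for losing blocks are never decoded.

```java
import java.util.List;

// Illustrative only, not the Lucene API. The advanceShallow analogue here is
// that we inspect per-block metadata (the skip-list side) without touching the
// postings themselves; getMaxScore corresponds to reading block.maxScore.
public class BlockMaxDemo {
    static class Block {
        final int firstDoc, lastDoc;
        final float maxScore; // upper bound of any score inside this block
        Block(int firstDoc, int lastDoc, float maxScore) {
            this.firstDoc = firstDoc;
            this.lastDoc = lastDoc;
            this.maxScore = maxScore;
        }
    }

    // Return the index of the first block whose score upper bound is
    // competitive; earlier blocks can be skipped without decoding postings.
    static int firstCompetitiveBlock(List<Block> blocks, float minCompetitiveScore) {
        for (int i = 0; i < blocks.size(); i++) {
            if (blocks.get(i).maxScore >= minCompetitiveScore) {
                return i;
            }
        }
        return -1; // no block can produce a competitive hit
    }

    public static void main(String[] args) {
        List<Block> blocks = List.of(
            new Block(0, 127, 1.2f),
            new Block(128, 255, 0.4f),
            new Block(256, 383, 2.7f));
        // With a minimum competitive score of 2.0, only the third block matters.
        System.out.println(firstCompetitiveBlock(blocks, 2.0f));
    }
}
```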

Currently only TermScorer leverages this, but the benefit is that we could add 
these APIs to Scorer as well in a follow-up issue so that WANDScorer and 
ConjunctionScorer could leverage them. I built a prototype already to make sure 
that there is an actual speedup for some queries, but I'm leaving it to a 
follow-up issue as indexing impacts is already challenging on its own. One 
thing it made me change, though, is that the new patch also stores all 
impacts on the first level, which is written every 128 documents. This seemed 
important for conjunctions, since with a conjunction the maximum score on a 
given block is not always reached, unlike with term queries, which match all 
documents in the block. It makes it more important to have good bounds on the 
score with conjunctions than it is with term queries. The disk overhead is still 
acceptable to me: the wikimedium10 index is only 1.4% larger overall, and 
postings alone (the .doc file) are only 3.1% larger.

Here are the benchmark results:

{noformat}
Task                     QPS baseline  StdDev   QPS patch  StdDev    Pct diff
AndHighLow                    1128.91  (3.5%)      875.48  (2.3%)    -22.4% ( -27% -  -17%)
AndHighMed                     409.67  (2.0%)      331.98  (1.7%)    -19.0% ( -22% -  -15%)
OrHighMed                      264.99  (3.5%)      229.15  (3.0%)    -13.5% ( -19% -   -7%)
OrHighLow                      111.47  (4.5%)       98.00  (3.2%)    -12.1% ( -18% -   -4%)
OrHighHigh                      34.88  (4.2%)       31.69  (4.0%)     -9.1% ( -16% -   -1%)
OrNotHighLow                  1373.74  (5.2%)     1291.72  (4.1%)     -6.0% ( -14% -    3%)
LowPhrase                       78.14  (1.6%)       75.28  (1.2%)     -3.7% (  -6% -    0%)
MedPhrase                       47.49  (1.6%)       45.92  (1.2%)     -3.3% (  -5% -    0%)
LowSloppyPhrase                208.43  (2.8%)      202.37  (2.7%)     -2.9% (  -8% -    2%)
Fuzzy1                         300.99  (7.7%)      292.78  (8.0%)     -2.7% ( -17% -   13%)
LowSpanNear                     62.73  (1.4%)       61.09  (1.3%)     -2.6% (  -5% -    0%)
Fuzzy2                         188.37  (7.9%)      184.16  (6.7%)     -2.2% ( -15% -   13%)
MedSpanNear                     57.41  (1.8%)       56.17  (1.5%)     -2.2% (  -5% -    1%)
MedSloppyPhrase                 23.21  (2.3%)       22.73  (2.3%)     -2.1% (  -6% -    2%)
HighPhrase                      48.75  (3.2%)       47.80  (3.6%)     -1.9% (  -8% -    4%)
HighSpanNear                    40.04  (2.9%)       39.35  (2.7%)     -1.7% (  -7% -    4%)
HighTermMonthSort              228.21  (8.4%)      224.66  (7.9%)     -1.6% ( -16% -   16%)
HighSloppyPhrase                25.96  (2.8%)       25.61  (3.0%)     -1.4% (  -6% -    4%)
Respell                        284.85  (3.7%)      282.42  (4.0%)     -0.9% (  -8% -    7%)
IntNRQ                          18.87  (5.3%)       18.86  (6.8%)     -0.1% ( -11% -   12%)
Wildcard                        85.50  (5.0%)       86.79  (4.0%)      1.5% (  -7% -   11%)
Prefix3                        137.41  (6.5%)      141.61  (4.9%)      3.1% (  -7% -   15%)
HighTermDayOfYearSort          116.58  (6.3%)      121.38  (7.2%)      4.1% (  -8% -   18%)
AndHighHigh                     37.64  (1.5%)      118.12  (6.7%)    213.8% ( 202% -  225%)
LowTerm                        909.13  (2.2%)     3379.38 (11.2%)    271.7% ( 252% -  291%)
OrNotHighMed                   196.21  (1.7%)     1509.92 (28.9%)    669.6% ( 627% -  712%)
MedTerm                        305.82  (1.7%)     2897.01 (42.5%)    847.3% ( 789% -  907%)
HighTerm                       108.94  (1.7%)     1191.54 (61.3%)    993.8% ( 915% - 1075%)
OrHighNotMed                    81.83  (0.5%)     1082.94 (63.2%)   1223.5% (1153% -

[jira] [Updated] (SOLR-11848) User guide update on working with XML updates

2018-01-12 Thread Dariusz Wojtas (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dariusz Wojtas updated SOLR-11848:
--
Component/s: documentation

> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Priority: Minor
>
> I have added some additional information to the user guide in repository:
>   https://github.com/apache/lucene-solr/pull/303
> This covers info about:
> # grouping XML commands with the  element
> # a more effective way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl 
> out-of-memory errors for large files being uploaded






[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk1.8.0_144) - Build # 127 - Unstable!

2018-01-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/127/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([5F2D2FCDF771B9A5:4B657498D47604BB]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12814 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-11849) Core recovery fails to complete if warmup query fails due to exceeding timeAllowed

2018-01-12 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-11849:
---

Related to SOLR-4408 but not the same, I think.

> Core recovery fails to complete if warmup query fails due to exceeding 
> timeAllowed
> --
>
> Key: SOLR-11849
> URL: https://issues.apache.org/jira/browse/SOLR-11849
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.2
>Reporter: Ere Maijala
>Priority: Minor
>
> Core init or recovery never completes if a warmup query fails to complete 
> when timeAllowed is specified as a default parameter in requestHandler 
> settings and the warmup query execution exceeds it. In this case an exception 
> is logged but the recovery never completes. It's of course possible to 
> include another value for timeAllowed in the warmup query, but I believe this 
> could be handled in a more robust manner, such as ignoring timeAllowed for 
> warmup or bringing the core online regardless of the timeout.






[jira] [Created] (SOLR-11849) Core recovery fails to complete if warmup query fails due to exceeding timeAllowed

2018-01-12 Thread Ere Maijala (JIRA)
Ere Maijala created SOLR-11849:
--

 Summary: Core recovery fails to complete if warmup query fails due 
to exceeding timeAllowed
 Key: SOLR-11849
 URL: https://issues.apache.org/jira/browse/SOLR-11849
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Affects Versions: 7.2
Reporter: Ere Maijala
Priority: Minor


Core init or recovery never completes if a warmup query fails to complete when 
timeAllowed is specified as a default parameter in requestHandler settings and 
the warmup query execution exceeds it. In this case an exception is logged but 
the recovery never completes. It's of course possible to include another value 
for timeAllowed in the warmup query, but I believe this could be handled in a 
more robust manner, such as ignoring timeAllowed for warmup or bringing the core 
online regardless of the timeout.
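A minimal solrconfig.xml sketch of the situation and a per-query workaround (handler name, limit, and warmup query are hypothetical; a timeAllowed of 0 or less disables the limit, so the warmup query can opt out explicitly):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- applies to every request through this handler, including warmup -->
    <int name="timeAllowed">2000</int>
  </lst>
</requestHandler>

<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <!-- explicit override so the warmup query cannot hit the limit -->
      <int name="timeAllowed">-1</int>
    </lst>
  </arr>
</listener>
```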







[jira] [Commented] (SOLR-4722) Highlighter which generates a list of query term position(s) for each item in a list of documents, or returns null if highlighting is disabled.

2018-01-12 Thread Tamer Boz (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323833#comment-16323833
 ] 

Tamer Boz commented on SOLR-4722:
-

I want to use the PositionsSolrHighlighter with Solr version 6.6.2. I managed 
to compile PositionsSolrHighlighter.java against version 6.6.2 without errors.
After that I put the snippet below in solrconfig.xml; I also used the settings 
termVectors="true" termPositions="true" termOffsets="true".
Unfortunately I do not get any results about each term's text.
Can someone explain all the steps that are necessary to get results?


 



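For reference, the term-vector flags mentioned above belong on the field definition in schema.xml; a minimal sketch (field and type names are hypothetical):

```xml
<!-- term vectors with positions and offsets must be stored for this
     highlighter, per the SOLR-4722 description below -->
<field name="content" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```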
> Highlighter which generates a list of query term position(s) for each item in 
> a list of documents, or returns null if highlighting is disabled.
> ---
>
> Key: SOLR-4722
> URL: https://issues.apache.org/jira/browse/SOLR-4722
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Affects Versions: 4.3, 6.0
>Reporter: Tricia Jenkins
>Priority: Minor
> Attachments: PositionsSolrHighlighter.java, SOLR-4722.patch, 
> SOLR-4722.patch, solr-positionshighlighter.jar
>
>
> As an alternative to returning snippets, this highlighter provides the (term) 
> position for query matches.  One usecase for this is to reconcile the term 
> position from the Solr index with 'word' coordinates provided by an OCR 
> process.  In this way we are able to 'highlight' an image, like a page from a 
> book or an article from a newspaper, in the locations that match the user's 
> query.
> This is based on the FastVectorHighlighter and requires that termVectors, 
> termOffsets and termPositions be stored.






[jira] [Created] (SOLR-11848) User guide update on working with XML updates

2018-01-12 Thread Dariusz Wojtas (JIRA)
Dariusz Wojtas created SOLR-11848:
-

 Summary: User guide update on working with XML updates
 Key: SOLR-11848
 URL: https://issues.apache.org/jira/browse/SOLR-11848
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2
Reporter: Dariusz Wojtas
Priority: Minor


I have added some additional information to the user guide in the repository:
  https://github.com/apache/lucene-solr/pull/303

It covers:
# grouping XML commands with the  element
# a more efficient way of working with curl for large files: the original 
approach described in the guide, using _--data-binary_, results in curl 
out-of-memory errors when large files are uploaded
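A sketch of the two curl approaches (collection name, file name, and URL are hypothetical): `--data-binary @file` buffers the entire file in memory before sending, while `-T` (`--upload-file`) streams it from disk, so memory use stays flat regardless of file size.

```shell
# Approach described in the guide: curl reads the whole file into memory
# before POSTing it, which can fail for very large files.
curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
     -H 'Content-Type: text/xml' --data-binary @big-update.xml

# Streaming alternative: -T (--upload-file) sends the file without buffering
# it; -X POST keeps the method Solr's update handler expects (-T defaults
# to PUT).
curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
     -H 'Content-Type: text/xml' -X POST -T big-update.xml
```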





