[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1201 - Still Unstable

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1201/

7 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7109AA75AE1BFC94]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7109AA75AE1BFC94]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([777195C9C3D3929C:A581D92A9D7C34AE]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:305)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15794381#comment-15794381
 ] 

ASF subversion and git services commented on SOLR-9906:
---

Commit 3988532d26a50b1f3cf51e1d0009a0754cfd6b57 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3988532 ]

SOLR-9906-Use better check to validate if node recovered via PeerSync or 
Replication


> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test, I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs. Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist.
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change.
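The file-based check described above can be sketched as a tiny self-contained helper. Note this is an illustrative stand-in (the class and method names are hypothetical, not from the attached patch); it only demonstrates the idea that a PeerSync-only recovery should leave no replication.properties behind:

```java
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical helper illustrating the file-based recovery check described
 * above: a replica that recovered purely via PeerSync should never have
 * written a replication.properties file into its data directory.
 */
public class RecoveryCheck {
    /** Returns true if the core's data dir suggests PeerSync-only recovery. */
    static boolean recoveredViaPeerSync(Path coreDataDir) {
        return !Files.exists(coreDataDir.resolve("replication.properties"));
    }

    public static void main(String[] args) throws Exception {
        Path dataDir = Files.createTempDirectory("core-data");
        // Fresh data dir: no replication.properties, so PeerSync is assumed.
        System.out.println(recoveredViaPeerSync(dataDir));  // true
        // Simulate a replication recovery writing the properties file.
        Files.createFile(dataDir.resolve("replication.properties"));
        System.out.println(recoveredViaPeerSync(dataDir));  // false
    }
}
```

A test would assert the file's absence (or unchanged contents, for the fresh-start case) instead of counting replication-handler requests.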



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18685 - Failure!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18685/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 6806 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/32bit/jdk1.8.0_112/jre/bin/java -server -XX:+UseG1GC 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=2EA6462D79AD03A4 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=7.0.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J2
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager -classpath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/classes/test:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/classes/java:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/codecs/classes/java:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/classes/java:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/queries/lucene-queries-7.0.0-SNAPSHOT.jar:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/test-framework/lib/junit-4.10.jar:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.4.0.jar:/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/classes/java:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/home/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/t
ools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/home/jenkins/tools/java/32bit/jdk1.8.0_112/lib/tools.jar:/home/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.4.0.jar
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp/junit4-J2-20170103_065531_673.events
 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6330 - Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6330/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([521434FFBEAD03FB:3AAB01D56E371117]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 592 - Still Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/592/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor140.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:907)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:799)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor140.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:907)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:799)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([68C1A245B5C2960B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2576 - Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2576/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([7FE28DB4CFDE2920:86AF1E1BF3AB64AA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:280)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-998) BooleanQuery.setMaxClauseCount(int) is static

2017-01-02 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793976#comment-15793976
 ] 

Trejkaz commented on LUCENE-998:


Won't fix? I was about to file this one. 

I don't think this setting should be static, because we have another library on 
our classpath which uses Lucene, and we hit a situation where we were calling 
it but they were setting it back to something else. So now we have to call it 
after they call it, but are you fricking serious? That is not a solution, and 
people who introduce mutable static fields should feel bad for what they have 
done.
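The hazard being described here reproduces with any process-wide mutable static setting. A self-contained toy (modeled on, but not actually, Lucene's BooleanQuery) shows two libraries on one classpath silently clobbering each other:

```java
/**
 * Toy reproduction of the hazard described above: a process-wide mutable
 * static setting (modeled on BooleanQuery.setMaxClauseCount) that two
 * independent libraries on the same classpath silently clobber.
 * This is an illustrative stand-in, not Lucene's actual class.
 */
public class StaticClauseLimit {
    private static int maxClauseCount = 1024;  // process-wide default

    static void setMaxClauseCount(int n) { maxClauseCount = n; }
    static int getMaxClauseCount() { return maxClauseCount; }

    public static void main(String[] args) {
        setMaxClauseCount(65536);  // our code raises the limit...
        setMaxClauseCount(1024);   // ...then another library resets it
        // Our queries now run against a lower limit we never asked for.
        System.out.println(getMaxClauseCount());  // 1024, not 65536
    }
}
```

Because the field is static, the last caller wins for the whole JVM, which is exactly why per-searcher (instance) configuration is the safer design.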


> BooleanQuery.setMaxClauseCount(int) is static
> -
>
> Key: LUCENE-998
> URL: https://issues.apache.org/jira/browse/LUCENE-998
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 2.1
>Reporter: Tim Lebedkov
> Attachments: lucene-998.patch
>
>
> BooleanQuery.setMaxClauseCount(int) is static. It does not allow searching 
> multiple indices from different threads using different settings. This 
> setting should probably be moved into the IndexSearcher.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9835) Create another replication mode for SolrCloud

2017-01-02 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9835:
---
Attachment: SOLR-9835.patch

Updated patch for this issue; the changes are pretty solid now. 

The main difference between {{onlyLeaderIndexes}} mode and the current mode is 
that in {{onlyLeaderIndexes}} mode we can serve stale data. So I modified 
TestInjection to make replicas wait for the indexFetcher to finish upon 
receiving a commit request, so we can reuse the existing SolrCloud tests for 
{{onlyLeaderIndexes}} mode. The failing tests (5 of 206 SolrCloud tests) are:
- CdcrVersionReplicationTest, ShardSplitTest, SyncSliceTest: we can notify 
users that {{onlyLeaderIndexes}} does not support CDCR, ShardSplit, or 
SyncSlice yet.
- LeaderFailureAfterFreshStartTest, PeerSyncReplicationTest: we don't support 
PeerSync yet.
I think all these tests can be ignored for this issue; we can tackle these 
failures in other tickets.

I also ran the Jepsen tests for this mode ( 
https://lucidworks.com/blog/2014/12/10/call-maybe-solrcloud-jepsen-flaky-networks/
 ). The tests passed, so I think we can be pretty sure that the new mode is 
consistent and partition tolerant.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine 
> replication: replicas start in the same initial state, and each input is 
> distributed across replicas so that all replicas end up in the same next 
> state. But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its 
> downtime, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in their UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
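The state-transfer flow described above can be sketched in a few lines. This is a rough, self-contained illustration with hypothetical names (it is not the actual SOLR-9835 patch): only the leader applies updates to its index, while replicas append to their update log and later poll the leader's segments:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Rough sketch of the "state transfer" mode described above, using
 * hypothetical names (not the actual SOLR-9835 patch): the leader applies
 * updates to its index writer, while replicas only append the raw update
 * to their update log and later poll the leader's segments.
 */
public class StateTransferSketch {
    static class Replica {
        final boolean isLeader;
        final List<String> indexedDocs = new ArrayList<>();  // stands in for the IndexWriter
        final List<String> updateLog = new ArrayList<>();    // stands in for the UpdateLog

        Replica(boolean isLeader) { this.isLeader = isLeader; }

        void receiveUpdate(String doc) {
            updateLog.add(doc);        // every replica logs the update
            if (isLeader) {
                indexedDocs.add(doc);  // only the leader applies it to the index
            }
        }

        /** Replicas periodically copy the leader's committed segments. */
        void pollSegmentsFrom(Replica leader) {
            indexedDocs.clear();
            indexedDocs.addAll(leader.indexedDocs);
        }
    }

    public static void main(String[] args) {
        Replica leader = new Replica(true);
        Replica replica = new Replica(false);
        leader.receiveUpdate("doc1");
        replica.receiveUpdate("doc1");
        // Before polling, the replica serves stale (here, empty) data.
        System.out.println(replica.indexedDocs.size());  // 0
        replica.pollSegmentsFrom(leader);
        System.out.println(replica.indexedDocs.size());  // 1
    }
}
```

The gap between receiveUpdate and pollSegmentsFrom is exactly the staleness window the comment above mentions, and why TestInjection makes replicas wait for the fetch before acknowledging a commit.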



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7616) QueryNode#toQueryString says it produces a string in the syntax understood by "the query parser", but cannot possibly know how

2017-01-02 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793923#comment-15793923
 ] 

Trejkaz commented on LUCENE-7616:
-

As an additional nasty point, this method is sometimes called from places like 
BooleanQueryNodeBuilder, where its output goes into an error message shown to 
the user. So that error message also displays the wrong syntax; and it isn't 
immediately clear how a QueryNodeBuilder could know what syntax was used to 
create the QueryNode it has been passed...

> QueryNode#toQueryString says it produces a string in the syntax understood by 
> "the query parser", but cannot possibly know how
> --
>
> Key: LUCENE-7616
> URL: https://issues.apache.org/jira/browse/LUCENE-7616
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 6.3
>Reporter: Trejkaz
>
> (Not an implementation "bug" so much as a design error, but working within 
> the confines of JIRA here.)
> The "flexible" query parser framework allows custom query syntaxes to be 
> implemented on top of the existing query node, processor and builder classes.
> Now, QueryNode has a toQueryString method which ostensibly converts the node 
> back into a string appropriate for passing back through the parser. However, 
> in practice, this method is implemented to return a syntax only appropriate 
> for passing back to StandardQueryParser, *not* the parser you got the node 
> from. The node itself has no idea what parser it came from, so it makes sense 
> that this method could never work as currently designed.
> I don't really know what the right way to fix this is.
> Option A: Make QueryNode aware of which parser it came from, and add methods 
> into the parser to format queries back into a string, so that this method can 
> be implemented correctly. Sounds fine, except programmatically creating 
> QueryNode objects directly becomes a hassle.
> Option B: Deprecate toQueryString and introduce a new SyntaxFormatter 
> interface which converts QueryNode to CharSequence and provide an appropriate 
> implementation for each existing SyntaxParser. Seems sensible and the most 
> flexible option, but requires a lot of tiny classes to be implemented.
> Are there any other options?






[jira] [Updated] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-02 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9906:
-
Attachment: SOLR-9906.patch

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test, I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the 
> contents of {{replication.properties}} should not change 
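The proposed check could be expressed roughly like the code below. `RecoveryCheck` and its method are hypothetical test helpers, not an existing Solr API; the premise (that IndexFetcher writes `replication.properties` only when a full replication runs) is taken from the issue text:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical test helper: replication.properties is written only when a full
// index replication has run, so its absence suggests recovery via PeerSync.
final class RecoveryCheck {
    static boolean recoveredViaPeerSync(Path coreDataDir) {
        return Files.notExists(coreDataDir.resolve("replication.properties"));
    }
}
```

A test asserting `recoveredViaPeerSync(dataDir)` would then replace counting requests to the leader's replication handler.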






[jira] [Created] (LUCENE-7616) QueryNode#toQueryString says it produces a string in the syntax understood by "the query parser", but cannot possibly know how

2017-01-02 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-7616:
---

 Summary: QueryNode#toQueryString says it produces a string in the 
syntax understood by "the query parser", but cannot possibly know how
 Key: LUCENE-7616
 URL: https://issues.apache.org/jira/browse/LUCENE-7616
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/queryparser
Affects Versions: 6.3
Reporter: Trejkaz


(Not an implementation "bug" so much as a design error, but working within the 
confines of JIRA here.)

The "flexible" query parser framework allows custom query syntaxes to be 
implemented on top of the existing query node, processor and builder classes.

Now, QueryNode has a toQueryString method which ostensibly converts the node 
back into a string appropriate for passing back through the parser. However, in 
practice, this method is implemented to return a syntax only appropriate for 
passing back to StandardQueryParser, *not* the parser you got the node from. 
The node itself has no idea what parser it came from, so it makes sense that 
this method could never work as currently designed.

I don't really know what the right way to fix this is.

Option A: Make QueryNode aware of which parser it came from, and add methods 
into the parser to format queries back into a string, so that this method can 
be implemented correctly. Sounds fine, except programmatically creating 
QueryNode objects directly becomes a hassle.

Option B: Deprecate toQueryString and introduce a new SyntaxFormatter interface 
which converts QueryNode to CharSequence and provide an appropriate 
implementation for each existing SyntaxParser. Seems sensible and the most 
flexible option, but requires a lot of tiny classes to be implemented.

Are there any other options?
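Option B might look roughly like the following sketch. `SyntaxFormatter` is the hypothetical interface proposed above, and the local `QueryNode` here is just a stand-in for `org.apache.lucene.queryparser.flexible.core.nodes.QueryNode`:

```java
// Hypothetical sketch of Option B: formatting moves out of QueryNode into a
// per-syntax formatter, so each SyntaxParser can ship a matching implementation.
interface QueryNode { /* stand-in for the flexible query parser's QueryNode */ }

interface SyntaxFormatter {
    // Render the node in this formatter's concrete syntax, suitable for
    // feeding back to the matching SyntaxParser.
    CharSequence format(QueryNode node);
}

// Each existing parser would pair with its own formatter, e.g.:
final class StandardSyntaxFormatter implements SyntaxFormatter {
    @Override
    public CharSequence format(QueryNode node) {
        // Would delegate to the standard-syntax rendering that
        // QueryNode#toQueryString effectively hard-codes today.
        return "";
    }
}
```

This keeps QueryNode free of any knowledge of its originating parser, at the cost of one small formatter class per syntax.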







[jira] [Updated] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-02 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9906:

Attachment: SOLR-9906.patch

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Priority: Minor
> Attachments: SOLR-9906.patch, SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test, I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the 
> contents of {{replication.properties}} should not change 






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 664 - Failure

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/664/

No tests ran.

Build Log:
[...truncated 41962 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (33.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.5 MB in 0.03 sec (1213.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.0 MB in 0.05 sec (1250.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.8 MB in 0.06 sec (1189.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6164 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6164 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (268.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.0 MB in 0.04 sec (1079.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 140.4 MB in 0.13 sec (1095.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 150.0 MB in 0.13 sec (.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=11239). Happy searching!
 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_112) - Build # 663 - Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/663/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([1B01B0E4321BDD18:EC725EBCF4F372FE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1331)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11814 lines...]
   [junit4] Suite: 

Re: MemoryIndex and query not matching a Long value

2017-01-02 Thread Dennis Gove
Thank you. This isn't a situation where I have access to a schema so I
don't think I can make use of the FieldType methods. I'm implementing a
Match stream as part of the streaming api (our discussion in
https://issues.apache.org/jira/browse/SOLR-8530).

An arbitrary tuple can come in with calculated values so the types of the
values can't necessarily be determined from a schema. Due to this, I'm
taking all fields in the tuple and constructing a document (see
https://github.com/dennisgove/lucene-solr/blob/SolrMatch/solr/core/src/java/org/apache/solr/handler/LuceneMatchStream.java#L195
).

(side note: the working name is LuceneMatchStream because atm it only
accepts Lucene syntax for the queries)

- Dennis


On Mon, Jan 2, 2017 at 4:45 PM, David Smiley 
wrote:

> LongPoint uses the Points API.  If you are using a Solr QParserPlugin,
> it's not going to use that API. Assuming you're in Solr land, I think you
> should be using utility methods on FieldType (lookup from schema) which can
> create the Field instances to be put on the document.
>
> ~ David
>
> > On Jan 2, 2017, at 4:33 PM, Dennis Gove  wrote:
> >
> > I'm messing around with a MemoryIndex and am indexing a field of type
> Long. From everything I can tell, this should be added into a Document as
> type org.apache.lucene.document.LongPoint. However, when I try to match
> it with a query of form "a_i:1" it doesn't match.
> >
> > For example, full document is
> > {
> >   a_s:"hello1",
> >   a_i:1
> > }
> > with Query object created from
> > "a_i:1"
> >
> > the return in call to
> > index.search(query)
> >
> > is 0 (ie, a non-match)
> >
> > The only thing I can think of is that the document field should actually
> be something else, or that the creation of a Query object from "a_i:1"
> isn't going to match a LongPoint value.
> >
> > Thanks!
>
>
>
>


Re: MemoryIndex and query not matching a Long value

2017-01-02 Thread David Smiley
LongPoint uses the Points API.  If you are using a Solr QParserPlugin, it's not 
going to use that API. Assuming you're in Solr land, I think you should be 
using utility methods on FieldType (lookup from schema) which can create the 
Field instances to be put on the document.

~ David

> On Jan 2, 2017, at 4:33 PM, Dennis Gove  wrote:
> 
> I'm messing around with a MemoryIndex and am indexing a field of type Long. 
> From everything I can tell, this should be added into a Document as type 
> org.apache.lucene.document.LongPoint. However, when I try to match it with a 
> query of form "a_i:1" it doesn't match.
> 
> For example, full document is
> {
>   a_s:"hello1",
>   a_i:1
> }
> with Query object created from
> "a_i:1"
> 
> the return in call to 
> index.search(query)
> 
> is 0 (ie, a non-match)
> 
> The only thing I can think of is that the document field should actually be 
> something else, or that the creation of a Query object from "a_i:1" isn't 
> going to match a LongPoint value.
> 
> Thanks!





MemoryIndex and query not matching a Long value

2017-01-02 Thread Dennis Gove
I'm messing around with a MemoryIndex and am indexing a field of type Long.
From everything I can tell, this should be added into a Document as type
org.apache.lucene.document.LongPoint. However, when I try to match it with
a query of form "a_i:1" it doesn't match.

For example, full document is
{
  a_s:"hello1",
  a_i:1
}
with Query object created from
"a_i:1"

the return in call to
index.search(query)

is 0 (ie, a non-match)

The only thing I can think of is that the document field should actually be
something else, or that the creation of a Query object from "a_i:1" isn't
going to match a LongPoint value.

Thanks!
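The likely cause is that classic query syntax such as "a_i:1" parses to a TermQuery, which cannot match point-encoded values; point fields need queries built via the Points API. A minimal sketch, assuming Lucene 6.x on the classpath (not run here):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.Query;

public class MemoryIndexLongPointSketch {
    public static void main(String[] args) {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        MemoryIndex index = new MemoryIndex();
        index.addField(new StringField("a_s", "hello1", Field.Store.NO), analyzer);
        index.addField(new LongPoint("a_i", 1L), analyzer);

        // "a_i:1" through the classic parser becomes a TermQuery, which scores 0
        // against a point field; build an exact-value point query instead.
        Query q = LongPoint.newExactQuery("a_i", 1L);
        float score = index.search(q); // a score > 0 indicates a match
        System.out.println(score > 0.0f);
    }
}
```

If the query string must come from user input, the query-building side has to know the field is a point field and route it through `LongPoint.newExactQuery`/`newRangeQuery` rather than the classic parser.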


[JENKINS] Lucene-Solr-Tests-6.x - Build # 643 - Still Unstable

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/643/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.AsyncCallRequestStatusResponseTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.AsyncCallRequestStatusResponseTest: 1) 
Thread[id=10045, 
name=OverseerHdfsCoreFailoverThread-97215309130235910-127.0.0.1:37421_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.AsyncCallRequestStatusResponseTest: 
   1) Thread[id=10045, 
name=OverseerHdfsCoreFailoverThread-97215309130235910-127.0.0.1:37421_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([EEB85D2319B1BB38]:0)




Build Log:
[...truncated 12239 lines...]
   [junit4] Suite: org.apache.solr.cloud.AsyncCallRequestStatusResponseTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J1/temp/solr.cloud.AsyncCallRequestStatusResponseTest_EEB85D2319B1BB38-001/init-core-data-001
   [junit4]   2> 1074440 INFO  
(SUITE-AsyncCallRequestStatusResponseTest-seed#[EEB85D2319B1BB38]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1074440 INFO  
(SUITE-AsyncCallRequestStatusResponseTest-seed#[EEB85D2319B1BB38]-worker) [
] o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J1/temp/solr.cloud.AsyncCallRequestStatusResponseTest_EEB85D2319B1BB38-001/tempDir-001
   [junit4]   2> 1074441 INFO  
(SUITE-AsyncCallRequestStatusResponseTest-seed#[EEB85D2319B1BB38]-worker) [
] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1074441 INFO  (Thread-2837) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1074441 INFO  (Thread-2837) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1074541 INFO  
(SUITE-AsyncCallRequestStatusResponseTest-seed#[EEB85D2319B1BB38]-worker) [
] o.a.s.c.ZkTestServer start zk server on port:53864
   [junit4]   2> 1074549 INFO  (jetty-launcher-1424-thread-1) [] 
o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1074550 INFO  (jetty-launcher-1424-thread-2) [] 
o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1074551 INFO  (jetty-launcher-1424-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1b7d929d{/solr,null,AVAILABLE}
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@7d2b0764{SSL,[ssl, 
http/1.1]}{127.0.0.1:52510}
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.e.j.s.Server Started @1078641ms
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=52510}
   [junit4]   2> 1074553 ERROR (jetty-launcher-1424-thread-1) [] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.4.0
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1074553 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-01-02T20:11:35.775Z
   [junit4]   2> 1074555 INFO  (jetty-launcher-1424-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@53c6a850{/solr,null,AVAILABLE}
   [junit4]   2> 1074556 INFO  (jetty-launcher-1424-thread-1) [] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1074559 INFO  (jetty-launcher-1424-thread-2) [] 
o.e.j.s.AbstractConnector Started ServerConnector@32d06e95{SSL,[ssl, 
http/1.1]}{127.0.0.1:37421}
   [junit4]   2> 1074559 INFO  (jetty-launcher-1424-thread-2) [] 
o.e.j.s.Server Started @1078646ms
   [junit4]   2> 1074559 INFO  (jetty-launcher-1424-thread-2) [] 

[jira] [Updated] (SOLR-9896) Instrument and collect metrics from thread pools

2017-01-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9896:

Attachment: SOLR-9896.patch

Patch which instruments the following thread pools:
# UpdateShardHandler's updateExecutor and recoveryExecutor at the path 
{{solr.http/updateShardHandler.threadPool.updateExecutor}} and 
{{solr.http/updateShardHandler.threadPool.recoveryExecutor}}
# HttpShardHandler's httpShardExecutor at 
{{solr.http/httpShardHandler.threadPool.httpShardExecutor}}
# CoreAdminHandler's parallelCoreAdminExecutor at 
{{solr.node/QUERYHANDLER./admin/cores.threadPool.parallelCoreAdminExecutor}}
# CoreContainer's coreContainerWorkExecutor and coreLoadExecutor at 
{{solr.node/threadPool.coreContainerWorkExecutor}} and 
{{solr.node/coreLoadExecutor}}

There are still other thread pools, in IndexFetcher and the CDCR components, 
that aren't instrumented, but this is a good start. We can add instrumentation 
for them later if people find it interesting.
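For reference, wrapping a pool with metrics-core's InstrumentedExecutorService looks roughly like this. This is a generic sketch, not code from the attached patch, and the registry name "updateExecutor" is illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.codahale.metrics.InstrumentedExecutorService;
import com.codahale.metrics.MetricRegistry;

public class InstrumentedPoolSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        ExecutorService raw = Executors.newFixedThreadPool(4);
        // The wrapper records submitted/running/completed task counts and task
        // durations under metric names prefixed with "updateExecutor".
        ExecutorService updateExecutor =
            new InstrumentedExecutorService(raw, registry, "updateExecutor");
        updateExecutor.submit(() -> {});
        updateExecutor.shutdown();
    }
}
```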

> Instrument and collect metrics from thread pools
> 
>
> Key: SOLR-9896
> URL: https://issues.apache.org/jira/browse/SOLR-9896
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9896.patch
>
>
> The metrics-core library has an InstrumentedExecutorService which collects 
> stats on submitted, running, and completed tasks and their durations. This 
> issue will expose such stats for all important thread pools in Solr.






[jira] [Updated] (SOLR-9854) Collect metrics for index merges and index store IO

2017-01-02 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9854:

Attachment: SOLR-9854.patch

Current patch. I think this is ready.

> Collect metrics for index merges and index store IO
> ---
>
> Key: SOLR-9854
> URL: https://issues.apache.org/jira/browse/SOLR-9854
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-9854.patch, SOLR-9854.patch
>
>
> Using API for metrics management developed in SOLR-4735 we should also start 
> collecting metrics for major aspects of {{IndexWriter}} operation, such as 
> read / write IO rates, number of minor and major merges and IO during these 
> operations, etc.
> This will provide a better insight into resource consumption and load at the 
> IO level.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 591 - Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/591/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.core.snapshots.TestSolrCloudSnapshots

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:61723 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:61723 within 3 ms
at __randomizedtesting.SeedInfo.seed([3AADE9C94FAAFC73]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:98)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:269)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:263)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:188)
at 
org.apache.solr.core.snapshots.TestSolrCloudSnapshots.setupClass(TestSolrCloudSnapshots.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:61723 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:233)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)
... 31 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.core.snapshots.TestSolrCloudSnapshots

Error Message:
51 threads leaked from SUITE scope at 
org.apache.solr.core.snapshots.TestSolrCloudSnapshots: 1) Thread[id=24457, 
name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudSnapshots] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:187)
 at java.lang.Thread.run(Thread.java:745)2) Thread[id=24407, 
name=qtp591018423-24407, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudSnapshots] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 

[jira] [Resolved] (SOLR-9154) Config API does not work when adding a component with DirectSolrSpellChecker

2017-01-02 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-9154.

   Resolution: Fixed
Fix Version/s: (was: 6.2)
   6.4

> Config API does not work when adding a component with DirectSolrSpellChecker
> 
>
> Key: SOLR-9154
> URL: https://issues.apache.org/jira/browse/SOLR-9154
> Project: Solr
>  Issue Type: Bug
>  Components: config-api, spellchecker
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9154.patch, SOLR-9154.patch
>
>
> When trying to add a DirectSolrSpellchecker using the Config API (JSON), 
> there seems to be a loss of information w.r.t. the param types. The accuracy 
> field, in this specific case only, needs to be defined as a float. While this 
> is possible when editing solrconfig.xml directly, the field type (float) 
> cannot be specified using JSON. 
> Here are the steps to reproduce this issue:
> {code}
> #Bootstrapping
> bin/solr start -c
> bin/solr create -c foo
> bin/post -c foo example/exampledocs/books.csv
> #Add spell checker - This would hang and the logs contain recurring 
> exceptions as mentioned below
> curl http://localhost:8983/solr/foo/config -H 'Content-type:application/json' 
> -d '{"update-searchcomponent": {"name":"spellcheck",   
> "class":"solr.SpellCheckComponent",   "spellchecker":[ { 
> "name":"text_index_dictionary", "field":"text", 
> "classname":"solr.DirectSolrSpellChecker", 
> "distanceMeasure":"org.apache.lucene.search.spell.LevensteinDistance", 
> "accuracy":0.5, "maxEdits":2, "minPrefix":1, "maxInspections":5, 
> "minQueryLength":4, "maxQueryFrequency":0.001, 
> "thresholdTokenFrequency":0.01}]}}'
> {code}
> Log:
> {code}
> 2016-05-24 04:08:44.371 ERROR (SolrConfigHandler-refreshconf) [c:foo s:shard1 
> r:core_node1 x:foo_shard1_replica1] o.a.s.h.SolrConfigHandler Unable to 
> refresh conf 
> org.apache.solr.common.SolrException: Unable to reload core 
> [foo_shard1_replica1]
>   at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:944)
>   at 
> org.apache.solr.core.SolrCore.lambda$getConfListener$7(SolrCore.java:2510)
>   at 
> org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:218)
> Caused by: org.apache.solr.common.SolrException: java.lang.Double cannot be 
> cast to java.lang.Float
>   at org.apache.solr.core.SolrCore.(SolrCore.java:773)
>   at org.apache.solr.core.SolrCore.reload(SolrCore.java:462)
>   at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:938)
> {code}
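The ClassCastException in the log reflects a general JSON/Java type mismatch: JSON parsers typically hand numbers back as java.lang.Double, and a boxed Double cannot be cast directly to Float. A minimal JDK-only sketch of the mismatch (illustrative only, not Solr's actual config-parsing code):

```java
public class DoubleFloatCast {
    public static void main(String[] args) {
        // JSON parsers typically produce java.lang.Double for numeric values.
        Object accuracy = Double.valueOf(0.5);

        boolean threw = false;
        try {
            // Casting a boxed Double to Float fails at runtime.
            Float bad = (Float) accuracy;
            System.out.println(bad);
        } catch (ClassCastException e) {
            threw = true;
        }
        System.out.println(threw);  // true

        // The safe conversion goes through the Number interface instead.
        float ok = ((Number) accuracy).floatValue();
        System.out.println(ok);  // 0.5
    }
}
```

Converting through Number.floatValue() instead of casting avoids the exception regardless of which boxed numeric type the parser produced.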



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9154) Config API does not work when adding a component with DirectSolrSpellChecker

2017-01-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793296#comment-15793296
 ] 

ASF subversion and git services commented on SOLR-9154:
---

Commit fb39e397dbd17fea68f5d46baf80f5af8f5b59d0 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fb39e39 ]

SOLR-9154: Fix DirectSolrSpellChecker to work when added through the Config API


> Config API does not work when adding a component with DirectSolrSpellChecker
> 
>
> Key: SOLR-9154
> URL: https://issues.apache.org/jira/browse/SOLR-9154
> Project: Solr
>  Issue Type: Bug
>  Components: config-api, spellchecker
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9154.patch, SOLR-9154.patch
>
>
> When trying to add a DirectSolrSpellchecker using the Config API (JSON), 
> there seems to be a loss of information w.r.t. the param types. The accuracy 
> field, in this specific case only, needs to be defined as a float. While this 
> is possible when editing solrconfig.xml directly, the field type (float) 
> cannot be specified using JSON. 
> Here are the steps to reproduce this issue:
> {code}
> #Bootstrapping
> bin/solr start -c
> bin/solr create -c foo
> bin/post -c foo example/exampledocs/books.csv
> #Add spell checker - This would hang and the logs contain recurring 
> exceptions as mentioned below
> curl http://localhost:8983/solr/foo/config -H 'Content-type:application/json' 
> -d '{"update-searchcomponent": {"name":"spellcheck",   
> "class":"solr.SpellCheckComponent",   "spellchecker":[ { 
> "name":"text_index_dictionary", "field":"text", 
> "classname":"solr.DirectSolrSpellChecker", 
> "distanceMeasure":"org.apache.lucene.search.spell.LevensteinDistance", 
> "accuracy":0.5, "maxEdits":2, "minPrefix":1, "maxInspections":5, 
> "minQueryLength":4, "maxQueryFrequency":0.001, 
> "thresholdTokenFrequency":0.01}]}}'
> {code}
> Log:
> {code}
> 2016-05-24 04:08:44.371 ERROR (SolrConfigHandler-refreshconf) [c:foo s:shard1 
> r:core_node1 x:foo_shard1_replica1] o.a.s.h.SolrConfigHandler Unable to 
> refresh conf 
> org.apache.solr.common.SolrException: Unable to reload core 
> [foo_shard1_replica1]
>   at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:944)
>   at 
> org.apache.solr.core.SolrCore.lambda$getConfListener$7(SolrCore.java:2510)
>   at 
> org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:218)
> Caused by: org.apache.solr.common.SolrException: java.lang.Double cannot be 
> cast to java.lang.Float
>   at org.apache.solr.core.SolrCore.(SolrCore.java:773)
>   at org.apache.solr.core.SolrCore.reload(SolrCore.java:462)
>   at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:938)
> {code}






[jira] [Commented] (SOLR-8110) Start enforcing field naming recomendations in next X.0 release?

2017-01-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793258#comment-15793258
 ] 

Erick Erickson commented on SOLR-8110:
--

[~hossman_luc...@fucit.org] Hoss: WDYT about putting this in trunk?

> Start enforcing field naming recomendations in next X.0 release?
> 
>
> Key: SOLR-8110
> URL: https://issues.apache.org/jira/browse/SOLR-8110
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-8110.patch, SOLR-8110.patch
>
>
> For a very long time now, Solr has made the following "recommendation" 
> regarding field naming conventions...
> bq. field names should consist of alphanumeric or underscore characters only 
> and not start with a digit.  This is not currently strictly enforced, but 
> other field names will not have first class support from all components and 
> back compatibility is not guaranteed.  ...
> I'm opening this issue to track discussion about if/how we should start 
> enforcing this as a rule instead (instead of just a "recommendation") in our 
> next/future X.0 (ie: major) release.
> The goals of doing so being:
> * simplify some existing code/APIs that currently use heuristics to deal with 
> lists of fields and produce strange errors when the heuristic fails (example: 
> ReturnFields.add)
> * reduce confusion/pain for new users who might start out unaware of the 
> recommended conventions and then only later encountering a situation where 
> their field names are not supported by some feature and get frustrated 
> because they have to change their schema, reindex, update index/query client 
> expectations, etc...
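The recommendation quoted above maps to a simple check if it were ever enforced; a hypothetical sketch (class and method names are invented for illustration, and any real enforcement added to Solr may well differ, e.g. by also grandfathering existing schemas):

```java
import java.util.regex.Pattern;

// Hypothetical strict check for the recommended convention: field names
// consist of alphanumeric or underscore characters only and do not
// start with a digit.
public class FieldNameCheck {
    private static final Pattern RECOMMENDED =
        Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    static boolean isRecommended(String name) {
        // matches() is implicitly anchored, so the whole name must conform.
        return RECOMMENDED.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isRecommended("my_field_1"));    // true
        System.out.println(isRecommended("1starts_digit")); // false
        System.out.println(isRecommended("has-hyphen"));    // false
    }
}
```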






[jira] [Commented] (SOLR-4983) Problematic core naming by collection create API

2017-01-02 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793252#comment-15793252
 ] 

Erick Erickson commented on SOLR-4983:
--

Ran across this looking for something else.

I think the core naming questions are taken care of.

As far as joins are concerned, between the "cross collection join" where the 
"from" collection must exist in toto on each replica and the Streaming stuff, 
can this be closed?

[~markrmil...@gmail.com] [~noble.paul] any opinions?

> Problematic core naming by collection create API 
> -
>
> Key: SOLR-4983
> URL: https://issues.apache.org/jira/browse/SOLR-4983
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Chris Toomey
>
> The SolrCloud collection create API creates cores named 
> "foo_shard_replica" when asked to create collection "foo".
> This is problematic for at least 2 reasons: 
> 1) these ugly core names show up in the core admin UI, and will vary 
> depending on which node is being used,
> 2) it prevents collections from being used in SolrCloud joins, since join 
> takes a core name as the fromIndex parameter and there's no single core name 
> for the collection.  As I've documented in 
> https://issues.apache.org/jira/browse/SOLR-4905 and 
> http://lucene.472066.n3.nabble.com/Joins-with-SolrCloud-tp4073199p4074038.html,
>  SolrCloud join does work when the inner collection (fromIndex) is not 
> sharded, assuming that collection is available and initialized at SolrCloud 
> bootstrap time.
> Could this be changed to instead use the collection name for the core name?  
> Or at least add a core-name option to the API?






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 245 - Still Unstable

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/245/

5 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B5889E1040D864B3]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B5889E1040D864B3]:0)


FAILED:  org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 
seconds
at 
__randomizedtesting.SeedInfo.seed([CC4C7E9864714E50:44184142CA8D23A8]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:184)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793146#comment-15793146
 ] 

Joel Bernstein commented on SOLR-8530:
--

We ran into the same problem when we implemented the classify() function, which 
needed access to the analyzers. We ended up placing the ClassifyStream in core: 
org.apache.solr.handler.

This means the classify() function can only be run via the /stream handler 
rather than as a standalone SolrJ client. But in scenarios where we have 
functions that require integration with Solr core classes, I think this makes 
sense. 




> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.
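The per-tuple flow of approach 1 can be sketched without Lucene on the classpath; here a plain Predicate stands in for the Lucene query applied to a per-tuple MemoryIndex (a toy analogue for illustration, not the proposed implementation, and the class name is invented):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy analogue of approach 1: wrap the underlying tuple stream and test
// each tuple as it is read, so the full stream never has to be buffered.
public class HavingStreamSketch implements Iterator<Map<String, Object>> {
    private final Iterator<Map<String, Object>> underlying;
    private final Predicate<Map<String, Object>> filter;
    private Map<String, Object> pending;

    HavingStreamSketch(Iterator<Map<String, Object>> underlying,
                       Predicate<Map<String, Object>> filter) {
        this.underlying = underlying;
        this.filter = filter;
        advance();
    }

    // Pull tuples until one matches, mirroring "score > 0 => emit".
    private void advance() {
        pending = null;
        while (underlying.hasNext()) {
            Map<String, Object> t = underlying.next();
            if (filter.test(t)) { pending = t; break; }
        }
    }

    @Override public boolean hasNext() { return pending != null; }

    @Override public Map<String, Object> next() {
        Map<String, Object> t = pending;
        advance();
        return t;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> tuples = List.of(
            Map.of("customerId", "a", "sum(cost)", 600.0),
            Map.of("customerId", "b", "sum(cost)", 100.0));
        // Stand-in for q="sum(cost):[500 TO *]"
        Iterator<Map<String, Object>> having = new HavingStreamSketch(
            tuples.iterator(), t -> ((Double) t.get("sum(cost)")) >= 500.0);
        while (having.hasNext()) {
            System.out.println(having.next().get("customerId"));
        }
    }
}
```

The MemoryIndex version would build a tiny single-document index per tuple and run the parsed query against it, but the streaming shape is the same.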






[jira] [Comment Edited] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-02 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793146#comment-15793146
 ] 

Joel Bernstein edited comment on SOLR-8530 at 1/2/17 4:42 PM:
--

We ran into the same problem when we implemented the classify() function, which 
needed access to the analyzers. We ended up placing the ClassifyStream in core: 
org.apache.solr.handler.

This means the classify() function can only be run via the /stream handler 
rather than as a standalone SolrJ client. But in scenarios where we have 
functions that require integration with Solr core classes, I think this makes 
sense. 





was (Author: joel.bernstein):
We ran into the same problem when we implemented the classify() function which 
needed access to the analyzers. We ended placing the ClassifyStream in core: 
org.apache.solr.handler.

This means the classify() function can only be run via the /stream handler 
rather then as a stand alone solrj client. But in scenarios where we have 
functions that require integration with Solr core classes I think this makes 
senses. 




> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create an instance of MemoryIndex and apply the query to it. If the 
> result of that is >0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory solr index via something like RamDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of that all the matching tuples from the query.
> There are benefits to each approach but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the solr query parameters and create a valid Lucene query 
> but I suspect that can be done using existing QParser implementations.






[jira] [Commented] (SOLR-9684) Add priority Streaming Expression

2017-01-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793128#comment-15793128
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit dc289bdacf1a5731839132d6aa019b9e23122031 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dc289bd ]

SOLR-9684: Rename schedule function to priority


> Add priority Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The priority() function wraps two streams, a high priority stream and a low 
> priority stream. The priority function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the priority function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(priority(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"
> {code}
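The described behaviour of priority() amounts to a two-stream merge that always drains the high-priority stream before consulting the low-priority one. A toy sketch under that assumption (the class name is invented; the real function wraps topic() streams and operates over batches):

```java
import java.util.Iterator;
import java.util.List;

// Toy sketch of the priority() wrapper: tuples from the high-priority
// stream are emitted first; the low-priority stream is only read once
// the high-priority stream is exhausted.
public class PrioritySketch<T> implements Iterator<T> {
    private final Iterator<T> high;
    private final Iterator<T> low;

    PrioritySketch(Iterator<T> high, Iterator<T> low) {
        this.high = high;
        this.low = low;
    }

    @Override public boolean hasNext() {
        return high.hasNext() || low.hasNext();
    }

    @Override public T next() {
        // Re-check the high-priority stream on every read.
        return high.hasNext() ? high.next() : low.next();
    }

    public static void main(String[] args) {
        Iterator<String> tasks = new PrioritySketch<>(
            List.of("high-1", "high-2").iterator(),
            List.of("low-1").iterator());
        while (tasks.hasNext()) {
            System.out.println(tasks.next());
        }
    }
}
```

An executor() wrapping such a stream would then naturally see high-priority tasks first.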






[jira] [Resolved] (SOLR-9684) Add priority Streaming Expression

2017-01-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9684.
--
Resolution: Resolved

> Add priority Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The priority() function wraps two streams, a high priority stream and a low 
> priority stream. The priority function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the priority function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(priority(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"
> {code}






[jira] [Updated] (SOLR-9684) Add priority Streaming Expression

2017-01-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Description: 
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The priority() function wraps two streams, a high priority stream and a low 
priority stream. The priority function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the priority function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(priority(topic(tasks, q="priority:high"), topic(tasks, 
q="priority:low"
{code}








  was:
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The scheduler() function wraps two streams, a high priority stream and a low 
priority stream. The scheduler function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the scheduler function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
q="priority:low"
{code}









> Add priority Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The priority() function wraps two streams, a high priority stream and a low 
> priority stream. The priority function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the priority function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(priority(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"
> {code}






[jira] [Updated] (SOLR-9684) Add priority Streaming Expression

2017-01-02 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Summary: Add priority Streaming Expression  (was: Add schedule Streaming 
Expression)

> Add priority Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2017-01-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793110#comment-15793110
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit 0999f6779a3341af072d31162a2c88cf1eb8c5d4 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0999f67 ]

SOLR-9684: Rename schedule function to priority


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"
> {code}






[jira] [Commented] (SOLR-9885) Make log management configurable

2017-01-02 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793100#comment-15793100
 ] 

Mano Kovacs commented on SOLR-9885:
---

Any review would be greatly appreciated!

> Make log management configurable
> 
>
> Key: SOLR-9885
> URL: https://issues.apache.org/jira/browse/SOLR-9885
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
> Attachments: SOLR-9885.patch, SOLR-9885.patch
>
>
> There are log rotation and log archiving steps in the solr start script that 
> fail if Solr is deployed with a custom log configuration (a different log 
> filename). They are also inconvenient when using custom log 
> rotation/management.
> https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L1464
> Proposing an environment setting, something like {{SOLR_LOG_ROTATION}} (with 
> default {{true}}), that makes the execution of those four lines configurable.






[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-02 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793092#comment-15793092
 ] 

Dennis Gove commented on SOLR-8530:
---

One problem I ran into when I was approaching the Match (or SolrMatch; I like 
David's idea about naming) implementation was that the classes needed for an 
in-memory index don't exist in the SolrJ library. This means it would create a 
dependency on something outside SolrJ. If I remember correctly, the specific 
piece I was trying to implement was the parsing of a Solr query into a 
Lucene-compatible query. This is because the in-memory index requires Lucene 
syntax, while I wanted SolrMatch to accept Solr syntax.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
>
> The goal here is to support something similar to SQL's HAVING clause where 
> one can filter documents based on data that is not available in the index. 
> For example, filter the output of a reduce() based on the calculated 
> metrics.
> {code}
> having(
>   reduce(
> search(.),
> sum(cost),
> on=customerId
>   ),
>   q="sum(cost):[500 TO *]"
> )
> {code}
> This example would return all tuples where the total spent by each distinct 
> customer is >= 500. The total spent is calculated via the sum(cost) metric in 
> the reduce stream.
> The intent is for the filters in the having(...) clause to support the full 
> query syntax of a search(...) clause. I see this being possible in one of two 
> ways. 
> 1. Use Lucene's MemoryIndex: as each tuple is read out of the underlying 
> stream, create an instance of MemoryIndex and apply the query to it. If the 
> result of that is > 0 then the tuple should be returned from the HavingStream.
> 2. Create an in-memory Solr index via something like RAMDirectory, read all 
> tuples into that in-memory index using the UpdateStream, and then stream out 
> of it all the tuples matching the query.
> There are benefits to each approach, but I think the easiest and most direct 
> one is the MemoryIndex approach. With MemoryIndex it isn't necessary to read 
> all incoming tuples before returning a single tuple. With a MemoryIndex there 
> is a need to parse the Solr query parameters and create a valid Lucene query, 
> but I suspect that can be done using existing QParser implementations.
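As a rough stdlib-only model of option 1: the HavingStream wraps an underlying stream of tuples and emits only those matching a filter. The names below are hypothetical, and the Java predicate stands in for the per-tuple query evaluation (which the real implementation would do via MemoryIndex and a parsed Lucene query):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Model of the HavingStream: filter reduced tuples by a predicate that
// stands in for q="sum(cost):[500 TO *]".
public class HavingStreamSketch {
    static List<Map<String, Double>> having(List<Map<String, Double>> tuples,
                                            Predicate<Map<String, Double>> q) {
        // In the real design, each tuple would be indexed into a fresh
        // MemoryIndex and the query applied to it; a positive score means
        // the tuple passes through.
        return tuples.stream().filter(q).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Double>> reduced = List.of(
            Map.of("customerId", 1.0, "sum(cost)", 750.0),
            Map.of("customerId", 2.0, "sum(cost)", 120.0));
        // keep customers whose total spend is >= 500
        System.out.println(having(reduced, t -> t.get("sum(cost)") >= 500.0));
    }
}
```

The key property of the MemoryIndex approach is visible even in this sketch: tuples are evaluated one at a time, so nothing needs to be buffered before the first match is emitted.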






[jira] [Commented] (SOLR-8530) Add HavingStream to Streaming API and StreamingExpressions

2017-01-02 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793050#comment-15793050
 ] 

David Smiley commented on SOLR-8530:


Nice plan Joel.

RE naming... maybe include the string "solr" in some way, e.g. "solrMatch"?  or 
"solrPredicate"?  "match" by itself seems too generic/ambiguous to me.

> Add HavingStream to Streaming API and StreamingExpressions
> --
>
> Key: SOLR-8530
> URL: https://issues.apache.org/jira/browse/SOLR-8530
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Dennis Gove
>Priority: Minor
>






Re: Welcome Jim Ferenczi as a Lucene/Solr committer

2017-01-02 Thread David Smiley
Congrats and a warm welcome Jim!
~ David

On Sun, Jan 1, 2017 at 1:47 PM jim ferenczi  wrote:

> Hi,
>
> Thanks all for this warm welcome !
>
> I am very happy to join you as a committer, especially in this period of the
> year; I received the invitation on the 24th of December and this is now
> officially the best gift I have ever received for Christmas ;).
>
> I have been in the search engine area for a decade now. I started with
> crawling and web search at Exalead, a French company specializing in
> enterprise search. Then I joined Rakuten, where I worked on an e-commerce
> platform. I started using Lucene/Solr during this time, trying to find a
> solution to handle thousands of Japanese queries with spans and ngrams per
> second. Then I joined Elastic, where I am currently working.
>
> I am based in Paris and my family comes from Hungary (sorry Martin, I am
> not Italian; the « c » makes all the difference in my name ;) ).
>
> When I am not working I try to keep my energetic baby happy, and that takes
> basically all my free time!
>
> Thanks again for the invitation and see you in JIRA/conferences/real life
> very soon!
>
>
> Jim Ferenczi
>
> 2017-01-01 18:30 GMT+01:00 Steve Rowe :
>
> Welcome Jim!
>
> --
> Steve
> www.lucidworks.com
>
> > On Jan 1, 2017, at 5:04 AM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
> >
> > I'm pleased to announce that Jim Ferenczi has accepted the Lucene
> > PMC's invitation to become a committer.
> >
> > Jim, it's tradition that you introduce yourself with a brief bio.
> >
> > Your handle "jimczi" has been added to the “lucene" LDAP group, so you
> > now have commit privileges. Please test this by adding yourself to the
> > committers section of the Who We Are page on the website:
> >  (instructions here
> > ).
> >
> > The ASF dev page also has lots of useful links: <
> http://www.apache.org/dev/>.
> >
> > Congratulations and welcome and Happy New Year,
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Created] (SOLR-9910) Allow setting of additional jetty options in bin/solr and bin/solr.cmd

2017-01-02 Thread Mano Kovacs (JIRA)
Mano Kovacs created SOLR-9910:
-

 Summary: Allow setting of additional jetty options in bin/solr and 
bin/solr.cmd
 Key: SOLR-9910
 URL: https://issues.apache.org/jira/browse/SOLR-9910
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mano Kovacs


Command line tools allow the {{-a}} option to add JVM options to the start 
command. Proposing to add a {{-j}} option for passing additional configuration 
to Jetty (the part after {{start.jar}}).

Motivation: Jetty can be configured via start.ini in the server directory. When 
running multiple Solr instances, however, each instance needs its own 
configuration and cannot share a common start.ini with the others.






Re: Welcome Jim Ferenczi as a Lucene/Solr committer

2017-01-02 Thread Tomás Fernández Löbbe
Welcome Jim!

On Mon, Jan 2, 2017 at 5:12 AM, Tommaso Teofili 
wrote:

> Welcome Jim!
>
> Tommaso
>
> On Mon, Jan 2, 2017 at 02:53, Joel Bernstein wrote:
>
>> Welcome Jim!
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Sun, Jan 1, 2017 at 2:43 PM, Uwe Schindler  wrote:
>>
>> Welcome Jim!
>>
>> Uwe
>>
>> On Jan 1, 2017 at 18:30:34 CET, Steve Rowe wrote:
>>
>> Welcome Jim!
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>  On Jan 1, 2017, at 5:04 AM, Michael McCandless  
>> wrote:
>>
>>  I'm pleased to announce that Jim Ferenczi has accepted the Lucene
>>  PMC's invitation to become a committer.
>>
>>  Jim, it's tradition that you introduce yourself with a brief bio.
>>
>>  Your handle "jimczi" has been added to the “lucene" LDAP group, so you
>>  now have commit privileges. Please test this by adding yourself to the
>>  committers section of the Who We Are page on the website:
>>   (instructions here
>>  ).
>>
>>  The ASF dev page also has lots of useful links: 
>> .
>>
>>  Congratulations and welcome and Happy New Year,
>>
>>  Mike McCandless
>>
>>  http://blog.mikemccandless.com
>>
>> --
>>
>>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>  For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>>
>> --
>>
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>> --
>> Uwe Schindler
>> Achterdiek 19, 28357 Bremen
>> https://www.thetaphi.de
>>
>>
>>


[jira] [Updated] (SOLR-9909) Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory

2017-01-02 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9909:

Issue Type: Task  (was: Test)

> Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory
> 
>
> Key: SOLR-9909
> URL: https://issues.apache.org/jira/browse/SOLR-9909
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: master (7.0), 6.4
>
>
> DefaultSolrThreadFactory and SolrjNamedThreadFactory have exactly the same 
> code. Let's remove one of them.
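For reference, both duplicated classes follow the common named-ThreadFactory pattern, which looks roughly like this minimal sketch (illustrative; Solr's exact naming scheme may differ):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// A ThreadFactory that gives each thread a readable name such as
// "myPrefix-1-thread-2", so pools are identifiable in thread dumps.
public class NamedThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String prefix;

    public NamedThreadFactory(String namePrefix) {
        // each factory instance gets its own pool number
        this.prefix = namePrefix + "-" + poolNumber.getAndIncrement() + "-thread-";
    }

    @Override
    public Thread newThread(Runnable r) {
        return new Thread(r, prefix + threadNumber.getAndIncrement());
    }
}
```

Since the two Solr classes are byte-for-byte equivalents of this pattern, keeping one (presumably the SolrJ one, since solr-core depends on SolrJ) is enough.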






[jira] [Created] (SOLR-9909) Nuke one of DefaultSolrThreadFactory and SolrjNamedThreadFactory

2017-01-02 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-9909:
---

 Summary: Nuke one of DefaultSolrThreadFactory and 
SolrjNamedThreadFactory
 Key: SOLR-9909
 URL: https://issues.apache.org/jira/browse/SOLR-9909
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shalin Shekhar Mangar
Priority: Trivial
 Fix For: master (7.0), 6.4


DefaultSolrThreadFactory and SolrjNamedThreadFactory have exactly the same 
code. Let's remove one of them.






[jira] [Commented] (LUCENE-7615) SpanSynonymQuery

2017-01-02 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15792842#comment-15792842
 ] 

Paul Elschot commented on LUCENE-7615:
--

Some plans for using this:

In LUCENE-7580 to get real synonym scoring behaviour.

In Surround to score truncations.

> SpanSynonymQuery
> 
>
> Key: LUCENE-7615
> URL: https://issues.apache.org/jira/browse/LUCENE-7615
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7615.patch
>
>
> A SpanQuery that tries to score as SynonymQuery.






[jira] [Updated] (LUCENE-7615) SpanSynonymQuery

2017-01-02 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7615:
-
Attachment: LUCENE-7615.patch

Patch of 2 Jan 2017.

This can be used as a proximity subquery wherever SynonymQuery is used now, 
i.e. for synonym terms.

I think this improves span scoring somewhat; see the tests, and the test output 
when uncommenting showQueryResults for the test cases with two terms.

Implementation:
SynonymQuery exposes new methods getField() and SynonymWeight.getSimScorer() 
for use in SpanSynonymQuery.

Improved use of o.a.l.index.Terms and TermsEnum in SynonymQuery; at most a 
single TermsEnum will be used.
Aside: how about renaming Terms to FieldTerms?

This takes DisjunctionSpans out of SpanOrQuery.
This adds SynonymSpans as (an almost empty) subclass of DisjunctionSpans, for 
later further scoring improvement.

PHRASE_TO_SPAN_TERM_POSITIONS_COST is used from SpanTermQuery and made 
package-private there.


> SpanSynonymQuery
> 
>
> Key: LUCENE-7615
> URL: https://issues.apache.org/jira/browse/LUCENE-7615
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7615.patch
>
>
> A SpanQuery that tries to score as SynonymQuery.






[jira] [Created] (LUCENE-7615) SpanSynonymQuery

2017-01-02 Thread Paul Elschot (JIRA)
Paul Elschot created LUCENE-7615:


 Summary: SpanSynonymQuery
 Key: LUCENE-7615
 URL: https://issues.apache.org/jira/browse/LUCENE-7615
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: master (7.0)
Reporter: Paul Elschot
Priority: Minor


A SpanQuery that tries to score as SynonymQuery.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 642 - Still Unstable

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/642/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor149.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:907)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:799)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor149.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)
at org.apache.solr.core.SolrCore.(SolrCore.java:907)
at org.apache.solr.core.SolrCore.(SolrCore.java:799)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([B7CD5B1D2994AEF8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

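The ObjectTracker mechanism behind the HdfsRecoveryZkTest failure above can be modeled in a few lines of plain Java. This is a hypothetical sketch of the idea, not Solr's actual ObjectReleaseTracker: each tracked object is recorded with the stack trace of its creation, removed on release, and anything still tracked at test teardown produces exactly the kind of "object(s) that were not released" report shown in the log:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a release tracker for closeable resources such as
// transaction logs. Tracked objects must be released before teardown.
public class ReleaseTrackerSketch {
    private final Map<Object, Exception> tracked = new ConcurrentHashMap<>();

    void track(Object o) {
        // the stored exception captures the allocation site for reporting
        tracked.put(o, new Exception("allocation site"));
    }

    void release(Object o) {
        tracked.remove(o);
    }

    int unreleasedCount() {
        // > 0 at test teardown => an assertion failure like the one above
        return tracked.size();
    }
}
```

In the failure above, an HdfsTransactionLog was tracked at construction but never released before the test suite tore down, which is why the assertion in SolrTestCaseJ4.teardownTestCases fired.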
[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+147) - Build # 2570 - Unstable!

2017-01-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2570/
Java: 64bit/jdk-9-ea+147 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSync failed. Had to fail back to replication expected:<0> but was:<15>

Stack Trace:
java.lang.AssertionError: PeerSync failed. Had to fail back to replication 
expected:<0> but was:<15>
at 
__randomizedtesting.SeedInfo.seed([ECA815F681D6777D:64FC2A2C2F2A1A85]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:290)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:130)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: Welcome Christine Poerschke to the PMC

2017-01-02 Thread Tommaso Teofili
Welcome Christine!

Tommaso

On Sun, Jan 1, 2017 at 20:41, Stefan Matheis wrote:

> Congrats Christine!
>
> -Stefan
>
> On Dec 30, 2016 1:47 PM, "Adrien Grand"  wrote:
>
> I am pleased to announce that Christine Poerschke has accepted the PMC's
> invitation to join.
>
> Welcome Christine!
>
> Adrien
>
>
>


Re: Welcome Mikhail Khludnev to the PMC

2017-01-02 Thread Tommaso Teofili
Welcome Mikhail!

Tommaso

On Sun, Jan 1, 2017 at 20:42, Stefan Matheis wrote:

> Welcome Mikhail!
>
> -Stefan
>
> On Dec 30, 2016 4:16 PM, "Adrien Grand"  wrote:
>
> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
> invitation to join.
>
> Welcome Mikhail!
>
> Adrien
>
>
>


Re: Welcome Jim Ferenczi as a Lucene/Solr committer

2017-01-02 Thread Tommaso Teofili
Welcome Jim!

Tommaso

On Mon, Jan 2, 2017 at 02:53, Joel Bernstein wrote:

> Welcome Jim!
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Sun, Jan 1, 2017 at 2:43 PM, Uwe Schindler  wrote:
>
> Welcome Jim!
>
> Uwe
>
> On Jan 1, 2017 at 18:30:34 CET, Steve Rowe wrote:
>
> Welcome Jim!
>
> --
> Steve
> www.lucidworks.com
>
>  On Jan 1, 2017, at 5:04 AM, Michael McCandless  
> wrote:
>
>  I'm pleased to announce that Jim Ferenczi has accepted the Lucene
>  PMC's invitation to become a committer.
>
>  Jim, it's tradition that you introduce yourself with a brief bio.
>
>  Your handle "jimczi" has been added to the “lucene" LDAP group, so you
>  now have commit privileges. Please test this by adding yourself to the
>  committers section of the Who We Are page on the website:
>   (instructions here
>  ).
>
>  The ASF dev page also has lots of useful links: .
>
>  Congratulations and welcome and Happy New Year,
>
>  Mike McCandless
>
>  http://blog.mikemccandless.com
>
> --
>
>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>  For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>
> --
>
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> --
> Uwe Schindler
> Achterdiek 19, 28357 Bremen
> https://www.thetaphi.de
>
>
>


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1200 - Still Unstable

2017-01-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1200/

11 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2992BD4857635026]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2992BD4857635026]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([17F057081E27D73F:C5001BEB4088710D]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:306)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at