[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10-ea+43) - Build # 11 - Still Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/11/
Java: 64bit/jdk-10-ea+43 -XX:-UseCompressedOops -XX:+UseG1GC

11 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:36703/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:45443/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:36703/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:45443/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([178282F9A056ADDD:BD4F510B1785780D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:992)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:308)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7230 - Still Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7230/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=234951

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=234951
at 
__randomizedtesting.SeedInfo.seed([EE03A586948F82BF:D66FD6A3005F20F9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180320090644194, index.20180320090644844, index.properties, 
replication.properties, snapshot_met

[JENKINS] Lucene-Solr-Tests-master - Build # 2442 - Unstable

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2442/

4 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:
Error from server at https://127.0.0.1:36204/solr: KeeperErrorCode = Session 
expired for /overseer/collection-map-completed/mn-000

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:36204/solr: KeeperErrorCode = Session expired 
for /overseer/collection-map-completed/mn-000
at 
__randomizedtesting.SeedInfo.seed([816CB142F1B9373E:9388E985F455AC6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAda

[JENKINS-EA] Lucene-Solr-BadApples-master-Linux (64bit/jdk-10-ea+43) - Build # 11 - Still Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/11/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN

Error Message:
expected:<1.0> but was:<0.0>

Stack Trace:
java.lang.AssertionError: expected:<1.0> but was:<0.0>
at 
__randomizedtesting.SeedInfo.seed([AED44B9388BF0DCE:5F7539C3BD04C75C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN(TestLTRReRankingPipeline.java:256)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN

Error Message:
expected:<1.0> but was:<0.0>

Stack Trace:
java.lang.AssertionError: expected:<1.0> but was:<0.0>
at 
__randomizedtesting.SeedInfo.seed([AED44B9388BF0DCE:5F7539C3BD04C75C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.solr.ltr.TestLTRReRankingPipeline

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1507 - Still Unstable

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1507/

1 test failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1521516546070,generation=2,filelist=[_b5.cfe, _b5.cfs, 
_b5.si, _ba.fdt, _ba.fdx, _ba.fnm, _ba.nvd, _ba.nvm, _ba.si, _ba_FST50_0.doc, 
_ba_FST50_0.tfp, _bb.cfe, _bb.cfs, _bb.si, _bc.fdt, _bc.fdx, _bc.fnm, _bc.nvd, 
_bc.nvm, _bc.si, _bc_FST50_0.doc, _bc_FST50_0.tfp, _bd.fdt, _bd.fdx, _bd.fnm, 
_bd.nvd, _bd.nvm, _bd.si, _bd_FST50_0.doc, _bd_FST50_0.tfp, _bf.fdt, _bf.fdx, 
_bf.fnm, _bf.nvd, _bf.nvm, _bf.si, _bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, 
_bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, _bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, 
segments_2]}]> but 
was:<[{indexVersion=1521516546070,generation=2,filelist=[_b5.cfe, _b5.cfs, 
_b5.si, _ba.fdt, _ba.fdx, _ba.fnm, _ba.nvd, _ba.nvm, _ba.si, _ba_FST50_0.doc, 
_ba_FST50_0.tfp, _bb.cfe, _bb.cfs, _bb.si, _bc.fdt, _bc.fdx, _bc.fnm, _bc.nvd, 
_bc.nvm, _bc.si, _bc_FST50_0.doc, _bc_FST50_0.tfp, _bd.fdt, _bd.fdx, _bd.fnm, 
_bd.nvd, _bd.nvm, _bd.si, _bd_FST50_0.doc, _bd_FST50_0.tfp, _bf.fdt, _bf.fdx, 
_bf.fnm, _bf.nvd, _bf.nvm, _bf.si, _bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, 
_bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, _bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, 
segments_2]}, {indexVersion=1521516546070,generation=3,filelist=[_be.cfe, 
_be.cfs, _be.si, _bf.fdt, _bf.fdx, _bf.fnm, _bf.nvd, _bf.nvm, _bf.si, 
_bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, _bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, 
_bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1521516546070,generation=2,filelist=[_b5.cfe, _b5.cfs, 
_b5.si, _ba.fdt, _ba.fdx, _ba.fnm, _ba.nvd, _ba.nvm, _ba.si, _ba_FST50_0.doc, 
_ba_FST50_0.tfp, _bb.cfe, _bb.cfs, _bb.si, _bc.fdt, _bc.fdx, _bc.fnm, _bc.nvd, 
_bc.nvm, _bc.si, _bc_FST50_0.doc, _bc_FST50_0.tfp, _bd.fdt, _bd.fdx, _bd.fnm, 
_bd.nvd, _bd.nvm, _bd.si, _bd_FST50_0.doc, _bd_FST50_0.tfp, _bf.fdt, _bf.fdx, 
_bf.fnm, _bf.nvd, _bf.nvm, _bf.si, _bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, 
_bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, _bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, 
segments_2]}]> but 
was:<[{indexVersion=1521516546070,generation=2,filelist=[_b5.cfe, _b5.cfs, 
_b5.si, _ba.fdt, _ba.fdx, _ba.fnm, _ba.nvd, _ba.nvm, _ba.si, _ba_FST50_0.doc, 
_ba_FST50_0.tfp, _bb.cfe, _bb.cfs, _bb.si, _bc.fdt, _bc.fdx, _bc.fnm, _bc.nvd, 
_bc.nvm, _bc.si, _bc_FST50_0.doc, _bc_FST50_0.tfp, _bd.fdt, _bd.fdx, _bd.fnm, 
_bd.nvd, _bd.nvm, _bd.si, _bd_FST50_0.doc, _bd_FST50_0.tfp, _bf.fdt, _bf.fdx, 
_bf.fnm, _bf.nvd, _bf.nvm, _bf.si, _bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, 
_bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, _bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, 
segments_2]}, {indexVersion=1521516546070,generation=3,filelist=[_be.cfe, 
_be.cfs, _be.si, _bf.fdt, _bf.fdx, _bf.fnm, _bf.nvd, _bf.nvm, _bf.si, 
_bf_FST50_0.doc, _bf_FST50_0.tfp, _bg.fdt, _bg.fdx, _bg.fnm, _bg.nvd, _bg.nvm, 
_bg.si, _bg_FST50_0.doc, _bg_FST50_0.tfp, segments_3]}]>
at 
__randomizedtesting.SeedInfo.seed([994682F94E7E11:254E5DB289067012]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1284)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.c

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1562 - Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1562/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseSerialGC

1 test failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
This doc was supposed to have been deleted, but was: SolrDocument{id=1, 
inplace_updatable_float=1.0, _version_=1595427121858084866, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}

Stack Trace:
java.lang.AssertionError: This doc was supposed to have been deleted, but was: 
SolrDocument{id=1, inplace_updatable_float=1.0, _version_=1595427121858084866, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}
at 
__randomizedtesting.SeedInfo.seed([FC2C58F0E3039B66:7478672A4DFFF69E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.delayedReorderingFetchesMissingUpdateFromLeaderTest(TestInPlaceUpdatesDistrib.java:972)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.ap

[JENKINS] Lucene-Solr-Tests-7.3 - Build # 20 - Still Unstable

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/20/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestLeaderElectionZkExpiry

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:850)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2063)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:850)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2063)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([DE13BEE2F3FCCC8F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:301)
at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestLeaderElectionZkExpiry

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestLeaderElec

[jira] [Commented] (SOLR-11551) Standardize response codes and success/failure determination for core-admin api calls

2018-03-19 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405708#comment-16405708
 ] 

Jason Gerlowski commented on SOLR-11551:


Thanks for refreshing the patch Steve (and bringing it back to my attention as 
a result). I had let this sit initially because it was waiting on SOLR-11608, 
but that fix went in a while ago.

To circle back to the earlier discussion around 404s: long term I think that's 
the right "status" value for Solr to return. It's a common status code that 
most users understand intuitively, and it lets us distinguish between two 
similar but distinct types of failure. But I don't think trying to squeeze 
that in here is the right call, since (as Shawn pointed out) it doesn't seem 
to be returned by any APIs currently (and this JIRA has so far been scoped to 
the CoreAdmin APIs).

If no one objects, I'd like to take a second look at the test coverage for this 
and merge it in later this week.

> Standardize response codes and success/failure determination for core-admin 
> api calls
> -
>
> Key: SOLR-11551
> URL: https://issues.apache.org/jira/browse/SOLR-11551
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-11551.patch, SOLR-11551.patch
>
>
> If we were to tackle SOLR-11526 I think we need to start fixing the core 
> admin api's first.
> If we are relying on response codes I think we should make the following 
> change and fix all the APIs 
> {code}
>   interface CoreAdminOp {
> void execute(CallInfo it) throws Exception;
>   }
> {code}
> To
> {code}
>   interface CoreAdminOp {
> /**
>  *
>  * @param it request/response object
>  *
>  * If the request is invalid throw a SolrException with 
> SolrException.ErrorCode.BAD_REQUEST ( 400 )
>  * If the execution of the command fails throw a SolrException with 
> SolrException.ErrorCode.SERVER_ERROR ( 500 )
>  * Add a "error-message" key to the response object with the exception ( 
> this part should be done at the caller of this method so that each operation 
> doesn't need to do the same thing )
>  */
> void execute(CallInfo it);
>   }
> {code}
> cc [~gerlowskija]
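
For illustration, here is a minimal sketch of how an individual op could follow 
the convention quoted above. The {{it.req.getParams()}} accessor and the 
{{ExampleOps}} wrapper are assumptions made for the example, not necessarily 
the real {{CallInfo}} shape or the committed patch:

{code}
import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.CoreAdminParams;

class ExampleOps {
  // Sketch only: throw 400 for an invalid request and 500 for an execution
  // failure; the caller is assumed to add the "error-message" key to the response.
  static final CoreAdminOp EXAMPLE = it -> {
    String core = it.req.getParams().get(CoreAdminParams.CORE);  // assumed accessor
    if (core == null) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "Missing required parameter: " + CoreAdminParams.CORE);
    }
    try {
      // ... perform the actual core-admin work here ...
    } catch (Exception e) {
      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
          "Operation failed for core " + core, e);
    }
  };
}
{code}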



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11551) Standardize response codes and success/failure determination for core-admin api calls

2018-03-19 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski reassigned SOLR-11551:
--

Assignee: Jason Gerlowski

> Standardize response codes and success/failure determination for core-admin 
> api calls
> -
>
> Key: SOLR-11551
> URL: https://issues.apache.org/jira/browse/SOLR-11551
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-11551.patch, SOLR-11551.patch
>
>
> If we were to tackle SOLR-11526 I think we need to start fixing the core 
> admin api's first.
> If we are relying on response codes I think we should make the following 
> change and fix all the APIs 
> {code}
>   interface CoreAdminOp {
> void execute(CallInfo it) throws Exception;
>   }
> {code}
> To
> {code}
>   interface CoreAdminOp {
> /**
>  *
>  * @param it request/response object
>  *
>  * If the request is invalid throw a SolrException with 
> SolrException.ErrorCode.BAD_REQUEST ( 400 )
>  * If the execution of the command fails throw a SolrException with 
> SolrException.ErrorCode.SERVER_ERROR ( 500 )
>  * Add a "error-message" key to the response object with the exception ( 
> this part should be done at the caller of this method so that each operation 
> doesn't need to do the same thing )
>  */
> void execute(CallInfo it);
>   }
> {code}
> cc [~gerlowskija]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405627#comment-16405627
 ] 

Joel Bernstein edited comment on SOLR-12122 at 3/20/18 12:32 AM:
-

Point taken on the comments. No more new expressions are going in until a 
refactoring takes place. I will add javadoc as part of this refactoring. The 
point of the refactoring is to make it easier for other people to contribute. 
There is also a very large documentation effort underway to provide a user 
guide for the Math Expressions. You can see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is a vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 


was (Author: joel.bernstein):
Point taken on the comments. No more new expressions are going in until a 
refactoring takes place. I will add javadoc as part of this refactoring. The 
point of the refactoring is to make it easier for other people to contribute. 
There is also a very large documentation effort underway to provide a user 
guide for the Math Expressions. You can the see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}. It is assumed 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name which is still supported. 
> It's more commonly known as {{nodes}}.  graph-traversal.adoc documents it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405627#comment-16405627
 ] 

Joel Bernstein edited comment on SOLR-12122 at 3/20/18 12:32 AM:
-

Point taken on the comments. No more new expressions are going in until a 
refactoring takes place. I will add javadoc as part of this refactoring. The 
point of the refactoring is to make it easier for other people to contribute. 
There is also a very large documentation effort underway to provide a user 
guide for the Math Expressions. You can see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is a vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 


was (Author: joel.bernstein):
Point taken on the comments. No more new expressions are going in until a 
refactoring takes. I will add javadoc as part of this refactoring. The point of 
the refactoring is to make it easier for other people to contribute. There is 
also a very large documentation effort underway to provide a user guide for the 
Math Expressions. You can the see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}. It is assumed 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name which is still supported. 
> It's more commonly known as {{nodes}}.  graph-traversal.adoc documents it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405645#comment-16405645
 ] 

Joel Bernstein commented on SOLR-12122:
---

There are two issues to consider with adding multi-value hops.

1) Currently nodes are uniqued before any logic is applied. The unique 
operation relies on the sort coming from the /export handler. Another technique 
for uniquing nodes will need to be applied in the multi-valued scenario.

2) We may not want to do an exhaustive pull of all the values from multi-valued 
fields. We could consider using the significantTerms stream which works on 
multi-valued fields already and limits the results based on a statistical 
analysis. 

The original code does a traditional breadth first traversal retrieving all 
nodes in its path. Using significant terms would limit that walk to only 
significant nodes.
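
For concreteness, the single-valued form today looks roughly like the sketch 
below (the collection and field names are made up for the example); the 
multiValued case discussed above would keep the same {{walk}} syntax while 
allowing the "to" side of the hop to be a multiValued field:

{code}
nodes(emails,
      walk="johndoe@apache.org->from",
      gather="to")
{code}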

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}. It is assumed 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name which is still supported. 
> It's more commonly known as {{nodes}}.  graph-traversal.adoc documents it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405627#comment-16405627
 ] 

Joel Bernstein edited comment on SOLR-12122 at 3/20/18 12:16 AM:
-

Point taken on the comments. No more new expressions are going in until a 
refactoring takes place. I will add javadoc as part of this refactoring. The point of 
the refactoring is to make it easier for other people to contribute. There is 
also a very large documentation effort underway to provide a user guide for the 
Math Expressions. You can see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is a vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 


was (Author: joel.bernstein):
Point taken on the comments. No more new expressions are going in until a 
refactoring takes. I will add javadoc as part of this refactoring. The point of 
the refactoring to make it easier for other people to contribute. There is also 
very large documentation effort underway provide user guide for the Math 
Expressions. You can the see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}. It is assumed 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name which is still supported. 
> It's more commonly known as {{nodes}}.  graph-traversal.adoc documents it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405627#comment-16405627
 ] 

Joel Bernstein commented on SOLR-12122:
---

Point taken on the comments. No more new expressions are going in until a 
refactoring takes place. I will add javadoc as part of this refactoring. The 
point of the refactoring is to make it easier for other people to contribute. 
There is also a very large documentation effort underway to provide a user 
guide for the Math Expressions. You can see the work in progress here:

[https://github.com/joel-bernstein/lucene-solr/blob/math_expressions_documentation/solr/solr-ref-guide/src/math-expressions.adoc]

There is a vast amount of functionality now in Streaming Expressions. My main 
focus for the 7.4 release is refactoring and documenting.

 

 

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}. It is assumed 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name which is still supported. 
> It's more commonly known as {{nodes}}.  graph-traversal.adoc documents it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11551) Standardize response codes and success/failure determination for core-admin api calls

2018-03-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405569#comment-16405569
 ] 

Steve Rowe commented on SOLR-11551:
---

I attached a modernized version of Jason's patch, no other changes.

> Standardize response codes and success/failure determination for core-admin 
> api calls
> -
>
> Key: SOLR-11551
> URL: https://issues.apache.org/jira/browse/SOLR-11551
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-11551.patch, SOLR-11551.patch
>
>
> If we were to tackle SOLR-11526 I think we need to start fixing the core 
> admin api's first.
> If we are relying on response codes I think we should make the following 
> change and fix all the APIs 
> {code}
>   interface CoreAdminOp {
> void execute(CallInfo it) throws Exception;
>   }
> {code}
> To
> {code}
>   interface CoreAdminOp {
> /**
>  *
>  * @param it request/response object
>  *
>  * If the request is invalid throw a SolrException with 
> SolrException.ErrorCode.BAD_REQUEST ( 400 )
>  * If the execution of the command fails throw a SolrException with 
> SolrException.ErrorCode.SERVER_ERROR ( 500 )
>  * Add a "error-message" key to the response object with the exception ( 
> this part should be done at the caller of this method so that each operation 
> doesn't need to do the same thing )
>  */
> void execute(CallInfo it);
>   }
> {code}
> cc [~gerlowskija]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11551) Standardize response codes and success/failure determination for core-admin api calls

2018-03-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-11551:
--
Attachment: SOLR-11551.patch

> Standardize response codes and success/failure determination for core-admin 
> api calls
> -
>
> Key: SOLR-11551
> URL: https://issues.apache.org/jira/browse/SOLR-11551
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-11551.patch, SOLR-11551.patch
>
>
> If we were to tackle SOLR-11526 I think we need to start fixing the core 
> admin api's first.
> If we are relying on response codes I think we should make the following 
> change and fix all the APIs 
> {code}
>   interface CoreAdminOp {
> void execute(CallInfo it) throws Exception;
>   }
> {code}
> To
> {code}
>   interface CoreAdminOp {
> /**
>  *
>  * @param it request/response object
>  *
>  * If the request is invalid throw a SolrException with 
> SolrException.ErrorCode.BAD_REQUEST ( 400 )
>  * If the execution of the command fails throw a SolrException with 
> SolrException.ErrorCode.SERVER_ERROR ( 500 )
>  * Add a "error-message" key to the response object with the exception ( 
> this part should be done at the caller of this method so that each operation 
> doesn't need to do the same thing )
>  */
> void execute(CallInfo it);
>   }
> {code}
> cc [~gerlowskija]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11891) DocsStreamer populates SolrDocument w/unnecessary fields

2018-03-19 Thread wei wang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405497#comment-16405497
 ] 

wei wang commented on SOLR-11891:
-

Thanks Hoss!  We will run a test with your patch to see the results.  

> DocsStreamer populates SolrDocument w/unnecessary fields
> 
>
> Key: SOLR-11891
> URL: https://issues.apache.org/jira/browse/SOLR-11891
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 5.4, 6.4.2, 6.6.2
>Reporter: wei wang
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: DocsStreamer.java.diff, SOLR-11891.patch, 
> SOLR-11891.patch.BAD
>
>
> We observe that solr query time increases significantly with the number of 
> rows requested, even when all we retrieve for each document is just fl=id,score. 
> Debugged a bit and saw that most of the increased time was spent in 
> BinaryResponseWriter,  converting lucene document into SolrDocument.  Inside 
> convertLuceneDocToSolrDoc():   
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182]
>  
> I am a bit puzzled why we need to iterate through all the fields in the 
> document. Why can’t we just iterate through the requested field list?    
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156]
>  
> e.g. when we pass in the field list as 
> sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(), fnames)
> and just iterate through fnames,  there is a significant performance boost in 
> our case.  
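
As a rough sketch of the idea being suggested (not the committed patch; the 
helper class name and the exact handling of stored values are assumptions for 
illustration), the copy loop would be restricted to the requested field names 
rather than every stored field on the Lucene document:

{code}
import java.util.Set;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.schema.SchemaField;

class RequestedFieldsOnly {
  // Copy only the requested stored fields into the SolrDocument,
  // instead of iterating every stored field on the Lucene Document.
  static SolrDocument convert(Document doc, IndexSchema schema, Set<String> fnames) {
    SolrDocument out = new SolrDocument();
    for (String fname : fnames) {
      for (IndexableField f : doc.getFields(fname)) {       // only requested fields
        SchemaField sf = schema.getFieldOrNull(f.name());
        Object val = (sf != null) ? sf.getType().toObject(f) : f.stringValue();
        out.addField(f.name(), val);
      }
    }
    return out;
  }
}
{code}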



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11891) DocsStreamer populates SolrDocument w/unnecessary fields

2018-03-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11891.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> DocsStreamer populates SolrDocument w/unnecessary fields
> 
>
> Key: SOLR-11891
> URL: https://issues.apache.org/jira/browse/SOLR-11891
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 5.4, 6.4.2, 6.6.2
>Reporter: wei wang
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: DocsStreamer.java.diff, SOLR-11891.patch, 
> SOLR-11891.patch.BAD
>
>
> We observe that solr query time increases significantly with the number of 
> rows requested,  even all we retrieve for each document is just fl=id,score.  
> Debugged a bit and see that most of the increased time was spent in 
> BinaryResponseWriter,  converting lucene document into SolrDocument.  Inside 
> convertLuceneDocToSolrDoc():   
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182]
>  
> I am a bit puzzled why we need to iterate through all the fields in the 
> document. Why can’t we just iterate through the requested field list?    
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156]
>  
> e.g. when pass in the field list as 
> sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(), fnames)
> and just iterate through fnames,  there is a significant performance boost in 
> our case.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12108) raw transformers ([json] and [xml]) drop the field value if wt is not a match and documentCache is not enabled

2018-03-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-12108.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> raw transformers ([json] and [xml]) drop the field value if wt is not a match 
> and documentCache is not enabled
> --
>
> Key: SOLR-12108
> URL: https://issues.apache.org/jira/browse/SOLR-12108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> discovered this while working on SOLR-11891...
> The {{RawValueTransformerFactory}} class is supposed to treat the field value 
> as a normal string in situations where an instance is limited by the {{wt}} 
> param (which it is automatically for the default {{[json]}} and {{[xml]}} 
> transformers).
> This is currently implemented by {{RawValueTransformerFactory.create()}} 
> assuming it can just return "null" if the ResponseWriter in use doesn't match 
> - but because of how this transformer abuses the "key" to implicitly indicate 
> the field to be returned (ie: {{my_json_fieldName:[json]}}), it means that 
> nothing about the resulting {{ReturnFields}} datastructure indicates that the 
> field ({{my_json_fieldName}}) should be returned at all.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> RawValueTransformerFactory only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
>   {
> "id": "1",
> "raw_s":"{\"raw\":\"json\"}" } ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":39}}
> $ curl 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":"{\"raw\":\"json\"}"}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s:%5Bjson%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s:[json]",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":{"raw":"json"}}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=xml&q=id:1&fl=raw_s:%5Bjson%5D'
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
> <lst name="responseHeader">
>   <int name="status">0</int>
>   <int name="QTime">0</int>
>   <lst name="params">
>     <str name="q">id:1</str>
>     <str name="fl">raw_s:[json]</str>
>     <str name="wt">xml</str>
>   </lst>
> </lst>
> <result name="response" numFound="1" start="0">
>   <doc/>
> </result>
> </response>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12107) [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless documentCache is enabled

2018-03-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-12107.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless 
> documentCache is enabled
> -
>
> Key: SOLR-12107
> URL: https://issues.apache.org/jira/browse/SOLR-12107
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> discovered this while working on SOLR-11891...
> The ChildDocumentTransformer implicitly assumes the uniqueKey field will 
> always be available when transforming the doc, w/o explicitly requesting it 
> via {{getExtraRequestFields()}}.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
> >   {
> > "id": "1",
> > "title": "Solr adds block join support",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "2",
> > "comments": "SolrCloud supports it too!"
> >   }
> > ]
> >   },
> >   {
> > "id": "3",
> > "title": "New Lucene and Solr release is out",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "4",
> > "comments": "Lots of new features"
> >   }
> > ]
> >   }
> > ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":69}}
> $ curl 'http://localhost:8983/solr/techproducts/query?q=id:1'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "title":["Solr adds block join support"],
> "content_type":["parentDocument"],
> "_version_":1595047178033692672}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=id,%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"id,[child parentFilter=\"content_type:parentDocument\"]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "_childDocuments_":[
> {
>   "id":"2",
>   "comments":"SolrCloud supports it too!",
>   "_version_":1595047178033692672}]}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "error":{
> "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.ChildDocTransformer.transform(ChildDocTransformerFactory.java:133)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:789)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\

[jira] [Commented] (SOLR-12107) [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless documentCache is enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405487#comment-16405487
 ] 

ASF subversion and git services commented on SOLR-12107:


Commit 11af2144b66717f41e2fcb5c73c7059cf009a00a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11af214 ]

SOLR-11891: DocsStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocsStreamer in 
order to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.

(cherry picked from commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b)
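
The shape of the [child] fix described above is roughly the following (a sketch only, not the committed patch; {{idField}} stands in for whatever the schema's uniqueKey field is): a transformer that relies on a field can declare it via {{getExtraRequestFields()}} so it gets fetched even when it is absent from {{fl}}.
{code}
  @Override
  public String[] getExtraRequestFields() {
    // ensure the uniqueKey field is retrieved even if it is not in 'fl'
    return new String[] { idField };
  }
{code}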


> [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless 
> documentCache is enabled
> -
>
> Key: SOLR-12107
> URL: https://issues.apache.org/jira/browse/SOLR-12107
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The ChildDocumentTransformer implicitly assumes the uniqueKey field will 
> always be available when transforming the doc, w/o explicitly requesting it 
> via {{getExtraRequestFields()}}.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
> >   {
> > "id": "1",
> > "title": "Solr adds block join support",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "2",
> > "comments": "SolrCloud supports it too!"
> >   }
> > ]
> >   },
> >   {
> > "id": "3",
> > "title": "New Lucene and Solr release is out",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "4",
> > "comments": "Lots of new features"
> >   }
> > ]
> >   }
> > ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":69}}
> $ curl 'http://localhost:8983/solr/techproducts/query?q=id:1'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "title":["Solr adds block join support"],
> "content_type":["parentDocument"],
> "_version_":1595047178033692672}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=id,%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"id,[child parentFilter=\"content_type:parentDocument\"]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "_childDocuments_":[
> {
>   "id":"2",
>   "comments":"SolrCloud supports it too!",
>   "_version_":1595047178033692672}]}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "error":{
> "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.ChildDocTransformer.transform(ChildDocTransformerFactory.java:133)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.

[jira] [Commented] (SOLR-12107) [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless documentCache is enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405485#comment-16405485
 ] 

ASF subversion and git services commented on SOLR-12107:


Commit 11af2144b66717f41e2fcb5c73c7059cf009a00a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11af214 ]

SOLR-11891: DocsStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocsStreamer in 
order to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.

(cherry picked from commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b)


> [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless 
> documentCache is enabled
> -
>
> Key: SOLR-12107
> URL: https://issues.apache.org/jira/browse/SOLR-12107
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The ChildDocumentTransformer implicitly assumes the uniqueKey field will 
> always be available when transforming the doc, w/o explicitly requesting it 
> via {{getExtraRequestFields()}}.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
> >   {
> > "id": "1",
> > "title": "Solr adds block join support",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "2",
> > "comments": "SolrCloud supports it too!"
> >   }
> > ]
> >   },
> >   {
> > "id": "3",
> > "title": "New Lucene and Solr release is out",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "4",
> > "comments": "Lots of new features"
> >   }
> > ]
> >   }
> > ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":69}}
> $ curl 'http://localhost:8983/solr/techproducts/query?q=id:1'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "title":["Solr adds block join support"],
> "content_type":["parentDocument"],
> "_version_":1595047178033692672}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=id,%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"id,[child parentFilter=\"content_type:parentDocument\"]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "_childDocuments_":[
> {
>   "id":"2",
>   "comments":"SolrCloud supports it too!",
>   "_version_":1595047178033692672}]}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "error":{
> "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.ChildDocTransformer.transform(ChildDocTransformerFactory.java:133)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.

[jira] [Commented] (SOLR-12108) raw transformers ([json] and [xml]) drop the field value if wt is not a match and documentCache is not enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405488#comment-16405488
 ] 

ASF subversion and git services commented on SOLR-12108:


Commit 11af2144b66717f41e2fcb5c73c7059cf009a00a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11af214 ]

SOLR-11891: DocsStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocsStreamer in 
order to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.

(cherry picked from commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b)


> raw transformers ([json] and [xml]) drop the field value if wt is not a match 
> and documentCache is not enabled
> --
>
> Key: SOLR-12108
> URL: https://issues.apache.org/jira/browse/SOLR-12108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The {{RawValueTransformerFactory}} class is supposed to treat the field value 
> as a normal string in situations where an instance is limited by the {{wt}} 
> param (which it is automatically for the default {{[json]}} and {{[xml]}} 
> transformers).
> This is currently implemented by {{RawValueTransformerFactory.create()}} 
> assuming it can just return "null" if the ResponseWriter in use doesn't match 
> - but because of how this transformer abuses the "key" to implicitly indicate 
> the field to be returned (ie: {{my_json_fieldName:[json]}}), it means that 
> nothing about the resulting {{ReturnFields}} datastructure indicates that the 
> field ({{my_json_fieldName}}) should be returned at all.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> RawValueTransformerFactory only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
>   {
> "id": "1",
> "raw_s":"{\"raw\":\"json\"}" } ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":39}}
> $ curl 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":"{\"raw\":\"json\"}"}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s:%5Bjson%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s:[json]",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":{"raw":"json"}}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=xml&q=id:1&fl=raw_s:%5Bjson%5D'
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
> <lst name="responseHeader">
>   <int name="status">0</int>
>   <int name="QTime">0</int>
>   <lst name="params">
>     <str name="q">id:1</str>
>     <str name="fl">raw_s:[json]</str>
>     <str name="wt">xml</str>
>   </lst>
> </lst>
> <result name="response" numFound="1" start="0">
>   <doc/>
> </result>
> </response>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12108) raw transformers ([json] and [xml]) drop the field value if wt is not a match and documentCache is not enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405486#comment-16405486
 ] 

ASF subversion and git services commented on SOLR-12108:


Commit 11af2144b66717f41e2fcb5c73c7059cf009a00a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11af214 ]

SOLR-11891: DocsStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocsStreamer in 
order to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.

(cherry picked from commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b)


> raw transformers ([json] and [xml]) drop the field value if wt is not a match 
> and documentCache is not enabled
> --
>
> Key: SOLR-12108
> URL: https://issues.apache.org/jira/browse/SOLR-12108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The {{RawValueTransformerFactory}} class is supposed to treat the field value 
> as a normal string in situations where an instance is limited by the {{wt}} 
> param (which it is automatically for the default {{[json]}} and {{[xml]}} 
> transformers).
> This is currently implemented by {{RawValueTransformerFactory.create()}} 
> assuming it can just return "null" if the ResponseWriter in use doesn't match 
> - but because of how this transformer abuses the "key" to implicitly indicate 
> the field to be returned (ie: {{my_json_fieldName:[json]}}), it means that 
> nothing about the resulting {{ReturnFields}} datastructure indicates that the 
> field ({{my_json_fieldName}}) should be returned at all.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> RawValueTransformerFactory only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
>   {
> "id": "1",
> "raw_s":"{\"raw\":\"json\"}" } ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":39}}
> $ curl 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":"{\"raw\":\"json\"}"}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s:%5Bjson%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s:[json]",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":{"raw":"json"}}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=xml&q=id:1&fl=raw_s:%5Bjson%5D'
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
> <lst name="responseHeader">
>   <int name="status">0</int>
>   <int name="QTime">0</int>
>   <lst name="params">
>     <str name="q">id:1</str>
>     <str name="fl">raw_s:[json]</str>
>     <str name="wt">xml</str>
>   </lst>
> </lst>
> <result name="response" numFound="1" start="0">
>   <doc/>
> </result>
> </response>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11891) DocsStreamer populates SolrDocument w/unnecessary fields

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405484#comment-16405484
 ] 

ASF subversion and git services commented on SOLR-11891:


Commit 11af2144b66717f41e2fcb5c73c7059cf009a00a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11af214 ]

SOLR-11891: DocsStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocsStreamer in 
order to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.

(cherry picked from commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b)


> DocsStreamer populates SolrDocument w/unnecessary fields
> 
>
> Key: SOLR-11891
> URL: https://issues.apache.org/jira/browse/SOLR-11891
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 5.4, 6.4.2, 6.6.2
>Reporter: wei wang
>Assignee: Hoss Man
>Priority: Major
> Attachments: DocsStreamer.java.diff, SOLR-11891.patch, 
> SOLR-11891.patch.BAD
>
>
> We observe that Solr query time increases significantly with the number of 
> rows requested, even when all we retrieve for each document is just 
> fl=id,score.  We debugged a bit and saw that most of the increased time was 
> spent in BinaryResponseWriter, converting the Lucene document into a 
> SolrDocument, inside convertLuceneDocToSolrDoc():
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182]
>  
> I am a bit puzzled why we need to iterate through all the fields in the 
> document. Why can't we just iterate through the requested field list?
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156]
>  
> e.g. when we pass in the field list as 
> sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(), fnames)
> and just iterate through fnames, there is a significant performance boost in 
> our case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405481#comment-16405481
 ] 

David Smiley commented on SOLR-11913:
-

It's a start.
The key part as referenced in the description -- having SolrParams implement 
Iterable -- wasn't done.
Why did you create SolrParams.getMapEntry?  You could inline it as an 
anonymous inner class.
Please override this for ModifiableSolrParams to return a more optimal 
implementation.

After I code-review those changes, we can consider callers of 
getParameterNamesIterator to see which of those would be good candidates to 
renovate to use the Java 5 for-each style.

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch
>
>
> SolrParams ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using the Java 5 for-each style or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.  
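
A minimal sketch of what that default implementation could look like, built only from existing {{SolrParams}} methods ({{getParameterNamesIterator()}} and {{getParams(name)}}); this is illustrative, not the attached patch:
{code}
  public Iterator<Map.Entry<String, String[]>> iterator() {
    Iterator<String> names = getParameterNamesIterator();
    return new Iterator<Map.Entry<String, String[]>>() {
      @Override public boolean hasNext() { return names.hasNext(); }
      @Override public Map.Entry<String, String[]> next() {
        String name = names.next();
        // the value comes from getParams(name), as the description suggests
        return new AbstractMap.SimpleImmutableEntry<>(name, getParams(name));
      }
    };
  }
{code}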



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Antelmo Aguilar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405477#comment-16405477
 ] 

Antelmo Aguilar edited comment on SOLR-12020 at 3/19/18 9:26 PM:
-

Hi, I am the person who reported this issue. I tried the fix 
[~ysee...@gmail.com] pushed and I still get the same error.  I spoke with him 
through e-mail and he recommended that I provide more details in this ticket. 

So I am attaching the configuration files for the core that we use where the 
query is failing, a file of the data we index, and a screenshot of the output I 
get when running a query.

Once you have the core and the data indexed, running the following query should 
hopefully give the same error I attached:
{noformat}
https://localhost:/solr/vb_popbio/abndGraphdata?q=*:*&term=species_category&fq=geo_coords:[35.1021,-115.998%20TO%2036.9982,-114.046]
{noformat}
Hopefully this helps with the debugging. I also wanted to say that this query 
works on Solr 6.1 (not Solr 6.2 like I had initially reported).  Thanks for 
looking into this.


was (Author: aguilara):
Hi, I am the person that reported this issue and I tried the fix 
[~ysee...@gmail.com] pushed and I still get the same error.  I spoke with him 
through e-mail and he recommended me providing more details in this ticket.  

So I am attaching the configuration files for the core that we use where the 
query is failing, a file of the data we index, and a screenshot of the output I 
get when running a query.

Once you get the core and the data indexed, using the following error should 
hopefully give the same error I attached:

https://localhost:/solr/vb_popbio/abndGraphdata?q=*:*&term=species_category&fq=geo_coords:[35.1021,-115.998%20TO%2036.9982,-114.046]

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch, Solr Error.png, 
> popbio-solr-VB-2018-02-main-05.json.gz, vb_popbio_conf.tar.gz
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}
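
For reference, the failing value in that exception is a {{java.util.Date.toString()}} rendering rather than the ISO-8601 form Solr's date parsing expects. A tiny illustration (the epoch value is an assumed example; the first line prints in the JVM's default time zone):
{code}
Date bucket = new Date(981162123000L);     // 2001-02-03T01:02:03Z
System.out.println(bucket);                // "Sat Feb 03 01:02:03 ... 2001" style -- not parseable here
System.out.println(bucket.toInstant());    // "2001-02-03T01:02:03Z" -- the expected form
{code}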



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Antelmo Aguilar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405477#comment-16405477
 ] 

Antelmo Aguilar commented on SOLR-12020:


Hi, I am the person who reported this issue. I tried the fix 
[~ysee...@gmail.com] pushed and I still get the same error.  I spoke with him 
through e-mail and he recommended that I provide more details in this ticket.  

So I am attaching the configuration files for the core that we use where the 
query is failing, a file of the data we index, and a screenshot of the output I 
get when running a query.

Once you have the core and the data indexed, running the following query should 
hopefully give the same error I attached:

https://localhost:/solr/vb_popbio/abndGraphdata?q=*:*&term=species_category&fq=geo_coords:[35.1021,-115.998%20TO%2036.9982,-114.046]

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch, Solr Error.png, 
> popbio-solr-VB-2018-02-main-05.json.gz, vb_popbio_conf.tar.gz
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Antelmo Aguilar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antelmo Aguilar updated SOLR-12020:
---
Attachment: Solr Error.png

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch, Solr Error.png, 
> popbio-solr-VB-2018-02-main-05.json.gz, vb_popbio_conf.tar.gz
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Antelmo Aguilar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antelmo Aguilar updated SOLR-12020:
---
Attachment: popbio-solr-VB-2018-02-main-05.json.gz

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch, Solr Error.png, 
> popbio-solr-VB-2018-02-main-05.json.gz, vb_popbio_conf.tar.gz
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Antelmo Aguilar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antelmo Aguilar updated SOLR-12020:
---
Attachment: vb_popbio_conf.tar.gz

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch, vb_popbio_conf.tar.gz
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405471#comment-16405471
 ] 

David Smiley edited comment on SOLR-11779 at 3/19/18 9:19 PM:
--

[~otis] Is there some standard/common API for Solr to query such an external 
system without depending on a particular implementation?  If not I suppose a 
few could be coded to some internal plugin API -- just one at first.  

To Otis's point, I think it's pretty reasonable to say that the "sophisticated" 
autoscaling strategies require the user to do more -- like install some 
thingamajig.  In doing so, we keep our code base simpler?


was (Author: dsmiley):
[~otis] Is there some standard/common API for Solr to query such an external 
system without depending on a particular implementation?  If not I suppose a 
few could be coded to some internal plugin API -- just one at first.  

To Oti's point, I think it's pretty reasonable to say that the "sophisticated" 
autoscaling strategies require the user to do more -- like install some 
thingamajig.  In doing so, we keep our code base simpler?

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), while at the same time providing out of 
> the box useful insights into the basic system behavior over time. This data 
> could be persisted to the {{.system}} collection as blobs, and it could also 
> be presented in the Admin UI as graphs.
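
As a rough illustration of the fixed-size storage idea (not Solr code; the RRD4j calls are from memory, so treat the exact API as an assumption):
{code}
RrdDef def = new RrdDef("solr-core-metrics.rrd", 60);           // one step per minute
def.addDatasource("queryRate", DsType.GAUGE, 120, 0, Double.NaN);
def.addArchive(ConsolFun.AVERAGE, 0.5, 1, 60 * 24);             // keep a day of 1-min averages
RrdDb db = new RrdDb(def);                                      // file size stays constant
Sample s = db.createSample();
s.setTime(Util.getTime());
s.setValue("queryRate", 42.0);
s.update();
db.close();
{code}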



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405471#comment-16405471
 ] 

David Smiley commented on SOLR-11779:
-

[~otis] Is there some standard/common API for Solr to query such an external 
system without depending on a particular implementation?  If not I suppose a 
few could be coded to some internal plugin API -- just one at first.  

To Otis's point, I think it's pretty reasonable to say that the "sophisticated" 
autoscaling strategies require the user to do more -- like install some 
thingamajig.  In doing so, we keep our code base simpler?

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), while at the same time providing out of 
> the box useful insights into the basic system behavior over time. This data 
> could be persisted to the {{.system}} collection as blobs, and it could also 
> be presented in the Admin UI as graphs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8212) Never swallow Exceptions in IndexWriter and DocumentsWriter

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405456#comment-16405456
 ] 

ASF subversion and git services commented on LUCENE-8212:
-

Commit 65559cb94d2cbbc9081f6f5d6d8f6bac055b11e6 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=65559cb ]

LUCENE-8212: Make sure terms hash is always closed

If the stored fields writer barfs, we still need to close the terms hash to
close pending files. This is crucial for some tests like 
TestIndexWriterOnVMError, which randomly failed due to this.


>  Never swallow Exceptions in IndexWriter and DocumentsWriter
> 
>
> Key: LUCENE-8212
> URL: https://issues.apache.org/jira/browse/LUCENE-8212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8212.patch, LUCENE-8212.patch
>
>
>  IndexWriter as well as DocumentsWriter caught Throwable and ignored it. This 
> is mainly a relic from pre-Java 7, when exceptions didn't have the needed API 
> to suppress other exceptions. This change handles exceptions correctly: the 
> original exception is rethrown and all other exceptions are added as 
> suppressed.
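
The pattern being described is essentially the standard Java 7+ idiom below (a generic sketch, not the actual IndexWriter/DocumentsWriter code; the method names are placeholders):
{code}
try {
  doIndexingWork();                           // may throw the "real" exception
} catch (Throwable original) {
  try {
    abortAndCleanUp();                        // cleanup may fail too
  } catch (Throwable cleanupFailure) {
    original.addSuppressed(cleanupFailure);   // attach instead of swallowing
  }
  throw original;                             // rethrow the original exception
}
{code}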



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8212) Never swallow Exceptions in IndexWriter and DocumentsWriter

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405455#comment-16405455
 ] 

ASF subversion and git services commented on LUCENE-8212:
-

Commit a00f5416afeb742213484400a4bf35f23ec47ce6 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a00f541 ]

LUCENE-8212: Make sure terms hash is always closed

If the stored fields writer barfs, we still need to close the terms hash to
close pending files. This is crucial for some tests like 
TestIndexWriterOnVMError, which randomly failed due to this.


>  Never swallow Exceptions in IndexWriter and DocumentsWriter
> 
>
> Key: LUCENE-8212
> URL: https://issues.apache.org/jira/browse/LUCENE-8212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8212.patch, LUCENE-8212.patch
>
>
>  IndexWriter as well as DocumentsWriter caught Throwable and ignored it. This 
> is mainly a relic from pre-Java 7, when exceptions didn't have the needed API 
> to suppress other exceptions. This change handles exceptions correctly: the 
> original exception is rethrown and all other exceptions are added as 
> suppressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8214) Improve selection of testPoint for GeoComplexPolygon

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405423#comment-16405423
 ] 

ASF subversion and git services commented on LUCENE-8214:
-

Commit 1f3a8bc17559e4edcdbce479fef032d643cab0c5 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1f3a8bc ]

LUCENE-8214: Update CHANGES.txt


> Improve selection of testPoint for GeoComplexPolygon
> 
>
> Key: LUCENE-8214
> URL: https://issues.apache.org/jira/browse/LUCENE-8214
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8214.patch
>
>
> I have been checking the effect of the testPoint on GeoComplexPolygon and it 
> seems performance can change quite a bit depending on the choice. 
> The results with random polygons with 20k points show that a good choice is 
> to use the center of mass of the shape. In the worst case the performance is 
> similar to what we have now, and in the best case it is twice as fast for the 
> {{within()}} and {{getRelationship()}} methods.
> Therefore I would like to propose to use that point whenever possible.
>  
>  
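
A sketch of what "center of mass" means here (illustrative only, not the attached patch; {{points}} is assumed to hold the polygon's 3D unit-vector vertices with public {{x}}/{{y}}/{{z}} coordinates):
{code}
double cx = 0, cy = 0, cz = 0;
for (GeoPoint p : points) {
  cx += p.x; cy += p.y; cz += p.z;
}
double norm = Math.sqrt(cx * cx + cy * cy + cz * cz);   // ~0 for degenerate shapes
double tx = cx / norm, ty = cy / norm, tz = cz / norm;  // candidate testPoint on the unit sphere
{code}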



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8214) Improve selection of testPoint for GeoComplexPolygon

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405421#comment-16405421
 ] 

ASF subversion and git services commented on LUCENE-8214:
-

Commit 18e040290e7ee7e2a3cec6e0ff03ec7381c50bda in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18e0402 ]

LUCENE-8214: Update CHANGES.txt


> Improve selection of testPoint for GeoComplexPolygon
> 
>
> Key: LUCENE-8214
> URL: https://issues.apache.org/jira/browse/LUCENE-8214
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8214.patch
>
>
> I have been checking the effect of the testPoint on GeoComplexPolygon and it 
> seems performance can change quite a bit depending on the choice. 
> The results with random polygons with 20k points show that a good choice is 
> to use the center of mass of the shape. In the worst case the performance is 
> similar to what we have now, and in the best case it is twice as fast for the 
> {{within()}} and {{getRelationship()}} methods.
> Therefore I would like to propose to use that point whenever possible.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8214) Improve selection of testPoint for GeoComplexPolygon

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405420#comment-16405420
 ] 

ASF subversion and git services commented on LUCENE-8214:
-

Commit a83241184474924b63a2d21aff2cf198b907ad45 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a832411 ]

LUCENE-8214: Move message to a different place


> Improve selection of testPoint for GeoComplexPolygon
> 
>
> Key: LUCENE-8214
> URL: https://issues.apache.org/jira/browse/LUCENE-8214
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8214.patch
>
>
> I have been checking the effect of the testPoint on GeoComplexPolygon and it 
> seems performance can change quite a bit depending on the choice. 
> The results with random polygons with 20k points show that a good choice is 
> to use the center of mass of the shape. In the worst case the performance is 
> similar to what we have now, and in the best case it is twice as fast for 
> the {{within()}} and {{getRelationship()}} methods.
> Therefore I would like to propose using that point whenever possible.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8214) Improve selection of testPoint for GeoComplexPolygon

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405419#comment-16405419
 ] 

ASF subversion and git services commented on LUCENE-8214:
-

Commit 9b4b7c6bbed76b76046d5216a10283c3da658c97 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b4b7c6 ]

LUCENE-8214: Update CHANGES.txt


> Improve selection of testPoint for GeoComplexPolygon
> 
>
> Key: LUCENE-8214
> URL: https://issues.apache.org/jira/browse/LUCENE-8214
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8214.patch
>
>
> I have been checking the effect of the testPoint on GeoComplexPolygon and it 
> seems performance can change quite a bit depending on the choice. 
> The results with random polygons with 20k points show that a good choice is 
> to use the center of mass of the shape. In the worst case the performance is 
> similar to what we have now, and in the best case it is twice as fast for 
> the {{within()}} and {{getRelationship()}} methods.
> Therefore I would like to propose using that point whenever possible.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12122) nodes expression should support multiValued walk target

2018-03-19 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12122:

Description: 
The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
pair of Solr fields of the form {{traversalFrom->traversalTo}}.  It assumes 
that they are *not* multiValued.  It _appears_ not difficult to add multiValued 
support to traversalTo; that's what this issue is about.

See 
http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html

Note: {{gatherNodes}} appears to be the older name, which is still supported; 
it's more commonly known as {{nodes}}.  graph-traversal.adoc documents it.

  was:
The {{gatherNodes}} streaming expression has a {{walk}} argument that 
articulates a pair of Solr fields of the form {{traversalFrom->traversalTo}}.  
It assumed that they are *not* multiValued.  It _appears_ not difficult to add 
multiValued support to traversalTo; that's what this issue is about.

See 
http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html

Summary: nodes expression should support multiValued walk target  (was: 
gatherNodes expression should support multiValued walk target)

> nodes expression should support multiValued walk target
> ---
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{nodes}} streaming expression has a {{walk}} argument that articulates a 
> pair of Solr fields of the form {{traversalFrom->traversalTo}}.  It assumes 
> that they are *not* multiValued.  It _appears_ not difficult to add 
> multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html
> Note: {{gatherNodes}} appears to be the older name, which is still supported; 
> it's more commonly known as {{nodes}}.  graph-traversal.adoc documents it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 511 - Still Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/511/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=30481650

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=30481650
at 
__randomizedtesting.SeedInfo.seed([A4990CECE2AE19C0:9CF57FC9767EBB86]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 1790 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\temp\junit4-J0-20180319_18

[jira] [Updated] (LUCENE-8215) Fix several fragile exception handling places in o.a.l.index

2018-03-19 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8215:

Attachment: LUCENE-8215.patch

>  Fix several fragile exception handling places in o.a.l.index
> -
>
> Key: LUCENE-8215
> URL: https://issues.apache.org/jira/browse/LUCENE-8215
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8215.patch, LUCENE-8215.patch, LUCENE-8215.patch
>
>
> Several places in the index package don't handle exceptions well or ignore 
> them. This change adds some utility methods and cuts over to 
> try-with-resources blocks to simplify exception handling.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8215) Fix several fragile exception handling places in o.a.l.index

2018-03-19 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405385#comment-16405385
 ] 

Simon Willnauer commented on LUCENE-8215:
-

[~dweiss] good feedback. I changed the name and added some more javadocs. 
thanks for looking at it!

>  Fix several fragile exception handling places in o.a.l.index
> -
>
> Key: LUCENE-8215
> URL: https://issues.apache.org/jira/browse/LUCENE-8215
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8215.patch, LUCENE-8215.patch, LUCENE-8215.patch
>
>
> Several places in the index package don't handle exceptions well or ignore 
> them. This change adds some utility methods and cuts over to 
> try-with-resources blocks to simplify exception handling.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12122) gatherNodes expression should support multiValued walk target

2018-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405369#comment-16405369
 ] 

David Smiley commented on SOLR-12122:
-

AFAICT, in GatherNodesStream, around line 579 {{tuple.getString(traverseTo)}} can use 
getStrings(), and we can loop over each value.  It appears we should also take care 
to apply the metrics logic only once.  That's about it.  WDYT [~joel.bernstein]?

In Solr 7.3, multiValued docValue fields are finally sortable.  That used to be 
an obstacle.
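A rough sketch of the loop described above follows. This is not the actual GatherNodesStream code: {{addNode}} and {{applyMetrics}} are hypothetical helpers, and it assumes {{Tuple.getStrings()}} returns all values of the multiValued target field.

{code}
// Hypothetical sketch of the change discussed above, not GatherNodesStream
// itself: read the traversal target as a multiValued field and walk each
// value, applying the metrics logic only once per tuple.
import java.util.List;
import org.apache.solr.client.solrj.io.Tuple;

class TraversalSketch {
  void gather(Tuple tuple, String traverseTo) {
    // Assumption: getStrings() returns every value of a multiValued field.
    List<String> values = tuple.getStrings(traverseTo);
    if (values == null) {
      return; // nothing to walk from this tuple
    }
    for (String value : values) {
      addNode(value);    // hypothetical helper: record one node per value
    }
    applyMetrics(tuple); // hypothetical helper: metrics applied once per tuple
  }

  void addNode(String value) { /* ... */ }
  void applyMetrics(Tuple tuple) { /* ... */ }
}
{code}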

As an aside: I'm very disappointed in the extreme lack of comments in the 
streaming expressions module, at least judging from the 
{{org.apache.solr.client.solrj.io.graph}} package.  It makes the code especially 
hard to read for those who didn't write it.  As a random example, Strings are 
used as keys into maps/sets, but it is not documented what those strings are when the 
field name is non-obvious (e.g. {{List> graph}}  -- alrighty). 
 If I end up reviewing new code going in that is similarly under-documented, 
I'll have to throw down a -1 flag.

> gatherNodes expression should support multiValued walk target
> -
>
> Key: SOLR-12122
> URL: https://issues.apache.org/jira/browse/SOLR-12122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Priority: Major
>
> The {{gatherNodes}} streaming expression has a {{walk}} argument that 
> articulates a pair of Solr fields of the form {{traversalFrom->traversalTo}}. 
>  It assumes that they are *not* multiValued.  It _appears_ not difficult to 
> add multiValued support to traversalTo; that's what this issue is about.
> See 
> http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12123) Make SSD optimized values in ConcurrentMergeScheduler default in Solr

2018-03-19 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12123:


 Summary: Make SSD optimized values in ConcurrentMergeScheduler 
default in Solr
 Key: SOLR-12123
 URL: https://issues.apache.org/jira/browse/SOLR-12123
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shalin Shekhar Mangar
 Fix For: 7.4, master (8.0)


In SOLR-12098, [~dweiss] suggested that we make SSD-optimized values the default 
for maxThreads and maxMergeCount in ConcurrentMergeScheduler. SSDs are prevalent 
enough that it makes sense to do this, because the wrong defaults have a 
tremendous effect on indexing throughput.
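For context, this is roughly what tuning those two knobs looks like at the Lucene level. It is only a sketch: the numbers below are placeholders, not the defaults this issue proposes, and in Solr itself the equivalent values would normally be supplied through the merge scheduler configuration in solrconfig.xml.

{code}
// Sketch only: how maxMergeCount/maxThreads are set on Lucene's
// ConcurrentMergeScheduler. The numbers are placeholders, not proposed defaults.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;

class MergeSchedulerSketch {
  static IndexWriterConfig ssdTunedConfig() {
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    // Explicit placeholder values; higher thread counts generally suit SSDs.
    cms.setMaxMergesAndThreads(6, 3);
    // Alternatively, let Lucene pick its dynamic defaults for non-spinning disks:
    // cms.setDefaultMaxMergesAndThreads(/* spins = */ false);
    return new IndexWriterConfig(new StandardAnalyzer()).setMergeScheduler(cms);
  }
}
{code}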



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12122) gatherNodes expression should support multiValued walk target

2018-03-19 Thread David Smiley (JIRA)
David Smiley created SOLR-12122:
---

 Summary: gatherNodes expression should support multiValued walk 
target
 Key: SOLR-12122
 URL: https://issues.apache.org/jira/browse/SOLR-12122
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Reporter: David Smiley


The {{gatherNodes}} streaming expression has a {{walk}} argument that 
articulates a pair of Solr fields of the form {{traversalFrom->traversalTo}}.  
It assumes that they are *not* multiValued.  It _appears_ not difficult to add 
multiValued support to traversalTo; that's what this issue is about.

See 
http://lucene.472066.n3.nabble.com/Using-multi-valued-field-in-solr-cloud-Graph-Traversal-Query-td4324379.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 17 - Still Unstable

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/17/

3 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748) ,time=1}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
,time=1}
at 
__randomizedtesting.SeedInfo.seed([33AECFF5B4B33C48:BBFAF02F1A4F51B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1191)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1132)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:992)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-03-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405341#comment-16405341
 ] 

Shalin Shekhar Mangar commented on SOLR-11779:
--

We need this feature to understand cluster behaviour. This helps us build more 
sophisticated autoscaling strategies. We need historical values of only a few 
key metrics and this is not intended to replace or obviate proper metric stores 
(for which we have built extensive support through the metric reporter APIs). 
But even if we ignore that requirement, I think some amount of historical 
cluster statistics would be nice to expose as APIs and in the Solr UI without 
forcing people to set up external systems.

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing useful 
> out-of-the-box insights into basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could also be 
> presented in the Admin UI as graphs.
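To make the round-robin idea concrete, here is a minimal RRD4j sketch. The file name, metric name, step and archive settings are illustrative assumptions only, not what this issue would actually ship; the point is that the archives have a fixed number of rows, so the on-disk size stays constant no matter how long metrics are collected.

{code}
// Illustrative RRD4j sketch only -- names, step and archive sizes are assumptions.
import java.io.IOException;
import org.rrd4j.ConsolFun;
import org.rrd4j.DsType;
import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;
import org.rrd4j.core.Sample;

class RoundRobinMetricsSketch {
  static void recordOnce() throws IOException {
    RrdDef def = new RrdDef("solr-metrics.rrd", 60);            // 60-second step
    def.addDatasource("qps", DsType.GAUGE, 120, 0, Double.NaN); // one example metric
    def.addArchive(ConsolFun.AVERAGE, 0.5, 1, 60 * 24);         // 1-minute averages, one day
    def.addArchive(ConsolFun.AVERAGE, 0.5, 60, 24 * 90);        // hourly averages, ~90 days
    RrdDb db = new RrdDb(def);
    try {
      Sample sample = db.createSample();
      sample.setValue("qps", 42.0);                             // placeholder value
      sample.update();
    } finally {
      db.close();
    }
  }
}
{code}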



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12094) JsonRecordReader ignores root fields after split

2018-03-19 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405329#comment-16405329
 ] 

Dawid Weiss commented on SOLR-12094:


Thanks! I'll take a look at it next week (vacation). Here's another one you may 
want to take a look at (not related, but similar type of issue): SOLR-10012 :)

> JsonRecordReader ignores root fields after split
> 
>
> Key: SOLR-12094
> URL: https://issues.apache.org/jira/browse/SOLR-12094
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Przemysław Szeremiota
>Priority: Major
> Attachments: SOLR-12094.patch, json-record-reader-bug.patch
>
>
> JsonRecordReader, when configured with a split other than the top-level one, 
> ignores all top-level JSON nodes after the split ends, for example:
> {code}
> {
>   "first": "John",
>   "last": "Doe",
>   "grade": 8,
>   "exams": [
> {
> "subject": "Maths",
> "test": "term1",
> "marks": 90
> },
> {
> "subject": "Biology",
> "test": "term1",
> "marks": 86
> }
>   ],
>   "after": "456"
> }
> {code}
> Node "after" won't be visible in SolrInputDocument constructed from 
> /update/json/docs.
> I don't have fix, only (breaking) patch for relevant test



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-03-19 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved SOLR-12043.

Resolution: Fixed

> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.3
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.3

2018-03-19 Thread Alan Woodward
Go ahead!

> On 19 Mar 2018, at 18:33, Andrzej Białecki wrote:
> 
> Alan,
> 
> I would like to commit the change in SOLR-11407 
> (78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the 
> logic that waits for replica recovery and provides more details about any 
> failures.
> 
>> On 17 Mar 2018, at 13:01, Alan Woodward wrote:
>> 
>> I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can 
>> help debugging that if need be.
>> 
>> +1 to backport your fixes
>> 
>>> On 17 Mar 2018, at 01:42, Varun Thacker wrote:
>>> 
>>> I was going through the blockers for 7.3 and only SOLR-12070 came up. Is 
>>> the fix complete for this Andrzej?
>>> 
>>> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083 
>>> yesterday and SOLR-12063 today to master/branch_7x. Both are important 
>>> fixes for CDCR so if you are okay I can backport it to the release branch
>>> 
>>> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh wrote:
>>> Hi guys, Alan
>>> 
>>> I committed the fix for SOLR-12110 to branch_7_3
>>> 
>>> Thanks!
>>> 
>>> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh wrote:
>>> Hi Alan,
>>> 
>>> Sure the issue is marked as Blocker for 7.3.
>>> 
>>> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward wrote:
>>> Thanks Đạt, could you mark the issue as a Blocker and let me know when it’s 
>>> been resolved?
>>> 
 On 16 Mar 2018, at 02:05, Đạt Cao Mạnh wrote:
 
 Hi guys, Alan,
 
 I found a blocker issue SOLR-12110, when investigating test failure. I've 
 already uploaded a patch and beasting the tests, if the result is good I 
 will commit soon.
 
 Thanks!
  
 On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward wrote:
 Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, can 
 you give me a hand setting up the 7.3 Jenkins jobs?
 
 Thanks, Alan
 
 
> On 12 Mar 2018, at 09:32, Alan Woodward wrote:
> 
> I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes 
> and doc patches and then create a release candidate.
> 
> We’re now in feature-freeze for 7.3, so please bear in mind the following:
> No new features may be committed to the branch.
> Documentation patches, build patches and serious bug fixes may be 
> committed to the branch. However, you should submit all patches you want 
> to commit to Jira first to give others the chance to review and possibly 
> vote against the patch. Keep in mind that it is our main intention to 
> keep the branch as stable as possible.
> All patches that are intended for the branch should first be committed to 
> the unstable branch, merged into the stable branch, and then into the 
> current release branch.
> Normal unstable and stable branch development may continue as usual. 
> However, if you plan to commit a big change to the unstable branch while 
> the branch feature freeze is in effect, think twice: can't the addition 
> wait a couple more days? Merges of bug fixes into the branch may become 
> more difficult.
> Only Jira issues with Fix version “7.3" and priority "Blocker" will delay 
> a release candidate build.
> 
> 
>> On 9 Mar 2018, at 16:43, Alan Woodward wrote:
>> 
>> FYI I’m still recovering from my travels, so I’m going to create the 
>> release branch on Monday instead.
>> 
>>> On 27 Feb 2018, at 18:51, Cassandra Targett wrote:
>>> 
>>> I intend to create the Ref Guide RC as soon as the Lucene/Solr 
>>> artifacts RC is ready, so this is a great time to remind folks that if 
>>> you've got Ref Guide changes to be done, you've got a couple weeks. If 
>>> you're stuck or not sure what to do, let me know & I'm happy to help 
>>> you out.
>>> 
>>> Eventually we'd like to release both the Ref Guide and Lucene/Solr with 
>>> the same release process, so this will be a big first test to see how 
>>> ready for that we are.
>>> 
>>> On Tue, Feb 27, 2018 at 11:42 AM, Michael McCandless <luc...@mikemccandless.com> wrote:
>>> +1
>>> 
>>> Mike McCandless
>>> 
>>> http://blog.mikemccandless.com 
>>> 
>>> On Fri, Feb 23, 2018 at 4:50 AM, Alan Woodward wrote:
>>> Hi all,
>>> 
>>> It’s been a couple of months since the 7.2 release, and we’ve 
>>> accumulated some nice new features since then.  I’d like to volunteer 
>>> to be RM for a 7.3 release.
>>> 
>>> I’m travelling for the nex

[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-03-19 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405324#comment-16405324
 ] 

Otis Gospodnetic commented on SOLR-11779:
-

IMHO don't do it.  Investing in APIs and building tools around Solr that 
consume Solr metrics, events, etc. is a much better investment than keeping 
things self-contained.  A platform and the ecosystem it enables win over a tool 
that tries to do everything.

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing useful 
> out-of-the-box insights into basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could also be 
> presented in the Admin UI as graphs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8215) Fix several fragile exception handling places in o.a.l.index

2018-03-19 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405323#comment-16405323
 ] 

Dawid Weiss commented on LUCENE-8215:
-

I like it, although I'm not sure if IOUtils.close(...) is the right method 
name. I had to look at it a few times to understand what it does... it's not 
really "closing" anything, it just applies the consumer predicate to all 
arguments, suppressing any exceptions except the first one... Wouldn't it be 
better to call it something like "applyToAll" (other name suggestions welcome) 
and provide a better explanation of how exceptions are handled in the javadoc?

Not a blocker at all, but I think it'd make it easier to understand (since 
IOConsumer is not even closeable).
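For readers following along, here is a minimal sketch of the kind of helper being discussed, using Dawid's suggested name rather than whatever the final API ends up being: apply an operation to every item, rethrow the first failure, and attach any later failures as suppressed exceptions.

{code}
// Sketch only ("applyToAll" is the suggested name, not the final API): run the
// consumer on every item, rethrow the first exception, suppress the rest.
import java.io.IOException;
import java.util.Arrays;

final class ExceptionHandlingSketch {

  @FunctionalInterface
  interface IOConsumer<T> {
    void accept(T item) throws IOException;
  }

  static <T> void applyToAll(IOConsumer<T> consumer, Iterable<T> items) throws IOException {
    IOException first = null;
    for (T item : items) {
      try {
        consumer.accept(item);
      } catch (IOException e) {
        if (first == null) {
          first = e;              // remember the first failure...
        } else {
          first.addSuppressed(e); // ...and keep later ones as suppressed
        }
      }
    }
    if (first != null) {
      throw first;
    }
  }

  public static void main(String[] args) {
    try {
      // "Close" three hypothetical resources; the second one fails.
      applyToAll(name -> {
        if (name.equals("b")) throw new IOException("failed on " + name);
        System.out.println("processed " + name);
      }, Arrays.asList("a", "b", "c"));
    } catch (IOException e) {
      System.out.println("first failure: " + e.getMessage()
          + ", suppressed: " + e.getSuppressed().length);
    }
  }
}
{code}

Running the example processes "a" and "c" and then reports the failure for "b", which is the behavior being described: every argument gets a chance to run, and only the first exception propagates.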

>  Fix several fragile exception handling places in o.a.l.index
> -
>
> Key: LUCENE-8215
> URL: https://issues.apache.org/jira/browse/LUCENE-8215
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8215.patch, LUCENE-8215.patch
>
>
> Several places in the index package don't handle exceptions well or ignore 
> them. This change adds some utility methods and cuts over to 
> try-with-resources blocks to simplify exception handling.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12107) [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless documentCache is enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405316#comment-16405316
 ] 

ASF subversion and git services commented on SOLR-12107:


Commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd7e5c ]

SOLR-11891: DocStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocStreamer in order 
to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.


> [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless 
> documentCache is enabled
> -
>
> Key: SOLR-12107
> URL: https://issues.apache.org/jira/browse/SOLR-12107
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The ChildDocumentTransformer implicitly assumes the uniqueKey field will 
> always be available when transforming the doc, without explicitly requesting it 
> via {{getExtraRequestFields()}}.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
> >   {
> > "id": "1",
> > "title": "Solr adds block join support",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "2",
> > "comments": "SolrCloud supports it too!"
> >   }
> > ]
> >   },
> >   {
> > "id": "3",
> > "title": "New Lucene and Solr release is out",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "4",
> > "comments": "Lots of new features"
> >   }
> > ]
> >   }
> > ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":69}}
> $ curl 'http://localhost:8983/solr/techproducts/query?q=id:1'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "title":["Solr adds block join support"],
> "content_type":["parentDocument"],
> "_version_":1595047178033692672}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=id,%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"id,[child parentFilter=\"content_type:parentDocument\"]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "_childDocuments_":[
> {
>   "id":"2",
>   "comments":"SolrCloud supports it too!",
>   "_version_":1595047178033692672}]}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "error":{
> "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.ChildDocTransformer.transform(ChildDocTransformerFactory.java:133)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResp

[jira] [Commented] (SOLR-12107) [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless documentCache is enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405314#comment-16405314
 ] 

ASF subversion and git services commented on SOLR-12107:


Commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd7e5c ]

SOLR-11891: DocStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocStreamer in order 
to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.


> [child] doc transformer used w/o uniqueKey in 'fl' fails with NPE unless 
> documentCache is enabled
> -
>
> Key: SOLR-12107
> URL: https://issues.apache.org/jira/browse/SOLR-12107
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The ChildDocumentTransformer implicitly assumes the uniqueKey field will 
> always be available when transforming the doc, without explicitly requesting it 
> via {{getExtraRequestFields()}}.
> Because of the existing sloppy code in SOLR-11891, this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled.
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
> >   {
> > "id": "1",
> > "title": "Solr adds block join support",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "2",
> > "comments": "SolrCloud supports it too!"
> >   }
> > ]
> >   },
> >   {
> > "id": "3",
> > "title": "New Lucene and Solr release is out",
> > "content_type": "parentDocument",
> > "_childDocuments_": [
> >   {
> > "id": "4",
> > "comments": "Lots of new features"
> >   }
> > ]
> >   }
> > ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":69}}
> $ curl 'http://localhost:8983/solr/techproducts/query?q=id:1'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "title":["Solr adds block join support"],
> "content_type":["parentDocument"],
> "_version_":1595047178033692672}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=id,%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"id,[child parentFilter=\"content_type:parentDocument\"]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"1",
> "_childDocuments_":[
> {
>   "id":"2",
>   "comments":"SolrCloud supports it too!",
>   "_version_":1595047178033692672}]}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=id:1&fl=%5Bchild+parentFilter="content_type:parentDocument"%5D'
> {
>   "error":{
> "trace":"java.lang.NullPointerException\n\tat 
> org.apache.solr.response.transform.ChildDocTransformer.transform(ChildDocTransformerFactory.java:133)\n\tat
>  org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat 
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
>  
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResp

[jira] [Commented] (SOLR-12108) raw transformers ([json] and [xml]) drop the field value if wt is not a match and documentCache is not enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405317#comment-16405317
 ] 

ASF subversion and git services commented on SOLR-12108:


Commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd7e5c ]

SOLR-11891: DocStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocStreamer in order 
to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.


> raw transformers ([json] and [xml]) drop the field value if wt is not a match 
> and documentCache is not enabled
> --
>
> Key: SOLR-12108
> URL: https://issues.apache.org/jira/browse/SOLR-12108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The {{RawValueTransformerFactory}} class is supposed to treat the field value 
> as a normal string in situations where an instance is limited by the {{wt}} 
> param (which it is automatically for the default {{[json]}} and {{[xml]}} 
> transformers).
> This is currently implemented by {{RawValueTransformerFactory.create()}} 
> assuming it can just return "null" if the ResponseWriter in use doesn't match 
> - but because of how this transformer abuses the "key" to implicitly indicate 
> the field to be returned (i.e. {{my_json_fieldName:[json]}}), it means that 
> nothing about the resulting {{ReturnFields}} data structure indicates that the 
> field ({{my_json_fieldName}}) should be returned at all.
> Because of the existing sloppy code in SOLR-11891, that means this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
>   {
> "id": "1",
> "raw_s":"{\"raw\":\"json\"}" } ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":39}}
> $ curl 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":"{\"raw\":\"json\"}"}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s:%5Bjson%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s:[json]",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":{"raw":"json"}}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=xml&q=id:1&fl=raw_s:%5Bjson%5D'
> 
> 
> 
>   0
>   0
>   
> id:1
> raw_s:[json]
> xml
>   
> 
> 
>   
> 
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11891) DocsStreamer populates SolrDocument w/unnecessary fields

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405313#comment-16405313
 ] 

ASF subversion and git services commented on SOLR-11891:


Commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd7e5c ]

SOLR-11891: DocStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocStreamer in order 
to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.


> DocsStreamer populates SolrDocument w/unnecessary fields
> 
>
> Key: SOLR-11891
> URL: https://issues.apache.org/jira/browse/SOLR-11891
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 5.4, 6.4.2, 6.6.2
>Reporter: wei wang
>Assignee: Hoss Man
>Priority: Major
> Attachments: DocsStreamer.java.diff, SOLR-11891.patch, 
> SOLR-11891.patch.BAD
>
>
> We observe that Solr query time increases significantly with the number of 
> rows requested, even when all we retrieve for each document is just fl=id,score.
> I debugged a bit and saw that most of the increased time was spent in 
> BinaryResponseWriter, converting the Lucene document into a SolrDocument.  Inside 
> convertLuceneDocToSolrDoc():   
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182]
>  
> I am a bit puzzled why we need to iterate through all the fields in the 
> document. Why can’t we just iterate through the requested field list?    
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156]
>  
> e.g. when passing in the field list as 
> sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(), fnames)
> and just iterate through fnames,  there is a significant performance boost in 
> our case.  
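A hypothetical sketch of that suggestion follows (not the actual DocsStreamer code): build the SolrDocument only from the requested field names, so fields the caller never asked for are not converted at all. The schema-driven value conversion is simplified here to {{stringValue()}}.

{code}
// Hypothetical sketch of the idea above, not the actual DocsStreamer code:
// populate the SolrDocument from the requested field names only.
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;
import org.apache.solr.common.SolrDocument;

class RequestedFieldsSketch {
  static SolrDocument convert(Document luceneDoc, Set<String> requestedFieldNames) {
    SolrDocument out = new SolrDocument();
    for (String fname : requestedFieldNames) {
      // Only fields the caller asked for are ever touched.
      for (IndexableField f : luceneDoc.getFields(fname)) {
        // Real code converts by schema field type; stringValue() is a simplification.
        out.addField(fname, f.stringValue());
      }
    }
    return out;
  }
}
{code}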



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12108) raw transformers ([json] and [xml]) drop the field value if wt is not a match and documentCache is not enabled

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405315#comment-16405315
 ] 

ASF subversion and git services commented on SOLR-12108:


Commit 8bd7e5c9d254c1d629a784e0b601885adea2f57b in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd7e5c ]

SOLR-11891: DocStreamer now respects the ReturnFields when populating a 
SolrDocument.
This is an optimization that reduces the number of unnecessary fields a 
ResponseWriter will see if the documentCache is used.

This commit also includes fixes for SOLR-12107 & SOLR-12108 -- two bugs that 
were previously dependent on the un-optimized behavior of DocStreamer in order 
to function properly.

- SOLR-12107: Fixed an error in the [child] transformer that could occur if 
  the documentCache was not used.
- SOLR-12108: Fixed the fallback behavior of the [raw] and [xml] transformers 
  when an incompatible 'wt' was specified; the field value was lost if the 
  documentCache was not used.


> raw transformers ([json] and [xml]) drop the field value if wt is not a match 
> and documentCache is not enabled
> --
>
> Key: SOLR-12108
> URL: https://issues.apache.org/jira/browse/SOLR-12108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> discovered this while working on SOLR-11891...
> The {{RawValueTransformerFactory}} class is supposed to treat the field value 
> as a normal string in situations where an instance is limited by the {{wt}} 
> param (which it is automatically for the default {{[json]}} and {{[xml]}} 
> transformers).
> This is currently implemented by {{RawValueTransformerFactory.create()}} 
> assuming it can just return "null" if the ResponseWriter in use doesn't match 
> - but because of how this transformer abuses the "key" to implicitly indicate 
> the field to be returned (i.e. {{my_json_fieldName:[json]}}), it means that 
> nothing about the resulting {{ReturnFields}} data structure indicates that the 
> field ({{my_json_fieldName}}) should be returned at all.
> Because of the existing sloppy code in SOLR-11891, that means this bug in 
> ChildDocumentTransformer only impacts current users if the documentCache is 
> disabled
> 
> Example steps to reproduce w/techproducts config assuming {{solrconfig.xml}} 
> is edited to disable documentCache...
> {noformat}
> $ curl 'http://localhost:8983/solr/techproducts/update?commit=true' -H 
> 'Content-Type: application/json' --data-binary '[
>   {
> "id": "1",
> "raw_s":"{\"raw\":\"json\"}" } ]'
> {
>   "responseHeader":{
> "status":0,
> "QTime":39}}
> $ curl 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":"{\"raw\":\"json\"}"}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=json&q=id:1&fl=raw_s:%5Bjson%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:1",
>   "fl":"raw_s:[json]",
>   "wt":"json"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "raw_s":{"raw":"json"}}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?wt=xml&q=id:1&fl=raw_s:%5Bjson%5D'
> 
> 
> 
>   0
>   0
>   
> id:1
> raw_s:[json]
> xml
>   
> 
> 
>   
> 
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6425) Move extractTerms to Weight

2018-03-19 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405244#comment-16405244
 ] 

Dean Gurvitz commented on LUCENE-6425:
--

I was wondering how an explicit API change took place in a minor Lucene 
version, with no deprecation warnings or anything of that kind coming first. I 
recently upgraded a minor version of Lucene and was very surprised when things 
stopped compiling.

Plus, it seems to me that the solution offered by Adrien for getting 
non-index-dependent terms is very messy and inelegant compared to the previous 
situation. Is there no way to change this?

> Move extractTerms to Weight
> ---
>
> Key: LUCENE-6425
> URL: https://issues.apache.org/jira/browse/LUCENE-6425
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6425.patch, LUCENE-6425.patch
>
>
> Today we have extractTerms on Query, but it is supposed to only be called 
> after the query has been specialized to a given IndexReader using 
> Query.rewrite(IndexReader) to allow some complex queries to replace term 
> "matchers" with actual terms (e.g. WildcardQuery).
> However, we already have an abstraction for IndexReader-specialized queries: 
> Weight. So I think it would make more sense to have extractTerms on Weight. 
> This would also remove the trap of calling extractTerms on a query that has 
> not been rewritten yet.
> Since Weights know about whether scores are needed or not, I also hope this 
> would help improve the extractTerms semantics. We currently have two use cases 
> for extractTerms: distributed IDF and highlighting. While the former only 
> cares about terms that are used for scoring, it could make sense to 
> highlight terms that were used for matching, even if they did not contribute 
> to the score (e.g. if wrapped in a ConstantScoreQuery or a BooleanQuery FILTER 
> clause). So highlighters could do searcher.createNormalizedWeight(query, 
> false).extractTerms(termSet) to get all terms that were used for matching the 
> query while distributed IDF would instead do 
> searcher.createNormalizedWeight(query, true).extractTerms(termSet) to get 
> scoring terms only.
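For reference, a small sketch of the two call patterns described above, written against the API referenced in the description; the reader and query are assumed to come from elsewhere.

{code}
// Sketch of the two extractTerms call patterns described above: matching terms
// (e.g. for highlighting) vs. scoring terms (e.g. for distributed IDF).
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Weight;

class ExtractTermsSketch {
  // All terms used for matching, including constant-score / FILTER clauses.
  static Set<Term> matchingTerms(IndexReader reader, Query query) throws IOException {
    IndexSearcher searcher = new IndexSearcher(reader);
    Set<Term> terms = new HashSet<>();
    Weight weight = searcher.createNormalizedWeight(query, false); // needsScores = false
    weight.extractTerms(terms);
    return terms;
  }

  // Only the terms that contribute to the score.
  static Set<Term> scoringTerms(IndexReader reader, Query query) throws IOException {
    IndexSearcher searcher = new IndexSearcher(reader);
    Set<Term> terms = new HashSet<>();
    Weight weight = searcher.createNormalizedWeight(query, true);  // needsScores = true
    weight.extractTerms(terms);
    return terms;
  }
}
{code}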



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-03-19 Thread Erick Erickson
Andrzej:

Do you want to un-BadApple the test too? Or is that premature? I
collect BadApple=true failures in a separate folder, so I can easily
check end-of-week if there are any failures for this test and
un-badapple it if not. I'd do this during my usual Saturday BadApple
work. Let me know.

Erick

On Mon, Mar 19, 2018 at 11:36 AM, Alan Woodward (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405236#comment-16405236
>  ]
>
> Alan Woodward commented on SOLR-11407:
> --
>
> Do you want to backport the latest commits?
>
>> AutoscalingHistoryHandlerTest fails frequently
>> --
>>
>> Key: SOLR-11407
>> URL: https://issues.apache.org/jira/browse/SOLR-11407
>> Project: Solr
>>  Issue Type: Bug
>>  Security Level: Public(Default Security Level. Issues are Public)
>>  Components: AutoScaling
>>Reporter: Andrzej Bialecki
>>Assignee: Andrzej Bialecki
>>Priority: Blocker
>> Fix For: 7.3, master (8.0)
>>
>> Attachments: tests-failures.txt
>>
>>
>> This test fails frequently on jenkins with a failed assertion (see also 
>> SOLR-11378 for other failure mode):
>> {code}
>>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
>> was:<6>
>>[junit4]>  at 
>> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>>[junit4]>  at 
>> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>> {code}
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-03-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405236#comment-16405236
 ] 

Alan Woodward commented on SOLR-11407:
--

Do you want to backport the latest commits?

> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.3, master (8.0)
>
> Attachments: tests-failures.txt
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.3

2018-03-19 Thread Andrzej Białecki
Alan,

I would like to commit the change in SOLR-11407 
(78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the logic 
that waits for replica recovery and provides more details about any failures.

> On 17 Mar 2018, at 13:01, Alan Woodward  wrote:
> 
> I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can help 
> debugging that if need be.
> 
> +1 to backport your fixes
> 
>> On 17 Mar 2018, at 01:42, Varun Thacker wrote:
>> 
>> I was going through the blockers for 7.3 and only SOLR-12070 came up. Is the 
>> fix complete for this Andrzej?
>> 
>> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083 yesterday 
>> and SOLR-12063 today to master/branch_7x. Both are important fixes for CDCR 
>> so if you are okay I can backport it to the release branch
>> 
>> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh wrote:
>> Hi guys, Alan
>> 
>> I committed the fix for SOLR-12110 to branch_7_3
>> 
>> Thanks!
>> 
>> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh wrote:
>> Hi Alan,
>> 
>> Sure the issue is marked as Blocker for 7.3.
>> 
>> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward wrote:
>> Thanks Đạt, could you mark the issue as a Blocker and let me know when it’s 
>> been resolved?
>> 
>>> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh wrote:
>>> 
>>> Hi guys, Alan,
>>> 
>>> I found a blocker issue SOLR-12110, when investigating test failure. I've 
>>> already uploaded a patch and beasting the tests, if the result is good I 
>>> will commit soon.
>>> 
>>> Thanks!
>>>  
>>> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward wrote:
>>> Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, can 
>>> you give me a hand setting up the 7.3 Jenkins jobs?
>>> 
>>> Thanks, Alan
>>> 
>>> 
 On 12 Mar 2018, at 09:32, Alan Woodward wrote:
 
 I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes 
 and doc patches and then create a release candidate.
 
 We’re now in feature-freeze for 7.3, so please bear in mind the following:
 - No new features may be committed to the branch.
 - Documentation patches, build patches and serious bug fixes may be 
   committed to the branch. However, you should submit all patches you want 
   to commit to Jira first to give others the chance to review and possibly 
   vote against the patch. Keep in mind that it is our main intention to keep 
   the branch as stable as possible.
 - All patches that are intended for the branch should first be committed to 
   the unstable branch, merged into the stable branch, and then into the 
   current release branch.
 - Normal unstable and stable branch development may continue as usual. 
   However, if you plan to commit a big change to the unstable branch while 
   the branch feature freeze is in effect, think twice: can't the addition 
   wait a couple more days? Merges of bug fixes into the branch may become 
   more difficult.
 - Only Jira issues with Fix version "7.3" and priority "Blocker" will delay 
   a release candidate build.
 
 
> On 9 Mar 2018, at 16:43, Alan Woodward wrote:
> 
> FYI I’m still recovering from my travels, so I’m going to create the 
> release branch on Monday instead.
> 
>> On 27 Feb 2018, at 18:51, Cassandra Targett wrote:
>> 
>> I intend to create the Ref Guide RC as soon as the Lucene/Solr artifacts 
>> RC is ready, so this is a great time to remind folks that if you've got 
>> Ref Guide changes to be done, you've got a couple weeks. If you're stuck 
>> or not sure what to do, let me know & I'm happy to help you out.
>> 
>> Eventually we'd like to release both the Ref Guide and Lucene/Solr with 
>> the same release process, so this will be a big first test to see how 
>> ready for that we are.
>> 
>> On Tue, Feb 27, 2018 at 11:42 AM, Michael McCandless 
>> <luc...@mikemccandless.com> wrote:
>> +1
>> 
>> Mike McCandless
>> 
>> http://blog.mikemccandless.com
>> On Fri, Feb 23, 2018 at 4:50 AM, Alan Woodward wrote:
>> Hi all,
>> 
>> It’s been a couple of months since the 7.2 release, and we’ve 
>> accumulated some nice new features since then.  I’d like to volunteer to 
>> be RM for a 7.3 release.
>> 
>> I’m travelling for the next couple of weeks, so I would plan to create 
>> the release branch two weeks today, on the 9th March (unless anybody 
>> else wants to do it sooner, of course :)
>> 
>> - Alan
>> --

[jira] [Comment Edited] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-03-19 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405232#comment-16405232
 ] 

Andrzej Bialecki  edited comment on SOLR-11407 at 3/19/18 6:29 PM:
---

[~romseygeek] I noticed that the log that you attached came from the version of 
the code prior to the commits above (because it doesn't show the DocCollection 
in the log message), which is also unfortunately the version on branch_7_3. The 
current version of the code ignores all replicas that are located on the down 
nodes so it wouldn't have produced this failure due to down replicas located on 
down nodes.


was (Author: ab):
[~romseygeek] I noticed that the log that you attached came from the version of 
the code prior to the commits above (because it doesn't show the DocCollection 
in the log message). This version of the code also ignores all replicas that 
are located on the down nodes so it wouldn't have produced this failure due to 
down replicas located on down nodes.

> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.3, master (8.0)
>
> Attachments: tests-failures.txt
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-03-19 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405232#comment-16405232
 ] 

Andrzej Bialecki  commented on SOLR-11407:
--

[~romseygeek] I noticed that the log that you attached came from the version of 
the code prior to the commits above (because it doesn't show the DocCollection 
in the log message). This version of the code also ignores all replicas that 
are located on the down nodes so it wouldn't have produced this failure due to 
down replicas located on down nodes.

> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.3, master (8.0)
>
> Attachments: tests-failures.txt
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.3 - Build # 19 - Unstable

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/19/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([B0FA6540DBFF320C:D33153C242304121]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12270 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
   [junit4]   2> 175950 INFO  
(SUITE-SearchRateTriggerTest-se

[jira] [Commented] (SOLR-12091) Rename TimeSource.getTime to getTimeNs

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405176#comment-16405176
 ] 

ASF subversion and git services commented on SOLR-12091:


Commit 87c7f3a265b39777f456790953e975cbc3b36291 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=87c7f3a ]

SOLR-12091: Sync the CHANGES.txt section with the one on master.


> Rename TimeSource.getTime to getTimeNs
> --
>
> Key: SOLR-12091
> URL: https://issues.apache.org/jira/browse/SOLR-12091
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12091.patch
>
>
> Spinoff from SOLR-11670. {{TimeSource.getTime()}} and {{getEpochTime}} return 
> values in nanoseconds, which may be confusing. As suggested by [~dsmiley] 
> renaming them to {{getTimeNs}} and {{getEpochTimeNs}} would reduce the 
> likelihood of using their values in situations where milliseconds are 
> expected.
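
As a hedged illustration of the unit mistake the rename guards against (the 
{{TimeSource}} package and the surrounding setup are assumptions for the sketch):

{code:java}
// Sketch only; assumes org.apache.solr.common.util.TimeSource (package assumed)
// and a pre-existing TimeSource instance.
import java.util.concurrent.TimeUnit;
import org.apache.solr.common.util.TimeSource;

public class TimeSourceSketch {
  static long elapsedMs(TimeSource time, Runnable work) {
    long startNs = time.getTimeNs();   // clearly nanoseconds after the rename
    work.run();
    long elapsedNs = time.getTimeNs() - startNs;
    // Easy to forget this conversion when the method was simply called getTime():
    return TimeUnit.NANOSECONDS.toMillis(elapsedNs);
  }
}
{code}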



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-12118.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405163#comment-16405163
 ] 

ASF subversion and git services commented on SOLR-12118:


Commit 9c1f55b32a53be3b3b7cbf42c9799fa4360ad01a in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c1f55b ]

SOLR-12118: Solr Ref-Guide can now use some ivy version props directly as 
attributes in content

(cherry picked from commit d6ed71b5c4777db1847ae18f11855d853f511f40)

Conflicts:
solr/CHANGES.txt


> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405141#comment-16405141
 ] 

ASF subversion and git services commented on SOLR-12118:


Commit d6ed71b5c4777db1847ae18f11855d853f511f40 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6ed71b ]

SOLR-12118: Solr Ref-Guide can now use some ivy version props directly as 
attributes in content


> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1744 - Unstable!

2018-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1744/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
Did not expect the listener to fire on first run!

Stack Trace:
java.lang.AssertionError: Did not expect the listener to fire on first run!
at 
__randomizedtesting.SeedInfo.seed([5AA39BF6FB4563EC:3968AD74628A10C1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.lambda$new$0(ScheduledTriggerTest.java:48)
at 
org.apache.solr.cloud.autoscaling.ScheduledTrigger.run(ScheduledTrigger.java:191)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:102)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.r

[jira] [Commented] (SOLR-12096) Inconsistent response format in subquery transform

2018-03-19 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405134#comment-16405134
 ] 

Munendra S N commented on SOLR-12096:
-

 [^SOLR-12096.patch] 
Updating the Patch with a test

> Inconsistent response format in subquery transform
> --
>
> Key: SOLR-12096
> URL: https://issues.apache.org/jira/browse/SOLR-12096
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12096.patch, SOLR-12096.patch, SOLR-12096.patch
>
>
> Solr version - 6.6.2
> The response of subquery transform is inconsistent with multi-shard compared 
> to single-shard
> h1. Single Shard collection
> Request 
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response for above request
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 0,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 1,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10001677",
> "score": 0.5,
> "_children_": {
> "numFound": 9,
> "start": 0,
> "docs": [
> {
> "uniqueId": "100016771",
> "score": 0.5
> },
> {
> "uniqueId": "100016772",
> "score": 0.5
> },
> {
> "uniqueId": "100016773",
> "score": 0.5
> }
> ]
> }
> }
> ]
> }
> }
> {code}
> Here, the *_children_* subquery response is as expected (based on the documentation)
> h1. Multi Shard collection(2)
> Request
> {code:java}
> localhost:8983/solr/k_test_2/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 11,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 5,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10006197",
> "_children_": [
> {
> "uniqueId": "100061971",
>

[jira] [Updated] (SOLR-12096) Inconsistent response format in subquery transform

2018-03-19 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12096:

Attachment: SOLR-12096.patch

> Inconsistent response format in subquery transform
> --
>
> Key: SOLR-12096
> URL: https://issues.apache.org/jira/browse/SOLR-12096
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12096.patch, SOLR-12096.patch, SOLR-12096.patch
>
>
> Solr version - 6.6.2
> The response of subquery transform is inconsistent with multi-shard compared 
> to single-shard
> h1. Single Shard collection
> Request 
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response for above request
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 0,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 1,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10001677",
> "score": 0.5,
> "_children_": {
> "numFound": 9,
> "start": 0,
> "docs": [
> {
> "uniqueId": "100016771",
> "score": 0.5
> },
> {
> "uniqueId": "100016772",
> "score": 0.5
> },
> {
> "uniqueId": "100016773",
> "score": 0.5
> }
> ]
> }
> }
> ]
> }
> }
> {code}
> Here, the *_children_* subquery response is as expected (based on the documentation)
> h1. Multi Shard collection(2)
> Request
> {code:java}
> localhost:8983/solr/k_test_2/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 11,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 5,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10006197",
> "_children_": [
> {
> "uniqueId": "100061971",
> "score": 0.5
> },
> {
>  

[jira] [Commented] (LUCENE-8214) Improve selection of testPoint for GeoComplexPolygon

2018-03-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405126#comment-16405126
 ] 

David Smiley commented on LUCENE-8214:
--

Curious; why is Karl committing Ignacio's code?

BTW CHANGES.txt change is missing

> Improve selection of testPoint for GeoComplexPolygon
> 
>
> Key: LUCENE-8214
> URL: https://issues.apache.org/jira/browse/LUCENE-8214
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
>
> Attachments: LUCENE-8214.patch
>
>
> I have been checking the effect of the testPoint on GeoComplexPolygon and it 
> seems performance can change quite a bit depending on the choice. 
> The results with random polygons with 20k points show that a good choice is 
> to use the center of mass of the shape. In the worst case the performance is 
> similar to what we have now, and in the best case it is twice as fast for the 
> {{within()}} and {{getRelationship()}} methods.
> Therefore I would like to propose to use that point whenever possible.
>  
>  
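
As a rough illustration of the idea (not the attached patch), a center-of-mass 
candidate for a polygon on the unit sphere can be computed by averaging the vertex 
vectors and renormalizing; a self-contained sketch with plain arrays:

{code:java}
public class CentroidSketch {
  // points[i] = {x, y, z}, each on the unit sphere; assumes the vertices do not
  // cancel out exactly. Returns the normalized mean vector, a simple
  // "center of mass" candidate for use as the testPoint of a complex polygon.
  static double[] centerOfMass(double[][] points) {
    double sx = 0, sy = 0, sz = 0;
    for (double[] p : points) {
      sx += p[0]; sy += p[1]; sz += p[2];
    }
    double norm = Math.sqrt(sx * sx + sy * sy + sz * sz);
    return new double[] {sx / norm, sy / norm, sz / norm};
  }
}
{code}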



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12020) terms faceting on date field fails in distributed refinement

2018-03-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405067#comment-16405067
 ] 

Yonik Seeley commented on SOLR-12020:
-

bq. Maybe JSONWriter.write should know about Date & Instant, so that we don't 
have to play whack-a-mole with special-casing downstream?

That will require a new noggit release, but yeah, I agree Date should be handled 
"correctly" (non-locale-dependent) by default.

> terms faceting on date field fails in distributed refinement
> 
>
> Key: SOLR-12020
> URL: https://issues.apache.org/jira/browse/SOLR-12020
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-12020.patch, SOLR-12020.patch
>
>
> This appears to be a regression, as the reporter indicates that Solr 6.2 
> worked and Solr 6.6 does not. 
> http://markmail.org/message/hwlajuy5jnmf4yd6
> I've reproduced the issue on the master branch (future v8) as well.
> A typical exception that results from a terms facet on a date field is:
> {code}
> org.apache.solr.common.SolrException: Invalid Date String:'Sat Feb 03 
> 01:02:03 WET 2001'
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
>   at 
> org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
>   at 
> org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineBucket(FacetFieldProcessor.java:683)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.refineFacets(FacetFieldProcessor.java:638)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:66)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:58)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12096) Inconsistent response format in subquery transform

2018-03-19 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12096:

Attachment: SOLR-12096.patch

> Inconsistent response format in subquery transform
> --
>
> Key: SOLR-12096
> URL: https://issues.apache.org/jira/browse/SOLR-12096
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12096.patch, SOLR-12096.patch
>
>
> Solr version - 6.6.2
> The response of subquery transform is inconsistent with multi-shard compared 
> to single-shard
> h1. Single Shard collection
> Request 
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response for above request
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 0,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 1,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10001677",
> "score": 0.5,
> "_children_": {
> "numFound": 9,
> "start": 0,
> "docs": [
> {
> "uniqueId": "100016771",
> "score": 0.5
> },
> {
> "uniqueId": "100016772",
> "score": 0.5
> },
> {
> "uniqueId": "100016773",
> "score": 0.5
> }
> ]
> }
> }
> ]
> }
> }
> {code}
> Here, the *_children_* subquery response is as expected (based on the documentation)
> h1. Multi Shard collection(2)
> Request
> {code:java}
> localhost:8983/solr/k_test_2/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 11,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 5,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10006197",
> "_children_": [
> {
> "uniqueId": "100061971",
> "score": 0.5
> },
> {
> "un

[jira] [Updated] (LUCENE-8190) Replace dependency on LegacyCell for setting pruneLeafyBranches on RecursivePrefixTreeStrategy

2018-03-19 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8190:
-
Fix Version/s: master (8.0)
   7.3

> Replace dependency on LegacyCell for setting pruneLeafyBranches on 
> RecursivePrefixTreeStrategy
> --
>
> Key: LUCENE-8190
> URL: https://issues.apache.org/jira/browse/LUCENE-8190
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: LUCENE-8190.patch, LUCENE-8190.patch
>
>
> The setting {{pruneLeafyBranches}} on {{RecursivePrefixTreeStrategy}} depends 
> on the abstract class {{LegacyCell}}, and therefore trees like the newly added 
> {{S2PrefixTree}} cannot benefit from this optimization.
> It is proposed to add a new specialized interface for {{Cell}} and make the 
> setting depend on it instead of {{LegacyCell}}.
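
A minimal sketch of the proposed shape; the interface and method names below are 
hypothetical, not the ones in the attached patch:

{code:java}
// Hypothetical names, for illustration only.
interface PrunableCell {
  // Whether this cell's tree supports pruning of leafy branches.
  boolean canPruneLeafyBranches();
}

// RecursivePrefixTreeStrategy-style check: depend on a capability
// interface rather than on the LegacyCell implementation class.
class PruneCheckSketch {
  static boolean shouldPrune(Object cell, boolean pruneLeafyBranches) {
    return pruneLeafyBranches
        && cell instanceof PrunableCell
        && ((PrunableCell) cell).canPruneLeafyBranches();
  }
}
{code}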



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8126) Spatial prefix tree based on S2 geometry

2018-03-19 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8126:
-
Fix Version/s: master (8.0)

> Spatial prefix tree based on S2 geometry
> 
>
> Key: LUCENE-8126
> URL: https://issues.apache.org/jira/browse/LUCENE-8126
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SPT-cell.pdf, SPT-query.jpeg
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hi [~dsmiley],
> I have been working on a prefix tree based on Google S2 geometry 
> (https://s2geometry.io/) to be used mainly with Geo3d shapes, with very 
> promising results, in particular for complex shapes (e.g. polygons). Using 
> this pixelization scheme reduces the size of the index, improves the 
> performance of the queries and reduces the loading time for non-point shapes. 
> If you are ok with this contribution, and before providing any code, I would 
> like to understand what is the correct/preferred approach:
> 1) Add a new dependency on the S2 library 
> (https://mvnrepository.com/artifact/io.sgr/s2-geometry-library-java). It has 
> an Apache 2.0 license, so it should be ok.
> 2) Create a utility class with all methods necessary to navigate the S2 tree 
> and create shapes from S2 cells (basically port what we need from the library 
> into Lucene).
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8126) Spatial prefix tree based on S2 geometry

2018-03-19 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8126:
-
Fix Version/s: 7.3

> Spatial prefix tree based on S2 geometry
> 
>
> Key: LUCENE-8126
> URL: https://issues.apache.org/jira/browse/LUCENE-8126
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Fix For: 7.3
>
> Attachments: SPT-cell.pdf, SPT-query.jpeg
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hi [~dsmiley],
> I have been working on a prefix tree based on Google S2 geometry 
> (https://s2geometry.io/) to be used mainly with Geo3d shapes, with very 
> promising results, in particular for complex shapes (e.g. polygons). Using 
> this pixelization scheme reduces the size of the index, improves the 
> performance of the queries and reduces the loading time for non-point shapes. 
> If you are ok with this contribution, and before providing any code, I would 
> like to understand what is the correct/preferred approach:
> 1) Add a new dependency on the S2 library 
> (https://mvnrepository.com/artifact/io.sgr/s2-geometry-library-java). It has 
> an Apache 2.0 license, so it should be ok.
> 2) Create a utility class with all methods necessary to navigate the S2 tree 
> and create shapes from S2 cells (basically port what we need from the library 
> into Lucene).
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8126) Spatial prefix tree based on S2 geometry

2018-03-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405005#comment-16405005
 ] 

Adrien Grand commented on LUCENE-8126:
--

[~ivera] Can you set the "Fix Version/s" on this issue?

> Spatial prefix tree based on S2 geometry
> 
>
> Key: LUCENE-8126
> URL: https://issues.apache.org/jira/browse/LUCENE-8126
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: SPT-cell.pdf, SPT-query.jpeg
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hi [~dsmiley],
> I have been working on a prefix tree based on Google S2 geometry 
> (https://s2geometry.io/) to be used mainly with Geo3d shapes, with very 
> promising results, in particular for complex shapes (e.g. polygons). Using 
> this pixelization scheme reduces the size of the index, improves the 
> performance of the queries and reduces the loading time for non-point shapes. 
> If you are ok with this contribution, and before providing any code, I would 
> like to understand what is the correct/preferred approach:
> 1) Add a new dependency on the S2 library 
> (https://mvnrepository.com/artifact/io.sgr/s2-geometry-library-java). It has 
> an Apache 2.0 license, so it should be ok.
> 2) Create a utility class with all methods necessary to navigate the S2 tree 
> and create shapes from S2 cells (basically port what we need from the library 
> into Lucene).
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8215) Fix several fragile exception handling places in o.a.l.index

2018-03-19 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8215:

Attachment: LUCENE-8215.patch

>  Fix several fragile exception handling places in o.a.l.index
> -
>
> Key: LUCENE-8215
> URL: https://issues.apache.org/jira/browse/LUCENE-8215
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8215.patch, LUCENE-8215.patch
>
>
> Several places in the index package don't handle exceptions well or ignore 
> them. This change adds some utility methods and cuts over to 
> try-with-resources blocks to simplify exception handling.
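
For context, a generic sketch of the pattern being adopted (illustrative only, not 
code from the patch):

{code:java}
import java.io.Closeable;
import java.io.IOException;

public class TryWithSketch {
  // Before: manual cleanup that can skip resources and mask the primary exception.
  static void fragile(Closeable a, Closeable b) throws IOException {
    try {
      // ... work with a and b ...
    } finally {
      a.close();   // if this throws, b is never closed
      b.close();
    }
  }

  // After: try-with-resources closes both resources and attaches any close()
  // failures as suppressed exceptions on the primary one.
  static void robust(Closeable a, Closeable b) throws IOException {
    try (Closeable ca = a; Closeable cb = b) {
      // ... work with a and b ...
    }
  }
}
{code}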



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.3 - Build # 4 - Failure

2018-03-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.3/4/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)  
at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:76) 
 at org.apache.solr.update.HdfsUpdateLog.ensureLog(HdfsUpdateLog.java:316)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:534)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:519)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:352)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:271)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:950)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1163)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:633)
  at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)  
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
  at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
  at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)  
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:455)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:530)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347) 

[jira] [Commented] (SOLR-12096) Inconsistent response format in subquery transform

2018-03-19 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404941#comment-16404941
 ] 

Munendra S N commented on SOLR-12096:
-

[^SOLR-12096.patch] 
 On further debugging,
 * I found that the response format is correct in the case of *wt=xml*. The 
issue was with *JsonResponseWriter* (*wt=json*): in the writer, 
*SolrDocumentList* was being written like a normal *List*.
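
A hedged sketch of the general idea (not the actual patch, and not Solr's real writer 
internals): a {{SolrDocumentList}} needs its {{numFound}}/{{start}}/{{docs}} envelope 
even when nested inside a document, rather than being written as a plain list:

{code:java}
import java.util.List;
import org.apache.solr.common.SolrDocumentList;

public class WriteDocListSketch {
  // Illustrative only: dispatch on SolrDocumentList before the generic List case,
  // so nested subquery results keep their numFound/start/docs envelope in JSON.
  static void writeValue(StringBuilder out, Object val) {
    if (val instanceof SolrDocumentList) {
      SolrDocumentList docs = (SolrDocumentList) val;
      out.append("{\"numFound\":").append(docs.getNumFound())
         .append(",\"start\":").append(docs.getStart())
         .append(",\"docs\":[ /* each document */ ]}");
    } else if (val instanceof List) {
      out.append("[ /* each element */ ]");
    } // ... other types ...
  }
}
{code}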


> Inconsistent response format in subquery transform
> --
>
> Key: SOLR-12096
> URL: https://issues.apache.org/jira/browse/SOLR-12096
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12096.patch
>
>
> Solr version - 6.6.2
> The response of subquery transform is inconsistent with multi-shard compared 
> to single-shard
> h1. Single Shard collection
> Request 
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response for above request
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 0,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 1,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10001677",
> "score": 0.5,
> "_children_": {
> "numFound": 9,
> "start": 0,
> "docs": [
> {
> "uniqueId": "100016771",
> "score": 0.5
> },
> {
> "uniqueId": "100016772",
> "score": 0.5
> },
> {
> "uniqueId": "100016773",
> "score": 0.5
> }
> ]
> }
> }
> ]
> }
> }
> {code}
> Here, the *_children_* subquery response is as expected (based on the documentation)
> h1. Multi Shard collection(2)
> Request
> {code:java}
> localhost:8983/solr/k_test_2/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 11,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 5,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId"

[jira] [Updated] (SOLR-12096) Inconsistent response format in subquery transform

2018-03-19 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12096:

Attachment: SOLR-12096.patch

> Inconsistent response format in subquery transform
> --
>
> Key: SOLR-12096
> URL: https://issues.apache.org/jira/browse/SOLR-12096
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-12096.patch
>
>
> Solr version - 6.6.2
> The response of subquery transform is inconsistent with multi-shard compared 
> to single-shard
> h1. Single Shard collection
> Request 
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response for above request
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 0,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 1,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10001677",
> "score": 0.5,
> "_children_": {
> "numFound": 9,
> "start": 0,
> "docs": [
> {
> "uniqueId": "100016771",
> "score": 0.5
> },
> {
> "uniqueId": "100016772",
> "score": 0.5
> },
> {
> "uniqueId": "100016773",
> "score": 0.5
> }
> ]
> }
> }
> ]
> }
> }
> {code}
> Here, the *_children_* subquery response is as expected (based on the documentation)
> h1. Multi Shard collection(2)
> Request
> {code:java}
> localhost:8983/solr/k_test_2/search?sort=score desc,uniqueId 
> desc&q.op=AND&wt=json&q={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})&facet=false&fl=uniqueId&fl=score&fl=_children_:[subquery]&fl=uniqueId&origQuery=false&qf=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3&spellcheck=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}&rows=1
> {code}
> Response
> {code:json}
> {
> "responseHeader": {
> "zkConnected": true,
> "status": 0,
> "QTime": 11,
> "params": {
> "fl": [
> "uniqueId",
> "score",
> "_children_:[subquery]",
> "uniqueId"
> ],
> "origQuery": "false",
> "q.op": "AND",
> "_children_.rows": "3",
> "sort": "score desc,uniqueId desc",
> "rows": "1",
> "q": "{!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})",
> "qf": "parent_field",
> "spellcheck": "false",
> "_children_.q": "{!edismax qf=parentId v=$row.uniqueId}",
> "_children_.fl": [
> "uniqueId",
> "score"
> ],
> "wt": "json",
> "facet": "false"
> }
> },
> "response": {
> "numFound": 5,
> "start": 0,
> "maxScore": 0.5,
> "docs": [
> {
> "uniqueId": "10006197",
> "_children_": [
> {
> "uniqueId": "100061971",
> "score": 0.5
> },
> {
> "uniqueId": "10006197

[jira] [Commented] (LUCENE-8215) Fix several fragile exception handling places in o.a.l.index

2018-03-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404923#comment-16404923
 ] 

Michael McCandless commented on LUCENE-8215:


+1, thanks [~simonw]!

>  Fix several fragile exception handling places in o.a.l.index
> -
>
> Key: LUCENE-8215
> URL: https://issues.apache.org/jira/browse/LUCENE-8215
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8215.patch
>
>
> Several places in the index package don't handle exceptions well or ignore 
> them. This change adds some utility methods and cuts over to 
> try-with-resources blocks to simplify exception handling.
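
For readers unfamiliar with the idiom, a minimal sketch of the try-with-resources pattern the issue refers to (illustrative only, not the actual LUCENE-8215 change; class and method names are made up):
{code:java}
// Hedged illustration of try-with-resources: the resource is closed on every
// exit path, and if both the body and close() throw, the close() failure is
// attached as a suppressed exception instead of hiding the original one.
import java.io.Closeable;
import java.io.IOException;

public class TryWithResourcesSketch {

  static final class TrackedResource implements Closeable {
    @Override
    public void close() throws IOException {
      System.out.println("closed");
    }
  }

  static void doWork(boolean fail) throws IOException {
    try (TrackedResource r = new TrackedResource()) { // closed even if the body throws
      if (fail) {
        throw new IOException("failure inside the block");
      }
      System.out.println("work done");
    }
  }

  public static void main(String[] args) throws IOException {
    doWork(false);
    try {
      doWork(true);
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage());
      for (Throwable s : e.getSuppressed()) {
        System.out.println("suppressed: " + s);
      }
    }
  }
}
{code}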



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404918#comment-16404918
 ] 

ASF subversion and git services commented on LUCENE-8155:
-

Commit 0977743aeb0b366b376505352b9be73fd998cba5 in lucene-solr's branch 
refs/heads/branch_7_3 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0977743 ]

LUCENE-8155: Fix Solr example with Java 9 (was a problem when reverting an old 
commit)


> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404917#comment-16404917
 ] 

ASF subversion and git services commented on LUCENE-8155:
-

Commit e6a15db81aad5118fac184a359ce2987e1d175e3 in lucene-solr's branch 
refs/heads/branch_7x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e6a15db ]

LUCENE-8155: Fix Solr example with Java 9 (was a problem when reverting an old 
commit)


> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404916#comment-16404916
 ] 

Uwe Schindler commented on LUCENE-8155:
---

Fixed it. Sorry for the inconvenience. I'll push to the other branches shortly.

> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404914#comment-16404914
 ] 

ASF subversion and git services commented on LUCENE-8155:
-

Commit aae07d9572459b4a7142bb614d673783233699b9 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aae07d9 ]

LUCENE-8155: Fix Solr example with Java 9 (was a problem when reverting an old 
commit)


> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-03-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404903#comment-16404903
 ] 

Alan Woodward commented on SOLR-11407:
--

My test failures are due to a recovery timing out, rather than the error 
message in the issue text above. It seems that in testHistory() the 
waitForRecovery() method expects all down cores to be removed from the 
cluster state, but one remains. There were two cores on the down node, and one 
of them is explicitly deleted from the cluster state (possibly by an 
autoscaling trigger?), so I'm guessing the bug is that the other core 
should be removed as well. [~ab], can you comment?

> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 7.3, master (8.0)
>
> Attachments: tests-failures.txt
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404894#comment-16404894
 ] 

Uwe Schindler commented on LUCENE-8155:
---

Oh,
can you fix it or should I do it?

Uwe

> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11670) Implement a periodic house-keeping task

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404885#comment-16404885
 ] 

ASF subversion and git services commented on SOLR-11670:


Commit ed2d3583300263fa6aff4ad41b262bb2c32ae01c in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed2d358 ]

SOLR-11670: Make sure defaults are applied in simulated cluster.


> Implement a periodic house-keeping task
> ---
>
> Key: SOLR-11670
> URL: https://issues.apache.org/jira/browse/SOLR-11670
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11670.patch, SOLR-11670.patch, SOLR-11670.patch
>
>
> Some high-impact cluster changes (such as split shard) leave behind the original 
> data and state, which are no longer actively used. This makes sense for safety 
> reasons and to make it easier to roll back the changes.
> However, this unused data will accumulate over time, especially when actions 
> like split shard are invoked automatically by the autoscaling framework. We 
> need a periodic task that would clean up this kind of data after a certain 
> period.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11670) Implement a periodic house-keeping task

2018-03-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404888#comment-16404888
 ] 

ASF subversion and git services commented on SOLR-11670:


Commit f6319d6d0a80e5f82b26f6b340ad250618f6b565 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f6319d6 ]

SOLR-11670: Make sure defaults are applied in simulated cluster.


> Implement a periodic house-keeping task
> ---
>
> Key: SOLR-11670
> URL: https://issues.apache.org/jira/browse/SOLR-11670
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11670.patch, SOLR-11670.patch, SOLR-11670.patch
>
>
> Some high-impact cluster changes (such as split shard) leave behind the original 
> data and state, which are no longer actively used. This makes sense for safety 
> reasons and to make it easier to roll back the changes.
> However, this unused data will accumulate over time, especially when actions 
> like split shard are invoked automatically by the autoscaling framework. We 
> need a periodic task that would clean up this kind of data after a certain 
> period.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-7.3 - Build # 4 - Still Failing

2018-03-19 Thread Steve Rowe
ant clean server, I think.

I added a comment on https://issues.apache.org/jira/browse/LUCENE-8155

--
Steve
www.lucidworks.com

> On Mar 19, 2018, at 3:07 AM, Mikhail Khludnev  wrote:
> 
> Shouldn't it be   ant clean run-example ?  
> 
> On Mon, Mar 19, 2018 at 4:48 AM, Apache Jenkins Server 
>  wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/4/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 30130 lines...]
> prepare-release-no-sign:
> [mkdir] Created dir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist
>  [copy] Copying 491 files to 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/lucene
>  [copy] Copying 230 files to 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/solr
>[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
>[smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
>[smoker] NOTE: output encoding is UTF-8
>[smoker]
>[smoker] Load release URL 
> "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/"...
>[smoker]
>[smoker] Test Lucene...
>[smoker]   test basics...
>[smoker]   get KEYS
>[smoker] 0.2 MB in 0.01 sec (20.2 MB/sec)
>[smoker]   check changes HTML...
>[smoker]   download lucene-7.3.0-src.tgz...
>[smoker] 31.9 MB in 0.04 sec (875.7 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download lucene-7.3.0.tgz...
>[smoker] 73.4 MB in 0.08 sec (866.7 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download lucene-7.3.0.zip...
>[smoker] 83.9 MB in 0.09 sec (884.8 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   unpack lucene-7.3.0.tgz...
>[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
>[smoker] test demo with 1.8...
>[smoker]   got 6300 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] test demo with 9...
>[smoker]   got 6300 hits for query "lucene"
>[smoker] checkindex with 9...
>[smoker] check Lucene's javadoc JAR
>[smoker]   unpack lucene-7.3.0.zip...
>[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
>[smoker] test demo with 1.8...
>[smoker]   got 6300 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] test demo with 9...
>[smoker]   got 6300 hits for query "lucene"
>[smoker] checkindex with 9...
>[smoker] check Lucene's javadoc JAR
>[smoker]   unpack lucene-7.3.0-src.tgz...
>[smoker] make sure no JARs/WARs in src dist...
>[smoker] run "ant validate"
>[smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
> -Dtests.slow=false'...
>[smoker] test demo with 1.8...
>[smoker]   got 217 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] generate javadocs w/ Java 8...
>[smoker]
>[smoker] Crawl/parse...
>[smoker]
>[smoker] Verify...
>[smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
> -Dtests.slow=false'...
>[smoker] test demo with 9...
>[smoker]   got 217 hits for query "lucene"
>[smoker] checkindex with 9...
>[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
>[smoker] find all past Lucene releases...
>[smoker] run TestBackwardsCompatibility..
>[smoker] success!
>[smoker]
>[smoker] Test Solr...
>[smoker]   test basics...
>[smoker]   get KEYS
>[smoker] 0.2 MB in 0.01 sec (28.9 MB/sec)
>[smoker]   check changes HTML...
>[smoker]   download solr-7.3.0-src.tgz...
>[smoker] 55.4 MB in 0.40 sec (137.2 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download solr-7.3.0.tgz...
>[smoker] 154.6 MB in 1.10 sec (140.7 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download solr-7.3.0.zip...
>[smoker] 155.6 MB in 1.42 sec (109.9 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   unpack solr-7.3.0.tgz...
>[smoker] verify JAR metadata/identity/no javax.* or java.* classes...
>[smoker] unpack lucene-7.3.0.tgz...
>[smoker]   **WARNING**: skipping check of 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
>  it has javax.* classes
>[smoker]   **WARNING**: skipping check of 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
>  it has javax.* classes
>[smoker] copying unpacked distribution for Java 8 ...
>[s

[jira] [Commented] (LUCENE-8155) Add Java 9 support to smoke tester

2018-03-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404872#comment-16404872
 ] 

Steve Rowe commented on LUCENE-8155:


The newly added Java9 test block is improperly calling "ant example", which was 
fixed for Java8 in SOLR-6926 to call "ant server".

{noformat}
  print('test solr example w/ Java 8...')
  java.run_java8('ant clean server', '%s/antexample.log' % unpackPath)
  testSolrExample(unpackPath, java.java8_home, True)

  if java.run_java9:
[...]
print('test solr example w/ Java 9...')
java.run_java9('ant clean example', '%s/antexample.log' % unpackPath)
testSolrExample(unpackPath, java.java9_home, True)
{noformat}

As a result, Jenkins runs are failing like so from 
[https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/4/]:

{noformat}
  [smoker] BUILD FAILED
  [smoker] Target "example" does not exist in the project "solr". 
{noformat}

> Add Java 9 support to smoke tester
> --
>
> Key: LUCENE-8155
> URL: https://issues.apache.org/jira/browse/LUCENE-8155
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-8155.patch
>
>
> After adding MR-JAR support with LUCENE-7966, we should test the release 
> candidates with Java 9. Therefore the already existing code in {{build.xml}} 
> that uses a separate environment variable to pass {{JAVA9_HOME}} should be 
> reenabled. This also requires reconfiguring Jenkins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8346) Upgrade Zookeeper to version 3.5.x

2018-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404867#comment-16404867
 ] 

Jan Høydahl commented on SOLR-8346:
---

There seems to be some movement in the ZooKeeper camp towards a 3.5.4 release, 
although slowly. Do you guys want to wait for a stable release before starting 
the upgrade on the Solr side, or should we cut a feature branch for it right 
now off master, set up some Jenkins jobs on it, and mature the whole thing 
until it is ready?

> Upgrade Zookeeper to version 3.5.x
> --
>
> Key: SOLR-8346
> URL: https://issues.apache.org/jira/browse/SOLR-8346
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security, zookeeper
> Attachments: SOLR_8346.patch
>
>
> Investigate upgrading ZooKeeper to 3.5.x, once released. Primary motivation 
> for this is SSL support. Currently a 3.5.3-beta is released (2017-04-17).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #333: SOLR-12064: NullPointerException in JSON face...

2018-03-19 Thread mrkarthik
Github user mrkarthik closed the pull request at:

https://github.com/apache/lucene-solr/pull/333


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #333: SOLR-12064: NullPointerException in JSON facet

2018-03-19 Thread mrkarthik
Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/333
  
Commit 68d8eb45046e01b511b45efbdc72323670956fbd


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #287: SOLR-11331: Ability to Debug Solr With Eclips...

2018-03-19 Thread mrkarthik
Github user mrkarthik closed the pull request at:

https://github.com/apache/lucene-solr/pull/287


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #287: SOLR-11331: Ability to Debug Solr With Eclipse IDE

2018-03-19 Thread mrkarthik
Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/287
  
Code committed by @uschindler. Commit 
d2152da59482b6cc338c60888b72e8f2996cb86a


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4722) Highlighter which generates a list of query term position(s) for each item in a list of documents, or returns null if highlighting is disabled.

2018-03-19 Thread Bram Vereertbrugghen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404826#comment-16404826
 ] 

Bram Vereertbrugghen edited comment on SOLR-4722 at 3/19/18 1:55 PM:
-

As a response to [~TamerBoz]:

What I did was the following:
 1. Place the solr-positionshighlighter.jar into /opt/solr/server/solr/lib, 
where Solr can find it. (Note: I'm running Solr in a Docker container, so our 
paths will differ.)

Then in mycore/conf/solrconfig.xml:

2. Create the searchComponent using the above snippets and name it 
'highlighter' (I didn't want to deal with duplicate naming or overwriting 
issues, so I played it safe).
{code:java}
 
    
 {code}
3. Either create a new requestHandler for testing, or append the component to an 
existing one. This gives you a new endpoint where the highlighting is done by 
the new highlighter:
{code:java}
  
    
  true
  json
  true
    
    
  highlighter
    
  
{code}
4. Make the following call (change the URL so it points to your core and Solr 
instance):
{noformat}
http://localhost:8002/solr/mycore/test-point?hl.fl=tm_field_test&hl=on&indent=on&q=tm_field_test:test&wt=json{noformat}
This will output a highlighting array containing a number; that number is the 
term position of the matched word in the document.
NOTE: I found very strange results when using this .jar on documents where the 
values were arrays. If this is your case, be prepared to rewrite some stuff (or 
find the correct interpretation of the results).
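
For completeness, a hedged SolrJ equivalent of the call in step 4 (the core name "mycore" and the handler "/test-point" are taken from the example URL above and will differ in your setup):
{code:java}
// Hedged SolrJ sketch of the HTTP call in step 4; adjust host, core, and
// handler name to match your own installation.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PositionsHighlightCall {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8002/solr/mycore").build()) {
      SolrQuery q = new SolrQuery("tm_field_test:test");
      q.setRequestHandler("/test-point"); // the handler added in step 3
      q.set("hl", "on");
      q.set("hl.fl", "tm_field_test");
      QueryResponse rsp = client.query(q);
      // This highlighter returns term positions rather than text snippets,
      // so read the raw "highlighting" section of the response.
      System.out.println(rsp.getResponse().get("highlighting"));
    }
  }
}
{code}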

 

 


was (Author: darm):
As a response to [~TamerBoz]:

What I did was the following:
1. place the solr-positionshighlighter.jar into /opt/solr/server/solr/lib, 
where Solr could find it. (Note: I'm running Solr in a docker container, so our 
paths will be different).
2. Create the searchComponent using the above snippets and name it 
'highlighter' (so I could be able to outrule duplicate naming issues).
{code:java}
 
    
 {code}
3. Create a new requestHandler explicitly using this way of highlighting.
{code:java}
  
    
  true
  json
  true
    
    
  highlighter
    
  
{code}
4. Make the following call:
{noformat}
http://localhost:8002/solr/mycore/test-point?hl.fl=tm_field_test&hl=on&indent=on&q=tm_field_test:test&wt=json{noformat}
This makes the call

 

 

> Highlighter which generates a list of query term position(s) for each item in 
> a list of documents, or returns null if highlighting is disabled.
> ---
>
> Key: SOLR-4722
> URL: https://issues.apache.org/jira/browse/SOLR-4722
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Affects Versions: 4.3, 6.0
>Reporter: Tricia Jenkins
>Priority: Minor
> Attachments: PositionsSolrHighlighter.java, SOLR-4722.patch, 
> SOLR-4722.patch, solr-positionshighlighter.jar
>
>
> As an alternative to returning snippets, this highlighter provides the (term) 
> position for query matches.  One usecase for this is to reconcile the term 
> position from the Solr index with 'word' coordinates provided by an OCR 
> process.  In this way we are able to 'highlight' an image, like a page from a 
> book or an article from a newspaper, in the locations that match the user's 
> query.
> This is based on the FastVectorHighlighter and requires that termVectors, 
> termOffsets and termPositions be stored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


