[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816822#comment-16816822 ]

ASF subversion and git services commented on SOLR-13285:

Commit 08cc899096e4587bb574c4c96f416f2b8f2f2eb7 in lucene-solr's branch refs/heads/branch_7_7 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=08cc899 ]

SOLR-13285: Updates with enum fields and javabin cause ClassCastException

> ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during
> replication
> -
>
> Key: SOLR-13285
> URL: https://issues.apache.org/jira/browse/SOLR-13285
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: replication (java), SolrCloud, SolrJ
> Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0
> Environment: centos 7
> solrcloud 7.7.1, 8.1.0
> Reporter: Karl Stoney
> Assignee: Noble Paul
> Priority: Major
> Labels: newbie, replication
> Attachments: SOLR-13285.patch, SOLR-13285.patch
>
> Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing
> the following errors in the SolrCloud elected master for a given collection
> when updates are written. This was after a full reindex of data (fresh
> build).
> {code:java} > request: > http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2 > Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence > cannot be cast to java.lang.String > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} > Following this through to the replica, you'll see: > {code:java} > 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - > null:java.lang.ClassCastException: > org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to > java.lang.String > at > org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191) > at
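The replica-side trace above ends in `JavaBinCodec.readEnumFieldValue`, which cast the decoded value straight to `String`; once javabin started handing back `ByteArrayUtf8CharSequence` wrappers, that cast throws. A minimal sketch of the failure mode and the defensive pattern, using `StringBuilder` as a stand-in for the wrapper class (illustrative only, not Solr's actual decoder code):

```java
public class EnumFieldCastDemo {
    // Stand-in for a javabin read that may now return a CharSequence
    // wrapper rather than a String (StringBuilder plays the role of
    // ByteArrayUtf8CharSequence here; hypothetical, for illustration).
    static Object readVal() {
        return new StringBuilder("SEVERITY_HIGH");
    }

    public static void main(String[] args) {
        Object val = readVal();

        // Pre-fix pattern: a blind cast, which fails as soon as the
        // decoded value is no longer a java.lang.String.
        try {
            String s = (String) val;
            System.out.println("cast succeeded: " + s);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the replica log");
        }

        // Defensive pattern: accept any CharSequence and normalize it.
        if (val instanceof CharSequence) {
            String s = ((CharSequence) val).toString();
            System.out.println("normalized: " + s); // prints "normalized: SEVERITY_HIGH"
        }
    }
}
```

The same normalization idea (convert to `String` before use, rather than cast) is what the linked commit applies to the enum-field path.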
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23909 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23909/ Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([F0FA73114326B368:1DE3CBD39740E909]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding(TestLatLonShapeEncoding.java:475) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([F0FA73114326B368:1DE3CBD39740E909]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at
[jira] [Commented] (SOLR-13331) Atomic Update Multivalue remove does not work
[ https://issues.apache.org/jira/browse/SOLR-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816802#comment-16816802 ]

ASF subversion and git services commented on SOLR-13331:

Commit cfddbe1126523f84b9d7f7f98c55af9e8fd0f405 in lucene-solr's branch refs/heads/branch_7_7 from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cfddbe1 ]

SOLR-13331: Fix AtomicUpdate 'remove' ops in SolrJ

The recent change introducing ByteArrayUtf8CharSequence altered the NamedLists produced by atomic-update requests so that they include instances of this class for requests coming in as javabin. This is a problem for 'remove' atomic-updates, which need to be able to compare these ByteArrayUtf8CharSequence instances with existing field values represented as Strings. equals() would always return false, and 'remove' operations would have no effect. This commit converts items as necessary to allow atomic-update operations to work as expected.

> Atomic Update Multivalue remove does not work
> -
>
> Key: SOLR-13331
> URL: https://issues.apache.org/jira/browse/SOLR-13331
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: UpdateRequestProcessors
> Affects Versions: 7.7, 7.7.1, 8.0
> Environment: Standalone Solr Server
> Reporter: Thomas Wöckinger
> Assignee: Jason Gerlowski
> Priority: Critical
> Labels: patch
> Fix For: 8.1, master (9.0)
> Attachments: Fix-SOLR13331-Add-toNativeType-implementations.patch, SOLR-13331.patch
>
> When using JavaBinCodec, the values of collections are of type
> ByteArrayUtf8CharSequence, while existing field values are Strings, so the
> remove operation does not have any effect. The relevant code is located in
> class AtomicUpdateDocumentMerger, method doRemove.
> The method parameter fieldVal contains the collection values of type
> ByteArrayUtf8CharSequence; the variable original contains the collection of
> Strings.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
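The failure mode the commit message describes is easy to reproduce outside Solr: `String.equals` returns false for any argument that is not itself a `String`, so a `CharSequence` wrapper decoded from javabin can never match a stored `String` value, and equality-based removal silently does nothing. A minimal sketch, using `StringBuilder` as a stand-in for `ByteArrayUtf8CharSequence` (assumed stand-in; this is not Solr's actual merge code):

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveOpDemo {
    public static void main(String[] args) {
        // Existing field values, as stored in the index: plain Strings.
        List<Object> original = new ArrayList<>(List.of("tagA", "tagB", "tagC"));

        // Value arriving in a 'remove' atomic update decoded from javabin:
        // a CharSequence wrapper, not a String (StringBuilder stands in for
        // ByteArrayUtf8CharSequence here).
        CharSequence toRemove = new StringBuilder("tagB");

        // Broken behavior: List.remove(Object) relies on equals(), and a
        // StringBuilder is never equals() to a String, so nothing is removed.
        original.remove(toRemove);
        System.out.println(original); // still [tagA, tagB, tagC]

        // The SOLR-13331 fix, in spirit: normalize to String before comparing.
        original.remove(toRemove.toString());
        System.out.println(original); // [tagA, tagC]
    }
}
```

The `toNativeType` patch attached to the issue follows the same idea: convert incoming values to the field's native type before any equality comparison.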
[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 390 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/390/ Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.security.JWTAuthPluginIntegrationTest.testMetrics Error Message: Server returned HTTP response code: 401 for URL: http://127.0.0.1:40217/solr/jwtColl/query?q=*:* Stack Trace: java.io.IOException: Server returned HTTP response code: 401 for URL: http://127.0.0.1:40217/solr/jwtColl/query?q=*:* at __randomizedtesting.SeedInfo.seed([C4FE9E0AD16BE80F:3AEC4CA8CD619FBC]:0) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913) at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509) at java.base/java.net.URLConnection.getContent(URLConnection.java:749) at org.apache.solr.security.JWTAuthPluginIntegrationTest.get(JWTAuthPluginIntegrationTest.java:207) at org.apache.solr.security.JWTAuthPluginIntegrationTest.testMetrics(JWTAuthPluginIntegrationTest.java:149) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 58 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/58/ 2 tests failed. FAILED: org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Failed while waiting for active collection Timeout waiting to see state for collection=awhollynewcollection_0 :DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":"8000-b332", "state":"active", "replicas":{ "core_node5":{ "core":"awhollynewcollection_0_shard1_replica_n1", "base_url":"https://127.0.0.1:41595/solr", "node_name":"127.0.0.1:41595_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node7":{ "core":"awhollynewcollection_0_shard1_replica_n2", "base_url":"https://127.0.0.1:44582/solr", "node_name":"127.0.0.1:44582_solr", "state":"down", "type":"NRT", "force_set_state":"false"}}}, "shard2":{ "range":"b333-e665", "state":"active", "replicas":{ "core_node9":{ "dataDir":"hdfs://lucene2-us-west.apache.org:35333/solr_hdfs_home/awhollynewcollection_0/core_node9/data/", "base_url":"https://127.0.0.1:36022/solr", "node_name":"127.0.0.1:36022_solr", "type":"NRT", "force_set_state":"false", "ulogDir":"hdfs://lucene2-us-west.apache.org:35333/solr_hdfs_home/awhollynewcollection_0/core_node9/data/tlog", "core":"awhollynewcollection_0_shard2_replica_n3", "shared_storage":"true", "state":"active"}, "core_node11":{ "dataDir":"hdfs://lucene2-us-west.apache.org:35333/solr_hdfs_home/awhollynewcollection_0/core_node11/data/", "base_url":"https://127.0.0.1:43690/solr", "node_name":"127.0.0.1:43690_solr", "type":"NRT", "force_set_state":"false", "ulogDir":"hdfs://lucene2-us-west.apache.org:35333/solr_hdfs_home/awhollynewcollection_0/core_node11/data/tlog", "core":"awhollynewcollection_0_shard2_replica_n4", "shared_storage":"true", "state":"active", "leader":"true"}}}, "shard3":{ "range":"e666-1998", "state":"active", "replicas":{ 
"core_node12":{ "core":"awhollynewcollection_0_shard3_replica_n6", "base_url":"https://127.0.0.1:41595/solr", "node_name":"127.0.0.1:41595_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node14":{ "core":"awhollynewcollection_0_shard3_replica_n8", "base_url":"https://127.0.0.1:44582/solr", "node_name":"127.0.0.1:44582_solr", "state":"down", "type":"NRT", "force_set_state":"false"}}}, "shard4":{ "range":"1999-4ccb", "state":"active", "replicas":{ "core_node16":{ "core":"awhollynewcollection_0_shard4_replica_n10", "base_url":"https://127.0.0.1:36022/solr", "node_name":"127.0.0.1:36022_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node17":{ "core":"awhollynewcollection_0_shard4_replica_n13", "base_url":"https://127.0.0.1:43690/solr", "node_name":"127.0.0.1:43690_solr", "state":"down", "type":"NRT", "force_set_state":"false"}}}, "shard5":{ "range":"4ccc-7fff", "state":"active", "replicas":{ "core_node18":{ "core":"awhollynewcollection_0_shard5_replica_n15", "base_url":"https://127.0.0.1:41595/solr", "node_name":"127.0.0.1:41595_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node20":{ "core":"awhollynewcollection_0_shard5_replica_n19", "base_url":"https://127.0.0.1:44582/solr", "node_name":"127.0.0.1:44582_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"3", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Live Nodes: [127.0.0.1:36022_solr, 127.0.0.1:41595_solr, 127.0.0.1:43690_solr, 127.0.0.1:44582_solr] Last available state: DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":"8000-b332", "state":"active", "replicas":{ "core_node5":{ "core":"awhollynewcollection_0_shard1_replica_n1", "base_url":"https://127.0.0.1:41595/solr",
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1307 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1307/ No tests ran. Build Log: [...truncated 23848 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2523 links (2064 relative) to 3354 anchors in 253 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail:
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23908 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23908/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([E1E19C0941867D43:CF824CB95E02722]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding(TestLatLonShapeEncoding.java:475) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([E1E19C0941867D43:CF824CB95E02722]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at
[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816717#comment-16816717 ] Noble Paul commented on SOLR-13285: --- I'll port it > ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during > replication > - > > Key: SOLR-13285 > URL: https://issues.apache.org/jira/browse/SOLR-13285 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: replication (java), SolrCloud, SolrJ >Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0 > Environment: centos 7 > solrcloud 7.7.1, 8.1.0 >Reporter: Karl Stoney >Assignee: Noble Paul >Priority: Major > Labels: newbie, replication > Attachments: SOLR-13285.patch, SOLR-13285.patch > > > Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing > the following errors in the SolrCloud elected master for a given collection > when updates are written. This was after a full reindex of data (fresh > build). 
> {code:java} > request: > http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2 > Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence > cannot be cast to java.lang.String > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} > Following this through to the replica, you'll see: > {code:java} > 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - > null:java.lang.ClassCastException: > org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to > java.lang.String > at > org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126) > at > org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123) > at >
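The root cause visible in the trace above is a hard (String) cast in JavaBinCodec.readEnumFieldValue applied to a value the codec now delivers as a ByteArrayUtf8CharSequence. The following is a minimal, self-contained sketch of the failure pattern and the defensive fix; it uses stand-in JDK types, not Solr's own classes:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class Utf8CastSketch {
    // Stand-in for ByteArrayUtf8CharSequence: any CharSequence that is not a String.
    static CharSequence decodeUtf8(byte[] bytes) {
        return StandardCharsets.UTF_8.decode(ByteBuffer.wrap(bytes));
    }

    // Defensive pattern: accept CharSequence and convert explicitly instead of casting.
    static String asString(CharSequence cs) {
        return cs == null ? null : cs.toString();
    }

    public static void main(String[] args) {
        CharSequence value = decodeUtf8("HDD".getBytes(StandardCharsets.UTF_8));
        boolean castFailed = false;
        try {
            String s = (String) value; // fails at runtime: this CharSequence is not a String
        } catch (ClassCastException e) {
            castFailed = true;
        }
        System.out.println(castFailed + " " + asString(value)); // prints "true HDD"
    }
}
```

The patched code path converts enum field values via toString() (or accepts CharSequence) rather than casting, which is why the fix is backward-compatible with senders that still serialize plain Strings.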
[JENKINS] Lucene-Solr-Tests-8.x - Build # 122 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/122/ 1 tests failed. FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([550FCB9348C3DEB3:B81673519CA584D2]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding(TestLatLonShapeEncoding.java:475) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 10208 lines...] [junit4] Suite: org.apache.lucene.document.TestLatLonShapeEncoding [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLatLonShapeEncoding -Dtests.method=testRandomLineEncoding -Dtests.seed=550FCB9348C3DEB3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-TN -Dtests.timezone=America/Ojinaga -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 0.10s J0 | TestLatLonShapeEncoding.testRandomLineEncoding <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at
[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 389 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/389/ Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink Error Message: Error from server at http://127.0.0.1:44057: Could not find collection : shardSplitWithRule_link Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:44057: Could not find collection : shardSplitWithRule_link at __randomizedtesting.SeedInfo.seed([93BE32B9B6EB6BFA:99A287223C770D5F]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224) at org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitShardWithRule(ShardSplitTest.java:661) at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink(ShardSplitTest.java:633) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at
[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11
[ https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816623#comment-16816623 ] Erick Erickson commented on LUCENE-8738: Been a busy couple of days, sorry I didn't get back to this sooner. This change looks good. As far as simplifying... well, I have to hop in the "way back" machine so it's fuzzy. I'll try to loop in someone who worked on a custom impl. The critical bit is that over in SolrCores, there has to be synchronization around updating "pendingCloses", and those sync methods shouldn't be exposed to the impl. I don't think there's any good way for the transient impl to _get_ the solrCores object from CoreContainer (which it does have), and I don't think exposing solrCores for just this is a good idea. So I hit on the Observer pattern as an "end around" that problem, and once it worked I never went back and thought about it again. Looking at it with fresh eyes, though, it seems like adding the queueCoreClose to coreContainer, which _does_ have access to the solrCores object and can do anything it wants with it in a controlled manner, would solve that problem much more simply. And the transient impl already has access to coreContainer. So AFAIC, let's rip the complexity out and replace it with a call. I agree that doing that in a separate Jira is a good thing. But let's wait a bit to see if I can loop in people who are living with the current setup. Thanks for your work here! Erick > Bump minimum Java version requirement to 11 > --- > > Key: LUCENE-8738 > URL: https://issues.apache.org/jira/browse/LUCENE-8738 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Adrien Grand >Priority: Minor > Labels: Java11 > Fix For: master (9.0) > > Attachments: LUCENE-8738-solr-CoreCloseListener.patch > > > See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq. 
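An illustrative-only sketch of the simplification proposed above: the names mirror the discussion (queueCoreClose, pendingCloses), but every body here is invented. The point is simply that the synchronized update lives inside the container, so the transient-core implementation never needs the Observer indirection or access to SolrCores' locking:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for CoreContainer; not Solr's actual class.
class CoreContainerSketch {
    // Guarded by this object's monitor; never handed out to callers.
    private final Deque<String> pendingCloses = new ArrayDeque<>();

    // The transient-core impl calls this directly; all synchronization around
    // pendingCloses stays private to the container, as the comment suggests.
    synchronized void queueCoreClose(String coreName) {
        pendingCloses.add(coreName);
    }

    synchronized int pendingCloseCount() {
        return pendingCloses.size();
    }
}
```

Under this arrangement the sync methods are never exposed to the pluggable impl, which was the constraint that originally motivated the Observer "end around".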
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13397) Solr Syncing Script/Function
[ https://issues.apache.org/jira/browse/SOLR-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816607#comment-16816607 ] Jan Høydahl commented on SOLR-13397: Closing. Please ask your questions on the mailing list. Jira is not a support system but a place to report bugs. > Solr Syncing Script/Function > > > Key: SOLR-13397 > URL: https://issues.apache.org/jira/browse/SOLR-13397 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Anuj B >Priority: Major > > A syncing script/function would be a nice addon feature. It should > automatically check the MySql database and index the contents according to > the changes/additions/deletions made in the main MySql database
[jira] [Resolved] (SOLR-13397) Solr Syncing Script/Function
[ https://issues.apache.org/jira/browse/SOLR-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl resolved SOLR-13397. Resolution: Invalid
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch) URL: https://github.com/apache/lucene-solr/pull/300#discussion_r275043339 ## File path: solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java ## @@ -34,17 +41,37 @@ /** * Implementation for transforming {@link SearchGroup} into a {@link NamedList} structure and vice versa. */ -public class SearchGroupsResultTransformer implements ShardResultTransformer, Map> { +public abstract class SearchGroupsResultTransformer implements ShardResultTransformer, Map> { Review comment: > bloomberg#231 sketches the idea, what do you think? A pull request for a pull request, or a diff for a diff, at first glance that spooked me a little :-) And at second glance I skipped 'the middle bit' and looked end-to-end https://github.com/apache/lucene-solr/compare/1071d093360b2c5869a918de743c7089952094f4...5a326a19eb7aa92754d6ccf7b321d3c04d3b9f50 with focus on the `SearchGroupsResultTransformer` class. There seemed to be a relatively noticeable amount of code similarity or duplication between `DefaultSearchResultResultTransformer` and `SkipSecondStepSearchResultResultTransformer`, and so I took a step back and considered what the _difference_ (conceptually) between the two transformers is, with two aha! moments: 1. the `getConvertedSortValues` method used by `SkipSecondStepSearchResultResultTransformer` is pretty similar to the `ShardResultTransformerUtils.marshalSortValue` from https://issues.apache.org/jira/browse/SOLR-9890 in Solr 6.5 2. 
the `SkipSecondStepSearchResultResultTransformer.serializeSearchGroup` method appears to construct exactly what `DefaultSearchResultResultTransformer.serializeSearchGroup` constructs and then it 'wraps' it together with an id and a score in a 'group info' object. The https://github.com/cpoerschke/lucene-solr/commit/10fbfd1dcf16065688c3610b26a55f2aa9c99f8a commit sketches how a `SearchGroupsResultTransformer.serializeOneSearchGroup` method could be factored out. What do you think? (If factoring out such a method makes sense then I could (next week) attempt a similar approach for the `transformToNative` method. And if that works out well we could revisit how the `[Default|SkipSecondStep]SearchResultResultTransformer` class trio is arranged in terms of hierarchy? And if it does not work out well then that would be insightful too in terms of why etc.) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
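The factoring described above — a shared loop in the base class with a single per-group hook — is the classic template-method shape. A toy sketch under invented names (the real transformers work over SearchGroup/NamedList structures, not plain strings, so this only shows the shape of the refactor, not Solr's actual types):

```java
import java.util.ArrayList;
import java.util.List;

abstract class GroupTransformerSketch {
    // Template method: the iteration both concrete transformers previously duplicated.
    final List<String> transform(List<String> groups) {
        List<String> out = new ArrayList<>();
        for (String g : groups) {
            out.add(serializeOneSearchGroup(g));
        }
        return out;
    }

    // The single point of variation between the two serialization paths.
    abstract String serializeOneSearchGroup(String group);
}

class DefaultSketch extends GroupTransformerSketch {
    String serializeOneSearchGroup(String group) {
        return group;
    }
}

class SkipSecondStepSketch extends GroupTransformerSketch {
    // Wraps the default serialization together with extra per-group info,
    // mirroring the "group info" object (id + score) described in the comment.
    String serializeOneSearchGroup(String group) {
        return "{id, score, " + group + "}";
    }
}
```

With the loop hoisted into the base class, only the per-group wrapping differs between the default and skip-second-step paths, which is what makes the duplication visible and removable.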
[JENKINS] Lucene-Solr-repro - Build # 3155 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-repro/3155/ [...truncated 29 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/13/consoleText [repro] Revision: c58787d045d5ab0f463ccd09e76eb8d66e14ee96 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt [repro] Encountered IncompleteRead exception, pausing and then retrying... [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/13/consoleText [repro] Revision: c58787d045d5ab0f463ccd09e76eb8d66e14ee96 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt [repro] Encountered IncompleteRead exception, pausing and then retrying... [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/13/consoleText [repro] Revision: c58787d045d5ab0f463ccd09e76eb8d66e14ee96 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt [repro] Encountered IncompleteRead exception, aborting after too many retries. [...truncated 64 lines...] raise RuntimeError('ERROR: fetching %s : %s' % (url, e)) RuntimeError: ERROR: fetching https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/13/consoleText : IncompleteRead(0 bytes read) Build step 'Execute shell' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[jira] [Reopened] (LUCENE-8736) LatLonShapePolygonQuery returning incorrect WITHIN results with shared boundaries
[ https://issues.apache.org/jira/browse/LUCENE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize reopened LUCENE-8736: Assignee: Nicholas Knize Lucene Fields: New (was: New,Patch Available) [~rcmuir] yeah I was thinking about that while working on these changes. On the one hand explicitly accepting boundary failures (especially for such common cases as shown here and in the test) is annoying (though I get the reasoning). On the other hand, the determinant equality approach could also [suffer from overflow|https://www.cs.cmu.edu/~quake/robust.html]. The problem with the latter is that we don't have a good feel for the rate at which overflow occurs, or the percentage of false positives where points outside the polygon are erroneously determined as collinear by an overflowed determinant. I think that deserves some further exploration. So for now, I can revert the change for points, but I'm curious about your thoughts on exploring the [adaptive orientation|https://www.cs.cmu.edu/afs/cs/project/quake/public/code/predicates.c] (_orient2dadap_) approach to solve the overflow issue. I have a [very rough port|https://gist.github.com/nknize/dd1dc8a1ccaa8900b70c478cce846f29] that I can experiment with for points in a separate issue? > LatLonShapePolygonQuery returning incorrect WITHIN results with shared > boundaries > - > > Key: LUCENE-8736 > URL: https://issues.apache.org/jira/browse/LUCENE-8736 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize >Assignee: Nicholas Knize >Priority: Major > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-8736.patch, LUCENE-8736.patch, > adaptive-decoding.patch > > > Triangles that are {{WITHIN}} a target polygon query that also share a > boundary with the polygon are incorrectly reported as {{CROSSES}} instead of > {{INSIDE}}. 
This leads to incorrect {{WITHIN}} query results as demonstrated > in the following test: > {code:java} > public void testWithinFailure() throws Exception { > Directory dir = newDirectory(); > RandomIndexWriter w = new RandomIndexWriter(random(), dir); > // test polygons: > Polygon indexPoly1 = new Polygon(new double[] {4d, 4d, 3d, 3d, 4d}, new > double[] {3d, 4d, 4d, 3d, 3d}); > Polygon indexPoly2 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new > double[] {6d, 7d, 7d, 6d, 6d}); > Polygon indexPoly3 = new Polygon(new double[] {1d, 1d, 0d, 0d, 1d}, new > double[] {3d, 4d, 4d, 3d, 3d}); > Polygon indexPoly4 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new > double[] {0d, 1d, 1d, 0d, 0d}); > // index polygons: > Document doc; > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly1); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly2); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly3); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly4); > w.addDocument(doc); > / search // > IndexReader reader = w.getReader(); > w.close(); > IndexSearcher searcher = newSearcher(reader); > Polygon[] searchPoly = new Polygon[] {new Polygon(new double[] {4d, 4d, > 0d, 0d, 4d}, new double[] {0d, 7d, 7d, 0d, 0d})}; > Query q = LatLonShape.newPolygonQuery(FIELDNAME, QueryRelation.WITHIN, > searchPoly); > assertEquals(4, searcher.count(q)); > IOUtils.close(w, reader, dir); > } > {code}
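The overflow concern raised above centers on the orient2d predicate: a naive double-precision determinant can report the wrong sign (or a spurious zero) for nearly-collinear points, which is exactly the regime where WITHIN/CROSSES boundary decisions are made. The sketch below shows only the naive fast path for illustration; in Shewchuk's predicates.c, orient2dadapt wraps this with an error-bound check and falls back to exact arithmetic when the result is too close to zero to trust:

```java
public class Orient2dSketch {
    // Sign of the determinant | bx-ax  by-ay |
    //                         | cx-ax  cy-ay |
    // > 0: c lies left of a->b (counter-clockwise); < 0: right (clockwise);
    // == 0: collinear. Near zero the double-precision result is untrustworthy,
    // which is the false-collinearity risk discussed in the comment.
    static double orient2dFast(double ax, double ay,
                               double bx, double by,
                               double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    public static void main(String[] args) {
        System.out.println(orient2dFast(0, 0, 1, 0, 2, 0) == 0); // collinear: true
        System.out.println(orient2dFast(0, 0, 1, 0, 1, 1) > 0);  // CCW: true
    }
}
```

The adaptive approach keeps the fast path's cost for the common case and pays for exact arithmetic only when the computed magnitude falls under the error bound, which is why it is attractive as a fix for the boundary cases here.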
[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816510#comment-16816510 ] Karl Stoney commented on SOLR-13285: Brill, thanks [~gerlowskija]
[jira] [Updated] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karl Stoney updated SOLR-13285: --- Affects Version/s: 7.7.2 > ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during > replication > - > > Key: SOLR-13285 > URL: https://issues.apache.org/jira/browse/SOLR-13285 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: replication (java), SolrCloud, SolrJ >Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0 > Environment: centos 7 > solrcloud 7.7.1, 8.1.0 >Reporter: Karl Stoney >Assignee: Noble Paul >Priority: Major > Labels: newbie, replication > Attachments: SOLR-13285.patch, SOLR-13285.patch > > > Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing > the following errors in the SolrCloud elected master for a given collection > when updates are written. This was after a full reindex of data (fresh > build). > {code:java} > request: > http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2 > Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence > cannot be cast to java.lang.String > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} > Following this through to the replica, you'll see: > {code:java} > 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - > null:java.lang.ClassCastException: > org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to > java.lang.String > at > org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126) > at > org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123) > at > org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:70) > at >
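The ClassCastException above originates in JavaBinCodec.readEnumFieldValue, which casts the decoded value straight to String even though javabin may hand back a ByteArrayUtf8CharSequence. The sketch below is purely illustrative (not Solr's actual code — the real fix is in the attached patches): the class and method names here are invented to show the failure mode and the defensive conversion.

```java
// Minimal sketch (not Solr's code) of the SOLR-13285 failure mode: the decoded
// value is a CharSequence implementation, not a String, so a blind cast throws
// ClassCastException; converting via toString() accepts either representation.
public class EnumFieldCastSketch {
    // Stand-in for org.apache.solr.common.util.ByteArrayUtf8CharSequence:
    // a CharSequence backed by UTF-8 bytes rather than a String.
    static final class Utf8CharSequence implements CharSequence {
        private final byte[] buf;
        Utf8CharSequence(String s) { this.buf = s.getBytes(java.nio.charset.StandardCharsets.UTF_8); }
        public int length() { return toString().length(); }
        public char charAt(int i) { return toString().charAt(i); }
        public CharSequence subSequence(int a, int b) { return toString().subSequence(a, b); }
        @Override public String toString() { return new String(buf, java.nio.charset.StandardCharsets.UTF_8); }
    }

    static String unsafeRead(Object decoded) {
        // Throws ClassCastException when decoded is a Utf8CharSequence.
        return (String) decoded;
    }

    static String safeRead(Object decoded) {
        if (decoded instanceof CharSequence) {  // accepts String and any other CharSequence
            return decoded.toString();
        }
        throw new IllegalArgumentException("expected a CharSequence, got " + decoded);
    }

    public static void main(String[] args) {
        Object decoded = new Utf8CharSequence("SEVERE");
        try {
            unsafeRead(decoded);
            System.out.println("unexpected: cast succeeded");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in SOLR-13285");
        }
        System.out.println(safeRead(decoded));
    }
}
```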
[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816508#comment-16816508 ] Jason Gerlowski commented on SOLR-13285: Yeah, I'll try to take a look at it this weekend. I need to backport SOLR-13331 anyways.
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816499#comment-16816499 ] Kevin Risden commented on SOLR-13293: - Ah sorry I missed the "ConcurrentUpdate" part. I saw "metrics-core" and thought eh maybe related to metrics. Sorry don't have any other ideas right now. > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. > - > > Key: SOLR-13293 > URL: https://issues.apache.org/jira/browse/SOLR-13293 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 8.0 >Reporter: Karl Stoney >Priority: Minor > > Hi, > Testing out branch_8x, we're randomly seeing the following errors on a simple > 3 node cluster. It doesn't appear to affect replication (the cluster remains > green). > They come in bulk (literally 1000s at a time). > There were no network issues at the time. > {code:java} > 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 > r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk > s:shard1] ERROR > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. 
> java.nio.channels.AsynchronousCloseException: null > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191] > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root > - 2019-03-04 16:30:04] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 > 16:30:04] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
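The AsynchronousCloseException in the trace above is the standard NIO signal that one thread was blocked reading a channel while another thread closed it. The demo below is unrelated to Solr or Jetty internals — it is a minimal, self-contained reproduction of that close-while-reading pattern using a plain `java.nio.channels.Pipe`; all names here are invented for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

// Illustrative sketch: a reader blocked on an interruptible channel receives
// AsynchronousCloseException when another thread closes the channel under it,
// which is the pattern behind "error consuming and closing http response stream".
public class AsyncCloseSketch {
    public static Class<?> demo() throws Exception {
        Pipe pipe = Pipe.open();
        final Class<?>[] seen = new Class<?>[1];
        Thread reader = new Thread(() -> {
            try {
                pipe.source().read(ByteBuffer.allocate(16)); // blocks: pipe is empty
            } catch (Exception e) {
                seen[0] = e.getClass();                      // records the exception class
            }
        });
        reader.start();
        Thread.sleep(200);        // give the reader time to block in read()
        pipe.source().close();    // close from another thread while read() is blocked
        reader.join();
        pipe.sink().close();
        return seen[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

As the comments in this thread note, when the close is expected (e.g. a response stream torn down after the update completes), the exception is noise rather than data loss — the cluster stays green.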
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816494#comment-16816494 ] Karl Stoney commented on SOLR-13293: I don't see how they're related, I mean this error is in ConcurrentUpdateHttp2SolrClient, which is what's being used for solrcloud replication right? Not sure how metrics queries from prometheus exporter would cause this?
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816490#comment-16816490 ] Kevin Risden commented on SOLR-13293: - No I understand the root cause is different - I meant more are these bulk HTTP requests from metrics somehow?
[jira] [Comment Edited] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816490#comment-16816490 ] Kevin Risden edited comment on SOLR-13293 at 4/12/19 5:13 PM: -- No I understand the root cause is different - I meant more are these bulk HTTP requests from metrics somehow? Like if metrics are disabled do these errors go away. was (Author: risdenk): No I understand the root cause is different - I meant more are these bulk HTTP requests from metrics somehow?
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816487#comment-16816487 ] Karl Stoney commented on SOLR-13293: [~krisden] that issue is totally unrelated, it's a classpath compilation error on the prometheus exporter contrib - it doesn't even start - and I've narrowed it to a particular commit. This issue came before the prometheus one!
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816450#comment-16816450 ] Uwe Schindler commented on LUCENE-2562: --- [~Tomoko Uchida]: Thanks for fixing this! Don't be afraid that your first commit caused such issues! That's just normal. Whenever somebody added a new module in the last few years this always caused many followup commits to fix the distribution like maven & co. This is so complex so you can not easily do this right - not even the long-term committers. There are too many release-specific things that need to be taken care, but Jenkins helps us to find the issues (especially as I personally dont want to block my whole system for several hours running Solr test cases and smoke testers). > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. 
> While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past week or two. There is still a *lot* to do.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816438#comment-16816438 ] Tomoko Uchida commented on LUCENE-2562: --- {{ant nightly-smoke}} also succeeded for me: bq. [smoker] SUCCESS! [1:14:24.029762] I cherry picked the changes to master & branch_8x (as the ASF bot say.)
[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate
[ https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816433#comment-16816433 ] Shawn Heisey commented on SOLR-13396: - If I'm not mistaken, I think that delete operations happen through the overseer. I'm guessing that we don't want operations that couldn't be handled to stick around in the overseer queue ... but maybe we could create a secondary queue for things (like deletes) that were never acknowledged, and the overseer can occasionally revisit those items to see if it's possible to complete them. > SolrCloud will delete the core data for any core that is not referenced in > the clusterstate > --- > > Key: SOLR-13396 > URL: https://issues.apache.org/jira/browse/SOLR-13396 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.3.1, 8.0 >Reporter: Shawn Heisey >Priority: Major > > SOLR-12066 is an improvement designed to delete core data for replicas that > were deleted while the node was down -- better cleanup. > In practice, that change causes SolrCloud to delete all core data for cores > that are not referenced in the ZK clusterstate. If all the ZK data gets > deleted or the Solr instance is pointed at a ZK ensemble with no data, it > will proceed to delete all of the cores in the solr home, with no possibility > of recovery. > I do not think that Solr should ever delete core data unless an explicit > DELETE action has been made and the node is operational at the time of the > request. If a core exists during startup that cannot be found in the ZK > clusterstate, it should be ignored (not started) and a helpful message should > be logged. I think that message should probably be at WARN so that it shows > up in the admin UI logging tab with default settings. 
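Shawn Heisey's suggestion above — park unacknowledged delete operations in a secondary queue that the overseer revisits periodically instead of discarding them — can be sketched roughly as below. This is purely illustrative, not Solr's overseer code; the class and interface names (`RetryQueueSketch`, `Op`) are invented here.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a secondary queue for operations (e.g. deletes) that
// could not be acknowledged: instead of dropping them, keep them and re-attempt
// on a schedule until they complete.
public class RetryQueueSketch {
    public interface Op { boolean tryApply(); }  // returns true once acknowledged

    private final ConcurrentLinkedQueue<Op> unacknowledged = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void submit(Op op) {
        if (!op.tryApply()) {
            unacknowledged.add(op);              // park instead of discarding
        }
    }

    /** Periodically re-attempt everything still parked. */
    public void startRevisiting(long periodMillis) {
        scheduler.scheduleAtFixedRate(this::revisit, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    /** One revisit pass: each parked op gets one retry; failures are re-parked. */
    public void revisit() {
        int n = unacknowledged.size();
        for (int i = 0; i < n; i++) {
            Op op = unacknowledged.poll();
            if (op != null && !op.tryApply()) {
                unacknowledged.add(op);          // still failing: keep for next pass
            }
        }
    }

    public int pending() { return unacknowledged.size(); }

    public void shutdown() { scheduler.shutdownNow(); }
}
```

A real implementation would of course need durability (the parked queue would live in ZooKeeper, not in memory) and a cap or expiry so the secondary queue cannot grow without bound.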
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816431#comment-16816431 ] ASF subversion and git services commented on LUCENE-2562: - Commit 811aae60cd79b9e5e931516219de7e7f10363bed in lucene-solr's branch refs/heads/branch_8x from Tomoko Uchida [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=811aae6 ] LUCENE-2562: Fix smoker for 'luke' module.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816430#comment-16816430 ] ASF subversion and git services commented on LUCENE-2562: - Commit 06f7aff8b1a076602d9de26d5ccf6a8128c388fd in lucene-solr's branch refs/heads/branch_8x from Tomoko Uchida [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=06f7aff ] LUCENE-2562: Luke has no Maven artifacts
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816420#comment-16816420 ] ASF subversion and git services commented on LUCENE-2562: - Commit 6e28cd60a8247ad1339bea2ae9dfbb912507594b in lucene-solr's branch refs/heads/master from Tomoko Uchida [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6e28cd6 ] LUCENE-2562: Fix smoker for 'luke' module.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816419#comment-16816419 ] ASF subversion and git services commented on LUCENE-2562: - Commit f85c08224b47e10f3482f27f3811b44dcae3be59 in lucene-solr's branch refs/heads/master from Tomoko Uchida [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f85c082 ] LUCENE-2562: Luke has no Maven artifacts
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816413#comment-16816413 ] Kevin Risden commented on SOLR-13293: - [~kstoney] - I saw you posted about prometheus as well. Is it possible these are metrics related? > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. > - > > Key: SOLR-13293 > URL: https://issues.apache.org/jira/browse/SOLR-13293 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 8.0 >Reporter: Karl Stoney >Priority: Minor > > Hi, > Testing out branch_8x, we're randomly seeing the following errors on a simple > 3 node cluster. It doesn't appear to affect replication (the cluster remains > green). > They come in bulk (literally 1000s at a time). > There were no network issues at the time. > {code:java} > 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 > r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk > s:shard1] ERROR > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. 
> java.nio.channels.AsynchronousCloseException: null > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191] > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root > - 2019-03-04 16:30:04] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 > 16:30:04] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816398#comment-16816398 ] Tomoko Uchida commented on LUCENE-2562: --- I'm now running {{ant nightly-smoke}} on my box. I will cherry pick the two changes (in luke/build.xml and smoketestRelease.py) as soon as the smoker finishes. (Seems it takes a long time on my old PC...)
[GitHub] [lucene-solr] mikemccand commented on a change in pull request #579: LUCENE-8681: prorated early termination
mikemccand commented on a change in pull request #579: LUCENE-8681: prorated early termination URL: https://github.com/apache/lucene-solr/pull/579#discussion_r274964396

## File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java
##
@@ -165,11 +169,35 @@ public void collect(int doc) throws IOException {
           updateMinCompetitiveScore(scorer);
         }
       }
+      if (canEarlyTerminate) {
+        // When early terminating, stop collecting hits from this leaf once we have its prorated hits.
+        if (leafHits > leafHitsThreshold) {
+          totalHitsRelation = Relation.GREATER_THAN_OR_EQUAL_TO;
+          throw new CollectionTerminatedException();
+        }
+      }
     }
   };
 }

+/** The total number of documents that matched this query; may be a lower bound in case of early termination. */
+@Override
+public int getTotalHits() {
+  return totalHits;
+}
+
+private int prorateForSegment(int topK, LeafReaderContext leafCtx) {
+  // prorate number of hits to collect based on proportion of documents in this leaf (segment).
+  // p := probability of a top-k document (or any document) being in this segment
+  double p = (double) leafCtx.reader().numDocs() / leafCtx.parent.reader().numDocs();

Review comment: Ahh yes we should use `numDocs` -- you can get this from `leafCtx.reader().numDocs()`.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
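The prorating idea under review can be sketched in isolation. Below is a hedged, self-contained illustration, not the actual TopFieldCollector patch: `leafDocs` and `totalDocs` stand in for `leafCtx.reader().numDocs()` and `leafCtx.parent.reader().numDocs()`, and the ceiling-based rounding is an assumption (the real patch may round differently).

```java
// Hedged sketch of prorated early termination (illustrative names only).
public class ProratedTermination {

    // p := probability of a top-k document (or any document) being in this
    // segment; collect at most ceil(p * topK) hits from it before throwing
    // CollectionTerminatedException for that leaf.
    static int prorateForSegment(int topK, int leafDocs, int totalDocs) {
        double p = (double) leafDocs / totalDocs;
        return (int) Math.ceil(p * topK);
    }

    public static void main(String[] args) {
        // A 1M-doc index split 50%/25%/25% across three segments, topK = 100:
        System.out.println(prorateForSegment(100, 500_000, 1_000_000)); // 50
        System.out.println(prorateForSegment(100, 250_000, 1_000_000)); // 25
        // Even a tiny segment still contributes at least one hit:
        System.out.println(prorateForSegment(10, 1, 1_000_000)); // 1
    }
}
```

The resulting `totalHits` is then only a lower bound, which is why the diff sets `totalHitsRelation` to `GREATER_THAN_OR_EQUAL_TO` before terminating.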
[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate
[ https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816372#comment-16816372 ] Erick Erickson commented on SOLR-13396: --- Hmmm, actually this seems like it would be an overseer task: look at the queue and delete what's reasonable, which is really sending a core admin request to each node. I don't think in the normal state there's really any work here. In the usual case, Solr starts up and each core is found to be part of a collection and no znode is written. Likewise if the list is empty there's nothing to do as far as the overseer is concerned and nothing to report as potential problems. Admittedly in my above scenario there'd be a zillion znodes written but who cares in that case? ;) > SolrCloud will delete the core data for any core that is not referenced in > the clusterstate > --- > > Key: SOLR-13396 > URL: https://issues.apache.org/jira/browse/SOLR-13396 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.3.1, 8.0 >Reporter: Shawn Heisey >Priority: Major > > SOLR-12066 is an improvement designed to delete core data for replicas that > were deleted while the node was down -- better cleanup. > In practice, that change causes SolrCloud to delete all core data for cores > that are not referenced in the ZK clusterstate. If all the ZK data gets > deleted or the Solr instance is pointed at a ZK ensemble with no data, it > will proceed to delete all of the cores in the solr home, with no possibility > of recovery. > I do not think that Solr should ever delete core data unless an explicit > DELETE action has been made and the node is operational at the time of the > request. If a core exists during startup that cannot be found in the ZK > clusterstate, it should be ignored (not started) and a helpful message should > be logged. 
I think that message should probably be at WARN so that it shows > up in the admin UI logging tab with default settings. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13182) NullPointerException due to an invariant violation in org/apache/lucene/search/BooleanClause.java[60]
[ https://issues.apache.org/jira/browse/SOLR-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816368#comment-16816368 ] Charles Sanders commented on SOLR-13182: Added a patch. Not sure the desired action when the NPE is raised. The patch is issuing a SyntaxError which returns a 400 response. > NullPointerException due to an invariant violation in > org/apache/lucene/search/BooleanClause.java[60] > - > > Key: SOLR-13182 > URL: https://issues.apache.org/jira/browse/SOLR-13182 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Marek >Priority: Minor > Labels: diffblue, newdev > Attachments: SOLR-13182.patch, home.zip > > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?q={!child%20q={} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-14) [ x:films] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException: Query must not be null > at java.util.Objects.requireNonNull(Objects.java:228) > at org.apache.lucene.search.BooleanClause.(BooleanClause.java:60) > at org.apache.lucene.search.BooleanQuery$Builder.add(BooleanQuery.java:127) > at > org.apache.solr.search.join.BlockJoinChildQParser.noClausesQuery(BlockJoinChildQParser.java:50) > at org.apache.solr.search.join.FiltersQParser.parse(FiltersQParser.java:60) > at org.apache.solr.search.QParser.getQuery(QParser.java:173) > at > org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158) > at > 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > In org/apache/solr/search/join/BlockJoinChildQParser.java[47] there is > computed query variable 'parents', which receives value null from call to > 'parseParentFilter()'. The null value is then passed to > 'org.apache.lucene.search.BooleanQuery.Builder.add' method at line 50. That > method calls the constructor where 'Objects.requireNonNull' fails > (the exception is thrown). > The call to 'parseParentFilter()' evaluates to null, because: > # In org/apache/solr/search/join/BlockJoinParentQParser.java[59] null is > set to string 'filter' (because "which" is not in 'localParams' map). > # The parser 'parentParser' obtained in the next line has member 'qstr' set > to null, because the 'filter' passed to 'subQuery' is passed as the first > argument to
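The fix direction Charles describes (surface a syntax error, which Solr turns into a 400 response, instead of letting the null query reach BooleanQuery.Builder#add) can be sketched generically. This is a hypothetical stand-in, not the attached SOLR-13182.patch; the class and method names below are illustrative only:

```java
// Hypothetical sketch: validate the parsed parent filter before it is
// handed to BooleanQuery.Builder#add, where the NPE currently surfaces.
public class NullQueryGuard {

    // Stand-in for Solr's SyntaxError, which request handling maps to an
    // HTTP 400 response rather than a 500.
    static class SyntaxError extends Exception {
        SyntaxError(String msg) { super(msg); }
    }

    // 'parents' plays the role of the result of parseParentFilter(); per the
    // analysis above it is null when the "which" local param is absent,
    // as in q={!child q={}
    static Object requireParentFilter(Object parents) throws SyntaxError {
        if (parents == null) {
            throw new SyntaxError("a parent filter is required for block join queries");
        }
        return parents;
    }

    public static void main(String[] args) {
        try {
            requireParentFilter(null);
        } catch (SyntaxError e) {
            System.out.println("400 Bad Request: " + e.getMessage());
        }
    }
}
```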
[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate
[ https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816367#comment-16816367 ] Kevin Risden commented on SOLR-13396: - I agree that arbitrarily deleting data is bad. The other issue is how do you clean up if you JUST have the error/warn. Would be nice to know what you needed to do in addition that it was a problem. So I will caveat this by saying I have no idea how this works today, but when I read this I thought it would make sense for each node responsible for a shard/collection would have to "ack" that the operation was complete. If the node was down at the time, when it comes up it should know it needs to do "xyz" and finish the operation. Again not sure of the ZK details, but some rough ideas: * Create a znode for each node with list of operations it needs to complete - this would be written to by the leader? * Keep track of which operations each node completed on existing list before deleting? - I think this could be hard since leader could change? Some of the concerns would be added load on ZK for reading/writing operations. The above could have already been thought about when building Solr Cloud so it might be a nonstarter. > SolrCloud will delete the core data for any core that is not referenced in > the clusterstate > --- > > Key: SOLR-13396 > URL: https://issues.apache.org/jira/browse/SOLR-13396 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.3.1, 8.0 >Reporter: Shawn Heisey >Priority: Major > > SOLR-12066 is an improvement designed to delete core data for replicas that > were deleted while the node was down -- better cleanup. > In practice, that change causes SolrCloud to delete all core data for cores > that are not referenced in the ZK clusterstate. 
If all the ZK data gets > deleted or the Solr instance is pointed at a ZK ensemble with no data, it > will proceed to delete all of the cores in the solr home, with no possibility > of recovery. > I do not think that Solr should ever delete core data unless an explicit > DELETE action has been made and the node is operational at the time of the > request. If a core exists during startup that cannot be found in the ZK > clusterstate, it should be ignored (not started) and a helpful message should > be logged. I think that message should probably be at WARN so that it shows > up in the admin UI logging tab with default settings. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13182) NullPointerException due to an invariant violation in org/apache/lucene/search/BooleanClause.java[60]
[ https://issues.apache.org/jira/browse/SOLR-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charles Sanders updated SOLR-13182: --- Attachment: SOLR-13182.patch > NullPointerException due to an invariant violation in > org/apache/lucene/search/BooleanClause.java[60] > - > > Key: SOLR-13182 > URL: https://issues.apache.org/jira/browse/SOLR-13182 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Marek >Priority: Minor > Labels: diffblue, newdev > Attachments: SOLR-13182.patch, home.zip > > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?q={!child%20q={} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-14) [ x:films] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException: Query must not be null > at java.util.Objects.requireNonNull(Objects.java:228) > at org.apache.lucene.search.BooleanClause.(BooleanClause.java:60) > at org.apache.lucene.search.BooleanQuery$Builder.add(BooleanQuery.java:127) > at > org.apache.solr.search.join.BlockJoinChildQParser.noClausesQuery(BlockJoinChildQParser.java:50) > at org.apache.solr.search.join.FiltersQParser.parse(FiltersQParser.java:60) > at org.apache.solr.search.QParser.getQuery(QParser.java:173) > at > org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158) > at > 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > In org/apache/solr/search/join/BlockJoinChildQParser.java[47] there is > computed query variable 'parents', which receives value null from call to > 'parseParentFilter()'. The null value is then passed to > 'org.apache.lucene.search.BooleanQuery.Builder.add' method at line 50. That > method calls the constructor where 'Objects.requireNonNull' fails > (the exception is thrown). > The call to 'parseParentFilter()' evaluates to null, because: > # In org/apache/solr/search/join/BlockJoinParentQParser.java[59] null is > set to string 'filter' (because "which" is not in 'localParams' map). > # The parser 'parentParser' obtained in the next line has member 'qstr' set > to null, because the 'filter' passed to 'subQuery' is passed as the first > argument to 'org.apache.solr.search.QParserPlugin.createParser'. > # Subsequent call to 'org.apache.solr.search.QParser.getQuery' on the > 'parentParser' at >
[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-13047: -- Description: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} *Implementation Note:* The implementation plan for this ticket is to create a new stream called Facet2DStream. The FacetStream code is a good starting point for the new implementation and can be adapted for the Facet2D parameters. Similar tests to the FacetStream can be added to StreamExpressionTest was: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. 
Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} *Implementation Note:* The implementation plan for this ticket is to create a new stream called Facet2DStream. The FacetStream code is a good starting point for the new implementation and can be adapted for the Facet2D parameters. > Add facet2D Streaming Expression > > > Key: SOLR-13047 > URL: https://issues.apache.org/jira/browse/SOLR-13047 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > The current facet expression is a generic tool for creating multi-dimension > aggregations. The *facet2D* Streaming Expression has semantics specific for 2 > dimensional facets which are designed to be *pivoted* into a matrix and > operated on by *Math Expressions*. > facet2D will use the json facet API under the covers. > Proposed syntax: > {code:java} > facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", > count(*)){code} > The example above will return tuples containing the top 300 diseases and the > top ten symptoms for each disease. 
> Using math expression the tuples can be *pivoted* into a matrix where the > rows of the matrix are the diseases, the columns of the matrix are the > symptoms and the cells in the matrix contain the counts. This matrix can then > be *clustered* to find clusters of *diseases* that are correlated by > *symptoms*. > {code:java} > let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, > 10", count(*)), > b=pivot(a, diseases, symptoms, count(*)), > c=kmeans(b, 10)){code} > > *Implementation Note:* > The implementation plan for this ticket is to create a new stream called > Facet2DStream. The FacetStream code is a good starting point for the new > implementation and can be adapted for the Facet2D parameters. Similar tests > to the FacetStream can be added to StreamExpressionTest > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
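The pivot step that facet2D is designed to feed can be sketched outside Solr. Below is a hedged, self-contained illustration with toy data; it is not the proposed Facet2DStream or the Math Expressions `pivot` implementation, just the shape of the transformation (x, y, count) tuples -> matrix:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: pivot facet2D-style tuples into a nested map, where
// outer keys are the x dimension (diseases), inner keys are the y dimension
// (symptoms), and values are the counts.
public class PivotSketch {

    static Map<String, Map<String, Long>> pivot(List<String[]> tuples) {
        Map<String, Map<String, Long>> matrix = new LinkedHashMap<>();
        for (String[] t : tuples) { // t = {x, y, count}
            matrix.computeIfAbsent(t[0], k -> new LinkedHashMap<>())
                  .put(t[1], Long.parseLong(t[2]));
        }
        return matrix;
    }

    public static void main(String[] args) {
        List<String[]> tuples = Arrays.asList(
            new String[] {"flu", "fever", "120"},
            new String[] {"flu", "cough", "90"},
            new String[] {"measles", "fever", "40"});
        System.out.println(pivot(tuples));
    }
}
```

A clustering step like `kmeans(b, 10)` would then operate on the rows of this matrix, grouping diseases with similar symptom-count profiles.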
[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-13047: -- Description: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} *Implementation Note:* The implementation plan for this ticket is to create a new stream called Facet2DStream. The FacetStream code is a good starting point for the new implementation and can be adapted for the Facet2D parameters. was: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. 
Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} *Implementation Note:* The implementation plan for this ticket is to create a new stream called Facet2DStream. The FacetStream code is a good starting point for the new implementation and can be adapt for the Facet2D parameters. > Add facet2D Streaming Expression > > > Key: SOLR-13047 > URL: https://issues.apache.org/jira/browse/SOLR-13047 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > The current facet expression is a generic tool for creating multi-dimension > aggregations. The *facet2D* Streaming Expression has semantics specific for 2 > dimensional facets which are designed to be *pivoted* into a matrix and > operated on by *Math Expressions*. > facet2D will use the json facet API under the covers. > Proposed syntax: > {code:java} > facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", > count(*)){code} > The example above will return tuples containing the top 300 diseases and the > top ten symptoms for each disease. > Using math expression the tuples can be *pivoted* into a matrix where the > rows of the matrix are the diseases, the columns of the matrix are the > symptoms and the cells in the matrix contain the counts. This matrix can then > be *clustered* to find clusters of *diseases* that are correlated by > *symptoms*. 
> {code:java} > let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, > 10", count(*)), > b=pivot(a, diseases, symptoms, count(*)), > c=kmeans(b, 10)){code} > > *Implementation Note:* > The implementation plan for this ticket is to create a new stream called > Facet2DStream. The FacetStream code is a good starting point for the new > implementation and can be adapted for the Facet2D parameters. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
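The pivot step the description relies on (facet tuples into a matrix whose rows are the x dimension and columns are the y dimension) can be sketched outside Solr in plain Java. This is an illustrative stand-alone sketch, not the proposed Facet2DStream or the Math Expressions `pivot` implementation; the class name and tuple layout are hypothetical.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;

// Hypothetical sketch of the pivot step: tuples of (disease, symptom, count)
// become a matrix with rows = diseases, columns = symptoms, cells = counts.
public class PivotSketch {

    public static double[][] pivot(List<String[]> tuples) {
        // Preserve first-seen order of row and column labels.
        LinkedHashMap<String, Integer> rows = new LinkedHashMap<>();
        LinkedHashMap<String, Integer> cols = new LinkedHashMap<>();
        for (String[] t : tuples) {
            rows.putIfAbsent(t[0], rows.size());
            cols.putIfAbsent(t[1], cols.size());
        }
        double[][] m = new double[rows.size()][cols.size()];
        for (String[] t : tuples) {
            m[rows.get(t[0])][cols.get(t[1])] = Double.parseDouble(t[2]);
        }
        return m; // cells for unseen (row, col) pairs stay 0.0
    }

    public static void main(String[] args) {
        List<String[]> tuples = Arrays.asList(
            new String[] {"flu", "fever", "120"},
            new String[] {"flu", "cough", "90"},
            new String[] {"measles", "fever", "40"},
            new String[] {"measles", "rash", "70"});
        // Row 0 = flu, row 1 = measles; a clustering step (e.g. kmeans)
        // would then operate on these rows.
        System.out.println(Arrays.deepToString(pivot(tuples)));
    }
}
```

A clustering call such as `kmeans(b, 10)` in the expression above would then group the matrix rows (diseases) by similarity of their symptom-count columns.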
[jira] [Updated] (SOLR-13047) Add facet2D Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-13047: -- Description: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} *Implementation Note:* The implementation plan for this ticket is to create a new stream called Facet2DStream. The FacetStream code is a good starting point for the new implementation and can be adapt for the Facet2D parameters. was: The current facet expression is a generic tool for creating multi-dimension aggregations. The *facet2D* Streaming Expression has semantics specific for 2 dimensional facets which are designed to be *pivoted* into a matrix and operated on by *Math Expressions*. facet2D will use the json facet API under the covers. Proposed syntax: {code:java} facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)){code} The example above will return tuples containing the top 300 diseases and the top ten symptoms for each disease. 
Using math expression the tuples can be *pivoted* into a matrix where the rows of the matrix are the diseases, the columns of the matrix are the symptoms and the cells in the matrix contain the counts. This matrix can then be *clustered* to find clusters of *diseases* that are correlated by *symptoms*. {code:java} let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", count(*)), b=pivot(a, diseases, symptoms, count(*)), c=kmeans(b, 10)){code} > Add facet2D Streaming Expression > > > Key: SOLR-13047 > URL: https://issues.apache.org/jira/browse/SOLR-13047 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > The current facet expression is a generic tool for creating multi-dimension > aggregations. The *facet2D* Streaming Expression has semantics specific for 2 > dimensional facets which are designed to be *pivoted* into a matrix and > operated on by *Math Expressions*. > facet2D will use the json facet API under the covers. > Proposed syntax: > {code:java} > facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", > count(*)){code} > The example above will return tuples containing the top 300 diseases and the > top ten symptoms for each disease. > Using math expression the tuples can be *pivoted* into a matrix where the > rows of the matrix are the diseases, the columns of the matrix are the > symptoms and the cells in the matrix contain the counts. This matrix can then > be *clustered* to find clusters of *diseases* that are correlated by > *symptoms*. > {code:java} > let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, > 10", count(*)), > b=pivot(a, diseases, symptoms, count(*)), > c=kmeans(b, 10)){code} > > *Implementation Note:* > The implementation plan for this ticket is to create a new stream called > Facet2DStream. 
The FacetStream code is a good starting point for the new > implementation and can be adapted for the Facet2D parameters. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.
[ https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816353#comment-16816353 ] Karl Stoney commented on SOLR-13293: Anyone have any ideas about this? We're unable to upgrade to 8x as a result. > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. > - > > Key: SOLR-13293 > URL: https://issues.apache.org/jira/browse/SOLR-13293 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 8.0 >Reporter: Karl Stoney >Priority: Minor > > Hi, > Testing out branch_8x, we're randomly seeing the following errors on a simple > 3 node cluster. It doesn't appear to affect replication (the cluster remains > green). > They come in (mass, literally 1000s at a time) bulk. > There we no network issues at the time. > {code:java} > 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 > r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk > s:shard1] ERROR > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error > consuming and closing http response stream. 
> java.nio.channels.AsynchronousCloseException: null > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191] > at > org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) > ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root > - 2019-03-04 16:30:04] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 > 16:30:04] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT > b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816348#comment-16816348 ] Karl Stoney edited comment on SOLR-13285 at 4/12/19 3:18 PM: - Is there any plan to back port this to solr `7.7.2`? The fact we're having to build solr 7 from a branch just to monkeypatch this is kinda frustrating as 7x is LTS? I've even provided the patch for 7x above which fixes it, I just need someone to actually apply it. [~noble.paul] [~gerlowskija]? was (Author: kstoney): Is there any plan to back port this to solr `7.7.2`? The fact we're having to build solr 7 from a branch just to monkeypatch this is kinda frustrating as 7x is LTS? [~noble.paul] [~gerlowskija]? > ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during > replication > - > > Key: SOLR-13285 > URL: https://issues.apache.org/jira/browse/SOLR-13285 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: replication (java), SolrCloud, SolrJ >Affects Versions: 7.7, 7.7.1, 8.0 > Environment: centos 7 > solrcloud 7.7.1, 8.1.0 >Reporter: Karl Stoney >Assignee: Noble Paul >Priority: Major > Labels: newbie, replication > Attachments: SOLR-13285.patch, SOLR-13285.patch > > > Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing > the following errors in the SolrCloud elected master for a given collection > when updates are written. This was after a full reindex of data (fresh > build). 
> {code:java} > request: > http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2 > Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence > cannot be cast to java.lang.String > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} > Following this through to the replica, you'll see: > {code:java} > 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - > null:java.lang.ClassCastException: > org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to > java.lang.String > at > org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at >
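The failure shape in the trace above is generic Java: a `CharSequence` implementation other than `String` (here Solr's `ByteArrayUtf8CharSequence`) can never be cast to `String`; it has to be converted with `toString()`. A minimal stdlib sketch of both the failing cast and the safe conversion, using `CharBuffer` as a stand-in for the Solr-specific class (the class and method names below are hypothetical, not Solr's actual patch):

```java
import java.nio.CharBuffer;

// CharBuffer stands in for org.apache.solr.common.util.ByteArrayUtf8CharSequence:
// both implement CharSequence without being java.lang.String.
public class CastSketch {

    // The same shape of failure as in the replica log: a blind (String) cast.
    public static String unsafe(Object enumValue) {
        return (String) enumValue; // throws ClassCastException for non-String CharSequences
    }

    // The safe pattern: accept any CharSequence and convert explicitly.
    public static String safe(Object enumValue) {
        return (enumValue instanceof CharSequence)
            ? ((CharSequence) enumValue).toString()
            : null;
    }

    public static void main(String[] args) {
        Object value = CharBuffer.wrap("SEVERE"); // e.g. an enum field value read from javabin
        try {
            unsafe(value);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the replica log");
        }
        System.out.println(safe(value));
    }
}
```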
[GitHub] [lucene-solr] joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482613805 Ok, this is a great ticket to work on: https://issues.apache.org/jira/browse/SOLR-13047 I'll update it with some thoughts on how to get started. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13392) Unable to start prometheus-exporter in 7x branch
[ https://issues.apache.org/jira/browse/SOLR-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816350#comment-16816350 ] Karl Stoney commented on SOLR-13392: [~ichattopadhyaya] any input? As far as I can tell this will be broken for 7.7.2 when released > Unable to start prometheus-exporter in 7x branch > > > Key: SOLR-13392 > URL: https://issues.apache.org/jira/browse/SOLR-13392 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Affects Versions: 7.7.2 >Reporter: Karl Stoney >Priority: Major > > Hi, > prometheus-exporter doesn't start in branch 7x on commit > 7dfe1c093b65f77407c2df4c2a1120a213aef166, it does work on > 26b498d0a9d25626a15e25b0cf97c8339114263a so something has changed between > those two commits causing this. > I am presuming it is > https://github.com/apache/lucene-solr/commit/e1eeafb5dc077976646b06f4cba4d77534963fa9#diff-3f7b27f0f087632739effa2aa508d77eR34 > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/lucene/util/IOUtils > at > org.apache.solr.core.SolrResourceLoader.close(SolrResourceLoader.java:881) > at > org.apache.solr.prometheus.exporter.SolrExporter.loadMetricsConfiguration(SolrExporter.java:221) > at > org.apache.solr.prometheus.exporter.SolrExporter.main(SolrExporter.java:205) > Caused by: java.lang.ClassNotFoundException: org.apache.lucene.util.IOUtils > at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 3 more -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication
[ https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816348#comment-16816348 ] Karl Stoney commented on SOLR-13285: Is there any plan to back port this to solr `7.7.2`? The fact we're having to build solr 7 from a branch just to monkeypatch this is kinda frustrating as 7x is LTS? [~noble.paul] [~gerlowskija]? > ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during > replication > - > > Key: SOLR-13285 > URL: https://issues.apache.org/jira/browse/SOLR-13285 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: replication (java), SolrCloud, SolrJ >Affects Versions: 7.7, 7.7.1, 8.0 > Environment: centos 7 > solrcloud 7.7.1, 8.1.0 >Reporter: Karl Stoney >Assignee: Noble Paul >Priority: Major > Labels: newbie, replication > Attachments: SOLR-13285.patch, SOLR-13285.patch > > > Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing > the following errors in the SolrCloud elected master for a given collection > when updates are written. This was after a full reindex of data (fresh > build). 
> {code:java} > request: > http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2 > Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence > cannot be cast to java.lang.String > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > ~[metrics-core-3.2.6.jar:3.2.6] > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - > ishan - 2019-02-23 02:39:09] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_191] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_191] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191] > {code} > Following this through to the replica, you'll see: > {code:java} > 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - > null:java.lang.ClassCastException: > org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to > java.lang.String > at > org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235) > at > org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298) > at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278) > at > org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191) > at >
[jira] [Commented] (SOLR-13366) AutoScalingConfig 'Invalid stage name' warnings after upgrade
[ https://issues.apache.org/jira/browse/SOLR-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816346#comment-16816346 ] ASF subversion and git services commented on SOLR-13366: Commit cae323629e437c47855c4c8578f76310fd2b7b84 in lucene-solr's branch refs/heads/branch_8x from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cae3236 ] SOLR-13366: Clarify 'Invalid stage name' warning logging in AutoScalingConfig > AutoScalingConfig 'Invalid stage name' warnings after upgrade > - > > Key: SOLR-13366 > URL: https://issues.apache.org/jira/browse/SOLR-13366 > Project: Solr > Issue Type: Bug > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13366.patch, SOLR-13366.patch > > > I noticed WARNings like this in some of our logs: > {code:java} > ... OverseerAutoScalingTriggerThread ... o.a.s.c.s.c.a.AutoScalingConfig > Invalid stage name '.auto_add_replicas.system' in listener config, skipping: > {beforeAction=[], afterAction=[], trigger=.auto_add_replicas, stage=[WAITING, > STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION], > class=org.apache.solr.cloud.autoscaling.SystemLogListener} > {code} > After some detective work I think I've tracked this down to 7.1.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > having a {{WAITING}} stage and that stage having been removed in 7.2.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > via the SOLR-11320 changes. 
Haven't tried to reproduce it but my theory is > that the listener got auto-created (with the {{WAITING}} stage) when the > cloud was running pre-7.2.0 code and then after upgrading the warnings start > to appear. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
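The warning itself comes from config parsing that drops stage names no longer present in the enum. A hypothetical stdlib-only sketch of that skip-and-warn pattern (the enum constants mirror the stage names listed in the warning, minus the removed `WAITING`; class and method names are illustrative, not the AutoScalingConfig source):

```java
import java.util.ArrayList;
import java.util.List;

public class StageFilterSketch {

    // Stages as listed in the warning, minus WAITING (removed via SOLR-11320).
    public enum Stage { STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION }

    // Keep recognized stage names, warn and skip the stale ones.
    public static List<Stage> parseStages(List<String> names) {
        List<Stage> valid = new ArrayList<>();
        for (String name : names) {
            try {
                valid.add(Stage.valueOf(name.trim().toUpperCase()));
            } catch (IllegalArgumentException e) {
                // This is the point at which AutoScalingConfig would log its
                // "Invalid stage name ... skipping" warning.
                System.out.println("Invalid stage name '" + name + "' in listener config, skipping");
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        // A listener auto-created on pre-7.2.0 code still carries WAITING.
        System.out.println(parseStages(List.of("WAITING", "STARTED", "FAILED")));
    }
}
```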
[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate
[ https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816343#comment-16816343 ] Erick Erickson commented on SOLR-13396: --- This is a sticky wicket. Let's claim I have a 200 node cluster hosting 1,000 collections. Keeping track of all the cores that aren't _really_ part of a collection and manually cleaning them up is an onerous task. Yet it's pretty horrible to have one mistake (someone edits the startup script and messes up the ZK parameter and pushes it out to all the Solr nodes and restarts the cluster) one could delete everything everywhere. More thinking out loud, and I have no clue how it'd interact with autoscaling. It seems odd but we _could_ use ZooKeeper to keep a list of potential nodes to delete and have 1> a way to view/list them 2> a button to push or a collections API command to issue or.. to say "delete them". 3> some kind of very visible warning that this list is not empty. "But wait!!" you cry, The whole problem is that you can't get to ZooKeeper in the first place!" Which is perfectly fine, since we're presupposing a bogus ZK address anyway. That way the nodes to delete would be tied to the proper ZK instance. When the ZK address was corrected, there wouldn't be anything in the queue. I think I like this a little better than some sort of scheduled-in-the-future event, for people who cared a cron job that issued the collections API call could be done. One could even attach a date to the znode for the potential core to delete with an expiration date. > SolrCloud will delete the core data for any core that is not referenced in > the clusterstate > --- > > Key: SOLR-13396 > URL: https://issues.apache.org/jira/browse/SOLR-13396 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: SolrCloud >Affects Versions: 7.3.1, 8.0 >Reporter: Shawn Heisey >Priority: Major > > SOLR-12066 is an improvement designed to delete core data for replicas that > were deleted while the node was down -- better cleanup. > In practice, that change causes SolrCloud to delete all core data for cores > that are not referenced in the ZK clusterstate. If all the ZK data gets > deleted or the Solr instance is pointed at a ZK ensemble with no data, it > will proceed to delete all of the cores in the solr home, with no possibility > of recovery. > I do not think that Solr should ever delete core data unless an explicit > DELETE action has been made and the node is operational at the time of the > request. If a core exists during startup that cannot be found in the ZK > clusterstate, it should be ignored (not started) and a helpful message should > be logged. I think that message should probably be at WARN so that it shows > up in the admin UI logging tab with default settings. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13366) AutoScalingConfig 'Invalid stage name' warnings after upgrade
[ https://issues.apache.org/jira/browse/SOLR-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816342#comment-16816342 ] ASF subversion and git services commented on SOLR-13366: Commit fe1a1094763a8b21c11a9a21ed81df46e5e135e7 in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fe1a109 ] SOLR-13366: Clarify 'Invalid stage name' warning logging in AutoScalingConfig > AutoScalingConfig 'Invalid stage name' warnings after upgrade > - > > Key: SOLR-13366 > URL: https://issues.apache.org/jira/browse/SOLR-13366 > Project: Solr > Issue Type: Bug > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13366.patch, SOLR-13366.patch > > > I noticed WARNings like this in some of our logs: > {code:java} > ... OverseerAutoScalingTriggerThread ... o.a.s.c.s.c.a.AutoScalingConfig > Invalid stage name '.auto_add_replicas.system' in listener config, skipping: > {beforeAction=[], afterAction=[], trigger=.auto_add_replicas, stage=[WAITING, > STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION], > class=org.apache.solr.cloud.autoscaling.SystemLogListener} > {code} > After some detective work I think I've tracked this down to 7.1.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > having a {{WAITING}} stage and that stage having been removed in 7.2.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > via the SOLR-11320 changes. 
Haven't tried to reproduce it but my theory is > that the listener got auto-created (with the {{WAITING}} stage) when the > cloud was running pre-7.2.0 code and then after upgrading the warnings start > to appear. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13270) SolrJ does not send "Expect: 100-continue" header
[ https://issues.apache.org/jira/browse/SOLR-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816338#comment-16816338 ] Jason Gerlowski commented on SOLR-13270: Spent some more time testing the attached patch, and I'm still not confident it's working the way we hoped. I debugged things through SolrJ and verified that with the patch the custom RequestConfig does _not_ get overridden by the default RequestConfig. The custom RequestConfig makes it into HttpClient-land. Which is good. But I'm still not seeing an "Expect" header from the request. I spent a bit of time tracing it through the HttpComponent code last night, but couldn't find anywhere that uses RequestConfig's "expect" getter. Surely I'm just missing it, but I'm not sure how to proceed without help from someone with more HttpComponent know-how. [~erlendfg] can you double check me, and verify whether the patch fixes the problem for you? > SolrJ does not send "Expect: 100-continue" header > - > > Key: SOLR-13270 > URL: https://issues.apache.org/jira/browse/SOLR-13270 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.7 >Reporter: Erlend Garåsen >Assignee: Jason Gerlowski >Priority: Major > Attachments: SOLR-13270.patch > > > SolrJ does not set the "Expect: 100-continue" header, even though it's > configured in HttpClient: > {code:java} > builder.setDefaultRequestConfig(RequestConfig.custom().setExpectContinueEnabled(true).build());{code} > A HttpClient developer has reviewed the code and says we're setting up > the client correctly, so we have a reason to believe there is a bug in > SolrJ. 
It's actually a problem we are facing in ManifoldCF, explained in: > https://issues.apache.org/jira/browse/CONNECTORS-1564 > The problem can be reproduced by building and running the following small > Maven project: > [http://folk.uio.no/erlendfg/solr/missing-header.zip] > The application runs SolrJ code where the header does not show up and > HttpClient code where the header is present. > > {code:java} > HttpClientBuilder builder = HttpClients.custom(); > // This should add an Expect: 100-continue header: > builder.setDefaultRequestConfig(RequestConfig.custom().setExpectContinueEnabled(true).build()); > HttpClient httpClient = builder.build(); > // Start Solr and create a core named "test". > String baseUrl = "http://localhost:8983/solr/test"; > // Test using SolrJ — no expect 100 header > HttpSolrClient client = new HttpSolrClient.Builder() > .withHttpClient(httpClient) > .withBaseSolrUrl(baseUrl).build(); > SolrQuery query = new SolrQuery(); > query.setQuery("*:*"); > client.query(query); > // Test using HttpClient directly — expect 100 header shows up: > HttpPost httpPost = new HttpPost(baseUrl); > HttpEntity entity = new InputStreamEntity(new > ByteArrayInputStream("test".getBytes())); > httpPost.setEntity(entity); > httpClient.execute(httpPost); > {code} > When using the last HttpClient test, the expect 100 header appears in > missing-header.log: > {noformat} > http-outgoing-1 >> Expect: 100-continue{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
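One way to check what is actually on the wire, without wire-level logging, is a throwaway header-echo server: point the client at it and inspect the raw request headers. A hypothetical stdlib-only sketch (the `ExpectProbe` name and the hand-written client request are illustrative; in practice the client side would be SolrJ):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal probe: accept one connection, scan the request's header block,
// and report whether "Expect: 100-continue" was actually sent.
public class ExpectProbe {

    public static boolean sawExpect(BufferedReader in) throws IOException {
        String line;
        boolean saw = false;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            String lower = line.toLowerCase();
            if (lower.startsWith("expect:") && lower.contains("100-continue")) {
                saw = true;
            }
        }
        return saw;
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            // Stand-in client; a real test would aim an HttpSolrClient here.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port)) {
                    OutputStream out = s.getOutputStream();
                    out.write(("POST /solr/test/update HTTP/1.1\r\nHost: localhost\r\n"
                            + "Expect: 100-continue\r\nContent-Length: 4\r\n\r\ntest")
                            .getBytes("US-ASCII"));
                    out.flush();
                } catch (IOException ignored) { }
            });
            client.start();
            try (Socket conn = server.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "US-ASCII"));
                System.out.println("Expect header seen: " + sawExpect(in));
            }
            client.join();
        }
    }
}
```

Note that `Expect: 100-continue` only applies to requests that carry a body, which is another reason a probe like this is useful: it shows per-request whether the header was sent at all.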
[jira] [Resolved] (SOLR-10682) Add variance Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein resolved SOLR-10682. --- Resolution: Duplicate > Add variance Stream Evaluator > - > > Key: SOLR-10682 > URL: https://issues.apache.org/jira/browse/SOLR-10682 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Priority: Major > > The variance Stream Evaluator will calculate the variance of a vector of > numbers. > {code} > v = var(colA) > {code}
[jira] [Created] (SOLR-13399) compositeId support for shard splitting
Yonik Seeley created SOLR-13399: --- Summary: compositeId support for shard splitting Key: SOLR-13399 URL: https://issues.apache.org/jira/browse/SOLR-13399 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Yonik Seeley Shard splitting does not currently have a way to automatically take into account the actual distribution (number of documents) in each hash bucket created by using compositeId hashing. We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* command that would look at the number of docs sharing each compositeId prefix and use that to create roughly equal sized buckets by document count rather than just assuming an equal distribution across the entire hash range. Like normal shard splitting, we should bias against splitting within hash buckets unless necessary (since that leads to larger query fanout). Perhaps this warrants a parameter that would control how much of a size mismatch is tolerable before resorting to splitting within a bucket. *allowedSizeDifference*? To more quickly calculate the number of docs in each bucket, we could index the prefix in a different field. Iterating over the terms for this field would quickly give us the number of docs in each (i.e. Lucene keeps track of the doc count for each term already). Perhaps the implementation could be a flag on the *id* field... something like *indexPrefixes* and poly-fields that would cause the indexing to be automatically done and alleviate having to pass in an additional field during indexing and during the call to *SPLITSHARD*. This whole part is an optimization though and could be split off into its own issue if desired.
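The bucket-balancing idea above can be sketched as follows. This is an illustration only: `choose_split_point` and the example prefix counts are hypothetical names and data, not the actual SPLITSHARD implementation.

```python
def choose_split_point(prefix_counts):
    """Given (prefix, doc_count) pairs in hash order, pick the boundary
    between prefixes that divides documents most evenly in two.
    Splitting only on prefix boundaries avoids breaking a compositeId
    bucket across shards (which would increase query fanout)."""
    total = sum(count for _, count in prefix_counts)
    running = 0
    best_index, best_diff = 1, total
    for i, (_, count) in enumerate(prefix_counts[:-1], start=1):
        running += count
        diff = abs(total - 2 * running)  # |left half - right half|
        if diff < best_diff:
            best_diff, best_index = diff, i
    # split between prefix_counts[best_index - 1] and prefix_counts[best_index]
    return best_index

# Skewed distribution: a naive mid-range hash split would put almost all
# docs in one child shard; counting docs per prefix balances the children.
counts = [("customerA", 900), ("customerB", 50), ("customerC", 60)]
print(choose_split_point(counts))  # -> 1
```

With a uniform distribution the chosen boundary falls in the middle, matching what plain SPLITSHARD assumes today.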
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816335#comment-16816335 ] Steve Rowe commented on LUCENE-2562: {{ant nightly-smoke}} succeeded for me: bq.[smoker] SUCCESS! [0:37:23.510122] Uwe is right - {{verifyPOMperBinaryArtifact()}} just makes sure that each binary artifact *in the {{maven/}} directory* has a POM, and since Luke doesn't put anything in there, no problem is detected. +1 to merge/cherry-pick. > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. 
There is still a *lot* to do.
[JENKINS] Lucene-Solr-Tests-master - Build # 3267 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3267/ 1 tests failed. FAILED: org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([5EE09112E6DD5B04:B3F929D032BB0165]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.lucene.document.TestLatLonShapeEncoding.verifyEncoding(TestLatLonShapeEncoding.java:533) at org.apache.lucene.document.TestLatLonShapeEncoding.testRandomLineEncoding(TestLatLonShapeEncoding.java:475) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 10207 lines...] [junit4] Suite: org.apache.lucene.document.TestLatLonShapeEncoding [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLatLonShapeEncoding -Dtests.method=testRandomLineEncoding -Dtests.seed=5EE09112E6DD5B04 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-BR -Dtests.timezone=Africa/Ouagadougou -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.03s J2 | TestLatLonShapeEncoding.testRandomLineEncoding <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at
[jira] [Updated] (SOLR-13366) AutoScalingConfig 'Invalid stage name' warnings after upgrade
[ https://issues.apache.org/jira/browse/SOLR-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-13366: --- Attachment: SOLR-13366.patch > AutoScalingConfig 'Invalid stage name' warnings after upgrade > - > > Key: SOLR-13366 > URL: https://issues.apache.org/jira/browse/SOLR-13366 > Project: Solr > Issue Type: Bug > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13366.patch, SOLR-13366.patch > > > I noticed WARNings like this in some of our logs: > {code:java} > ... OverseerAutoScalingTriggerThread ... o.a.s.c.s.c.a.AutoScalingConfig > Invalid stage name '.auto_add_replicas.system' in listener config, skipping: > {beforeAction=[], afterAction=[], trigger=.auto_add_replicas, stage=[WAITING, > STARTED, ABORTED, SUCCEEDED, FAILED, BEFORE_ACTION, AFTER_ACTION], > class=org.apache.solr.cloud.autoscaling.SystemLogListener} > {code} > After some detective work I think I've tracked this down to 7.1.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > having a {{WAITING}} stage and that stage having been removed in 7.2.0 > [TriggerEventProcessorStage|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/TriggerEventProcessorStage.java] > via the SOLR-11320 changes. Haven't tried to reproduce it but my theory is > that the listener got auto-created (with the {{WAITING}} stage) when the > cloud was running pre-7.2.0 code and then after upgrading the warnings start > to appear. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
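The theory above can be illustrated with a small sketch. This is hypothetical code, not AutoScalingConfig itself: the stage set and `parse_stages` are stand-ins showing how a stage name persisted by 7.1.0 (`WAITING`) fails to match the post-7.2.0 enum and is skipped with a warning instead of failing the whole listener config.

```python
# Stages as of 7.2.0, after SOLR-11320 removed WAITING (illustrative set).
VALID_STAGES = {"STARTED", "ABORTED", "SUCCEEDED", "FAILED",
                "BEFORE_ACTION", "AFTER_ACTION", "IGNORED"}

def parse_stages(configured, log=print):
    """Keep only stage names that are valid in the running version;
    warn about and skip the rest (e.g. WAITING left over from 7.1.0)."""
    kept = []
    for name in configured:
        if name in VALID_STAGES:
            kept.append(name)
        else:
            log("Invalid stage name '%s' in listener config, skipping" % name)
    return kept

print(parse_stages(["WAITING", "STARTED", "FAILED"], log=lambda m: None))
# -> ['STARTED', 'FAILED']
```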
[GitHub] [lucene-solr] NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482605657 I would be so grateful if you could give me a few suggestions to work on. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816304#comment-16816304 ] Steve Rowe commented on LUCENE-2562: [~Tomoko Uchida], I'm re-running {{ant nightly-smoke}} from the branch after your latest commit - you should be able to do the same. > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do. 
[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816300#comment-16816300 ] Tomoko Uchida edited comment on LUCENE-2562 at 4/12/19 2:17 PM: {quote}Tomoko Uchida: can you just fix the above code and commit that, too. You just need to add "luke". {quote} I added 'luke' to the line: [https://github.com/apache/lucene-solr/commit/f85819985b5a9c0c10c0e810c1826cd40be735d2] (did not do any confirmation, just fixed the python code...) was (Author: tomoko uchida): bq. Tomoko Uchida: can you just fix the above code and commit that, too. You just need to add "luke". I added 'luke' to the line: https://github.com/apache/lucene-solr/commit/f85819985b5a9c0c10c0e810c1826cd40be735d2 (did not do any confirmation, just fix the python code...) > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. 
> While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816300#comment-16816300 ] Tomoko Uchida commented on LUCENE-2562: --- bq. Tomoko Uchida: can you just fix the above code and commit that, too. You just need to add "luke". I added 'luke' to the line: https://github.com/apache/lucene-solr/commit/f85819985b5a9c0c10c0e810c1826cd40be735d2 (did not do any confirmation, just fix the python code...) > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do. 
[GitHub] [lucene-solr] joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482589219 > I think, that is fine. In the future I would like to contribute more for streaming expressions as I am working on this topic. Ok, ping me if you have a PR to look at. Also let me know if you want suggestions for things to work on.
[jira] [Resolved] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein resolved SOLR-13391. --- Resolution: Resolved Fix Version/s: 8.1 > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Fix For: 8.1 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. For example, > let(echo="m,v,sd", arr=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and sttdev functions separately as a stream evaluator. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
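For reference, the values the new var and stddev evaluators should produce for the example expression above can be cross-checked in plain Python. This sketch assumes the bias-corrected sample variance (n - 1 denominator, as in Apache Commons Math); the choice of sample vs. population statistic is an assumption, not something this issue states.

```python
import math

def var(values):
    # Sample variance: sum of squared deviations over (n - 1).
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** 2 for x in values) / (n - 1)

def stddev(values):
    # Sample standard deviation: square root of the sample variance.
    return math.sqrt(var(values))

a = [1, 3, 3]
print(round(var(a), 4))     # 1.3333
print(round(stddev(a), 4))  # 1.1547
```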
[jira] [Commented] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816294#comment-16816294 ] Joel Bernstein commented on SOLR-13391: --- [~snazerke], thanks for the contribution! > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. For example, > let(echo="m,v,sd", arr=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and sttdev functions separately as a stream evaluator.
[jira] [Commented] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816292#comment-16816292 ] ASF subversion and git services commented on SOLR-13391: Commit 9d6c4cb986078db58cdb428e3d0a8a106188d06f in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9d6c4cb ] SOLR-13391: Update CHANGES.txt > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. For example, > let(echo="m,v,sd", arr=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and sttdev functions separately as a stream evaluator. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816291#comment-16816291 ] ASF subversion and git services commented on SOLR-13391: Commit 6c62fbf25f13b1078bb89f3eef8386a10f197b5a in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6c62fbf ] SOLR-13391: Update CHANGES.txt > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. For example, > let(echo="m,v,sd", arr=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and sttdev functions separately as a stream evaluator. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816289#comment-16816289 ] ASF subversion and git services commented on SOLR-13391: Commit 4f6e78282f797dd327b8b154e742e92b77a2de09 in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4f6e782 ] SOLR-13391: Add variance and standard deviation stream evaluators Squashed commit of the following: commit 406d4b959a42e4830ac1c52836ccbcfc1b614b46 Author: Nazerke Date: Fri Apr 12 14:03:34 2019 +0200 added missing package commit 32c239687c39c5da3e4f2d0f25df73127331fa99 Author: Nazerke Date: Fri Apr 12 14:03:14 2019 +0200 added package commit 7b3f9bd415002969a4ec5d87a9ffbfd6fcff6e92 Author: Nazerke Date: Fri Apr 12 14:02:28 2019 +0200 added var and stddev functions commit 77c4f9fdd9f111862a55b645aad960457291414c Author: Nazerke Date: Fri Apr 12 14:00:59 2019 +0200 added test for the variance and standard deviation stream evaluators commit 2d9692c178590b65e46cfd9e04ca0384c7d39ec5 Author: naz Date: Wed Apr 10 19:50:30 2019 +0200 added var and stddev new evaluators commit d265225747bce9a0eabd713994ddd4990dbbbfa2 Author: naz Date: Wed Apr 10 19:49:23 2019 +0200 variance streaming evaluator commit a3330064bb62b5723b9125334ef1d61fc3b098d3 Author: naz Date: Wed Apr 10 19:49:02 2019 +0200 standard deviation streaming evaluator > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. 
For example, > let(echo="m,v,sd", a=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and stddev functions separately as a stream evaluator.
[jira] [Commented] (SOLR-13391) Add variance and standard deviation stream evaluators
[ https://issues.apache.org/jira/browse/SOLR-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816286#comment-16816286 ] ASF subversion and git services commented on SOLR-13391: Commit 58001bfc870a6f5f04cc200853df7ffe04473866 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=58001bf ] SOLR-13391: Add variance and standard deviation stream evaluators Squashed commit of the following: commit 406d4b959a42e4830ac1c52836ccbcfc1b614b46 Author: Nazerke Date: Fri Apr 12 14:03:34 2019 +0200 added missing package commit 32c239687c39c5da3e4f2d0f25df73127331fa99 Author: Nazerke Date: Fri Apr 12 14:03:14 2019 +0200 added package commit 7b3f9bd415002969a4ec5d87a9ffbfd6fcff6e92 Author: Nazerke Date: Fri Apr 12 14:02:28 2019 +0200 added var and stddev functions commit 77c4f9fdd9f111862a55b645aad960457291414c Author: Nazerke Date: Fri Apr 12 14:00:59 2019 +0200 added test for the variance and standard deviation stream evaluators commit 2d9692c178590b65e46cfd9e04ca0384c7d39ec5 Author: naz Date: Wed Apr 10 19:50:30 2019 +0200 added var and stddev new evaluators commit d265225747bce9a0eabd713994ddd4990dbbbfa2 Author: naz Date: Wed Apr 10 19:49:23 2019 +0200 variance streaming evaluator commit a3330064bb62b5723b9125334ef1d61fc3b098d3 Author: naz Date: Wed Apr 10 19:49:02 2019 +0200 standard deviation streaming evaluator > Add variance and standard deviation stream evaluators > - > > Key: SOLR-13391 > URL: https://issues.apache.org/jira/browse/SOLR-13391 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Nazerke Seidan >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > It seems variance and standard deviation stream evaluators are not supported > by any of the solr version. 
For example, > let(echo="m,v,sd", a=array(1,3,3), m=mean(a), v=var(a), > sd=stddev(a)) > So far, only the mean function is implemented. I think it is useful to have > var and stddev functions separately as a stream evaluator.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816280#comment-16816280 ] Uwe Schindler commented on LUCENE-2562: --- The fix for smoker is to be done here in dev-tools/scripts/smoketestRelease.py: {code:python} if project == 'lucene': # TODO: clean this up to not be a list of modules that we must maintain extras = ('analysis', 'backward-codecs', 'benchmark', 'classification', 'codecs', 'core', 'demo', 'docs', 'expressions', 'facet', 'grouping', 'highlighter', 'join', 'memory', 'misc', 'queries', 'queryparser', 'replicator', 'sandbox', 'spatial', 'spatial-extras', 'spatial3d', 'suggest', 'test-framework', 'licenses') {code} The maven part should not fail as it just downloads everything from the maven folder and then checks if POMS are available. But as it's not in the maven folder of the release, it should not fail. I'd wait for that to fail or not fail. [~Tomoko Uchida]: can you just fix the above code and commit that, too. You just need to add "luke". 
[GitHub] [lucene-solr] NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482579769 I think that is fine. In the future I would like to contribute more to streaming expressions, as I am working on this topic. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816269#comment-16816269 ] Steve Rowe edited comment on LUCENE-2562 at 4/12/19 1:36 PM: - The smoke tester failed for a non-Maven-related reason:
{noformat}
[smoker] Test Lucene...
[smoker]   verify sha512 digest
[smoker]   unpack lucene-9.0.0.tgz...
[smoker] Traceback (most recent call last):
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1518, in <module>
[smoker]     main()
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1448, in main
[smoker]     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, c.local_keys, ' '.join(c.test_args))
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1499, in smokeTest
[smoker]     unpackAndVerify(java, 'lucene', tmpDir, artifact, gitRevision, version, testArgs, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 604, in unpackAndVerify
[smoker]     verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 679, in verifyUnpacked
[smoker]     raise RuntimeError('%s: unexpected files/dirs in artifact %s: %s' % (project, artifact, l))
[smoker] RuntimeError: lucene: unexpected files/dirs in artifact lucene-9.0.0.tgz: ['luke']
{noformat}
Sorry, I don't have time right now to work on it, but I will have time this weekend if nobody beats me to it.
FYI, I expect there will be a Maven-related problem with the smoke tester - here's what I think will be the problem (from {{smokeTestRelease.py}}) - probably just need to introduce an exception list:
{noformat}
def verifyPOMperBinaryArtifact(artifacts, version):
  print('verify that each binary artifact has a deployed POM...')
  reBinaryJarWar = re.compile(r'%s\.[jw]ar$' % re.escape(version))
  for project in ('lucene', 'solr'):
    for artifact in [a for a in artifacts[project] if reBinaryJarWar.search(a)]:
      POM = artifact[:-4] + '.pom'
      if POM not in artifacts[project]:
        raise RuntimeError('missing: POM for %s' % artifact)
{noformat}

was (Author: steve_rowe): The smoke tester failed for a non-Maven-related reason:
{noformat}
[smoker] Test Lucene...
[smoker]   verify sha512 digest
[smoker]   unpack lucene-9.0.0.tgz...
[smoker] Traceback (most recent call last):
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1518, in <module>
[smoker]     main()
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1448, in main
[smoker]     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, c.local_keys, ' '.join(c.test_args))
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1499, in smokeTest
[smoker]     unpackAndVerify(java, 'lucene', tmpDir, artifact, gitRevision, version, testArgs, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 604, in unpackAndVerify
[smoker]     verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 679, in verifyUnpacked
[smoker]     raise RuntimeError('%s: unexpected files/dirs in artifact %s: %s' % (project, artifact, l))
[smoker] RuntimeError: lucene: unexpected files/dirs in artifact lucene-9.0.0.tgz: ['luke']
{noformat}
Sorry, I don't have time right now to work on it, but I will have time this weekend if nobody beats me to it. FYI, I expect there will be a Maven-related problem with the smoke tester - here's what I think will be the problem (from {{smokeTestRelease.py}} - probably just need to introduce an exception list:
{noformat}
def verifyPOMperBinaryArtifact(artifacts, version):
  print('verify that each binary artifact has a deployed POM...')
  reBinaryJarWar = re.compile(r'%s\.[jw]ar$' % re.escape(version))
  for project in ('lucene', 'solr'):
    for artifact in [a for a in artifacts[project] if reBinaryJarWar.search(a)]:
      POM = artifact[:-4] + '.pom'
      if POM not in artifacts[project]:
        raise RuntimeError('missing: POM for %s' % artifact)
{noformat}
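The exception list Steve suggests might look something like this (a hypothetical sketch, not the committed fix; the {{NO_POM_EXCEPTIONS}} name and the substring match are my own assumptions):

```python
import re

# Hypothetical exception list: binary jars that intentionally ship without
# a deployed POM (assumption: luke's jar is one of them).
NO_POM_EXCEPTIONS = ('luke',)

def verify_pom_per_binary_artifact(artifacts, version):
    """Check that every binary jar/war has a matching .pom, skipping exceptions."""
    re_binary = re.compile(r'%s\.[jw]ar$' % re.escape(version))
    missing = []
    for project in ('lucene', 'solr'):
        for artifact in artifacts.get(project, []):
            if not re_binary.search(artifact):
                continue
            if any(exc in artifact for exc in NO_POM_EXCEPTIONS):
                continue  # known to ship without a deployed POM
            pom = artifact[:-4] + '.pom'
            if pom not in artifacts[project]:
                missing.append(artifact)
    if missing:
        raise RuntimeError('missing POM for: %s' % ', '.join(missing))
```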
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816269#comment-16816269 ] Steve Rowe commented on LUCENE-2562: The smoke tester failed for a non-Maven-related reason:
{noformat}
[smoker] Test Lucene...
[smoker]   verify sha512 digest
[smoker]   unpack lucene-9.0.0.tgz...
[smoker] Traceback (most recent call last):
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1518, in <module>
[smoker]     main()
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1448, in main
[smoker]     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, c.local_keys, ' '.join(c.test_args))
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 1499, in smokeTest
[smoker]     unpackAndVerify(java, 'lucene', tmpDir, artifact, gitRevision, version, testArgs, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 604, in unpackAndVerify
[smoker]     verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
[smoker]   File "/home/sarowe/git/lucene-solr/dev-tools/scripts/smokeTestRelease.py", line 679, in verifyUnpacked
[smoker]     raise RuntimeError('%s: unexpected files/dirs in artifact %s: %s' % (project, artifact, l))
[smoker] RuntimeError: lucene: unexpected files/dirs in artifact lucene-9.0.0.tgz: ['luke']
{noformat}
Sorry, I don't have time right now to work on it, but I will have time this weekend if nobody beats me to it.
FYI, I expect there will be a Maven-related problem with the smoke tester - here's what I think will be the problem (from {{smokeTestRelease.py}}) - probably just need to introduce an exception list:
{noformat}
def verifyPOMperBinaryArtifact(artifacts, version):
  print('verify that each binary artifact has a deployed POM...')
  reBinaryJarWar = re.compile(r'%s\.[jw]ar$' % re.escape(version))
  for project in ('lucene', 'solr'):
    for artifact in [a for a in artifacts[project] if reBinaryJarWar.search(a)]:
      POM = artifact[:-4] + '.pom'
      if POM not in artifacts[project]:
        raise RuntimeError('missing: POM for %s' % artifact)
{noformat}
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816264#comment-16816264 ] Uwe Schindler commented on LUCENE-2562: --- All fine, we will wait for Jenkins to prove it. If we need to change the smoke tester, we can do that later.
[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816261#comment-16816261 ] Uwe Schindler edited comment on LUCENE-2562 at 4/12/19 1:30 PM: bq. and a 301 HTTP code (permanently relocated), That's a bug in Java, which won't be fixed. The Java HTTP client, as shipped with the JDK behind the URL class, does not follow redirects between different HTTP protocols, so it won't move from http to https. That's a known issue: [https://bugs.openjdk.java.net/browse/JDK-4620571?focusedCommentId=12159233=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-12159233] This makes me really angry as it's against the HTTP spec, but Oracle/SUN never implemented it for some fake security reasons. In fact HTTP->HTTPS is legit, just the other way round is not. Not sure we need a workaround for that, but that's a separate issue. was (Author: thetaphi): bq. and a 301 HTTP code (permanently relocated), That's a bug in Java, which won't be fixed. The Java HTTP client, as shipped with the JDK behind the URL class, does not follow redirects between different HTTP protocols, so it won't move from http to https. That's a known issue: [https://bugs.openjdk.java.net/browse/JDK-4620571?focusedCommentId=12159233=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-12159233] This makes me angry as it's against the HTTP spec, but Oracle/SUN never implemented it for some fake security reasons. In fact HTTP->HTTPS is legit, just the other way round is not.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816261#comment-16816261 ] Uwe Schindler commented on LUCENE-2562: --- bq. and a 301 HTTP code (permanently relocated), That's a bug in Java, which won't be fixed. The Java HTTP client, as shipped with the JDK behind the URL class, does not follow redirects between different HTTP protocols, so it won't move from http to https. That's a known issue: [https://bugs.openjdk.java.net/browse/JDK-4620571?focusedCommentId=12159233=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-12159233] This makes me angry as it's against the HTTP spec, but Oracle/SUN never implemented it for some fake security reasons. In fact HTTP->HTTPS is legit, just the other way round is not.
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1818 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1818/

6 tests failed.

FAILED: org.apache.solr.analytics.legacy.LegacyNoFacetCloudTest.meanTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
	at org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:652)
	at org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:306)
	at org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:212)
	at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:204)
	at org.apache.solr.analytics.legacy.LegacyAbstractAnalyticsCloudTest.setupCollection(LegacyAbstractAnalyticsCloudTest.java:49)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)
	Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
		at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:507)
		at
[GitHub] [lucene-solr] joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482572344 This is pretty much ready to go. Thanks again for the contribution. I should be able to get this in for the next release. It will be listed in CHANGES.txt as: SOLR-13391: Add variance and standard deviation stream evaluators (Nazerke Seidan, Joel Bernstein) Let me know if you'd like your name listed differently.
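As context for what these evaluators compute, here is variance and standard deviation sketched with Python's statistics module (an illustration only; this thread does not say whether the Solr evaluators use the sample (n-1) or population (n) form):

```python
import statistics

# Illustration of variance/stddev, not Solr code.
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

var = statistics.variance(values)  # sample variance: sum((x-mean)^2) / (n-1)
std = statistics.stdev(values)     # square root of the sample variance
```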
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816250#comment-16816250 ] Tomoko Uchida commented on LUCENE-2562: --- Thanks, Steve - "ant validate-maven-dependencies" now builds successfully on my local PC after replacing the POM with the correct one. I can cherry-pick the change to master and branch_8x once you confirm it.
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816248#comment-16816248 ] Uwe Schindler commented on LUCENE-2562: --- I have seen the Maven issue, too (a while back). It looks like for some legal reasons they removed some artifacts from Maven Central which are already in your repository. The easiest fix is to delete the whole local repository... but then it downloads everything again. If this takes too long for you, just clean up the relevant folders from the ~/.m2 cache.
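Since the corruption described in this thread is an HTML error page saved under a .pom name, one could locate the bad files instead of wiping the whole repository. A hypothetical helper, not part of the Lucene/Solr build:

```python
import os

def find_suspect_poms(repo_root):
    """Return .pom files that look like saved HTML error pages rather than XML.

    Assumption: a real POM starts with an XML declaration or a <project>
    element, while a saved 301/404 page typically starts with <html> or
    plain-text error output.
    """
    suspects = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for name in filenames:
            if not name.endswith('.pom'):
                continue
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                head = f.read(256).lstrip().lower()
            if not (head.startswith(b'<?xml') or head.startswith(b'<project')):
                suspects.append(path)
    return suspects
```

Running it over `~/.m2/repository` would narrow the cleanup to the files that actually need re-downloading.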
[GitHub] [lucene-solr] joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482570071 (screenshot: https://user-images.githubusercontent.com/5747955/56039861-40812000-5d03-11e9-80a7-f06590c72771.png)
[GitHub] [lucene-solr] joel-bernstein removed a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein removed a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482570071 (screenshot: https://user-images.githubusercontent.com/5747955/56039861-40812000-5d03-11e9-80a7-f06590c72771.png)
[GitHub] [lucene-solr] joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482569350 Here is how it looks from the Zeppelin-Solr interpreter (screenshot: https://user-images.githubusercontent.com/5747955/56039902-4b3bb500-5d03-11e9-85be-43a34d54fc56.png)
[GitHub] [lucene-solr] joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482569350 Here is how it looks from the Zeppelin-Solr interpreter (screenshot: https://user-images.githubusercontent.com/5747955/56039902-4b3bb500-5d03-11e9-85be-43a34d54fc56.png)
[GitHub] [lucene-solr] joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482569350 Here is how it looks from the Zeppelin-Solr interpreter
[GitHub] [lucene-solr] joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
joel-bernstein commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482569350 Here is how it looks from the Zeppelin-Solr interpreter (screenshot: https://user-images.githubusercontent.com/5747955/56039810-234c5180-5d03-11e9-9198-61d380ec59f7.png)
[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816237#comment-16816237 ] Steve Rowe edited comment on LUCENE-2562 at 4/12/19 1:06 PM: - [~thetaphi]: I didn't think of running the smoke tester; I'm not sure whether/how Maven artifact validation checks artifact correspondence, I'll run it locally on the branch now that Tomoko has updated it. [~Tomoko Uchida]: I saw that same problem, as did Kevin Risden [over on SOLR-9515|https://jira.apache.org/jira/browse/SOLR-9515?focusedCommentId=16762835=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16762835]. AFAICT this happens when you have a local maven repo that you haven't used the Lucene/Solr Maven build with, but which is populated by the Lucene/Solr Ant build via the Maven Ant Tasks plugin. Earlier in the log I see: {noformat} [artifact:dependencies] Downloading: com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom from repository taglets at http://maven.geotoolkit.org/ [artifact:dependencies] Transferring 0K from taglets [artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on download: local = 'c53d5de1975ce58462f226d7ed126e02d8f1f58b'; remote = ' [artifact:dependencies] 301' - RETRYING {noformat} So (for me anyway) what happened was that a Maven repository we don't specify (probably specified in the jackcess POM hierarchy) returned an HTML page and a 301 HTTP code (permanently relocated), which is improperly interpreted by Maven Ant Tasks as the appropriate artifact and saved as if it were the jackcess parent POM. 
I worked around this by deleting the local repo's bad file, then installing the correct POM manually: {noformat} rm ~/.m2/repository/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom curl -O https://repo1.maven.org/maven2/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom mvn install:install-file -Dfile=openhms-parent-1.1.8.pom -DgroupId=com.healthmarketscience -DartifactId=openhms-parent -Dversion=1.1.8 -Dpackaging=pom {noformat} Maybe there's a better way? (E.g. running the Maven build in this context?) The above fixed the problem for me. (Maven Ant Tasks have been EOL'd for a few years, so there will be no fix for this problem in that project.) was (Author: steve_rowe): [~thetaphi]: I didn't think of running the smoke tester; I'm not sure whether/how Maven artifact validation looks at , I'll run it locally on the branch now that Tomoko has updated it. [~Tomoko Uchida]: I saw that same problem, as did Kevin Risden [over on SOLR-9515|https://jira.apache.org/jira/browse/SOLR-9515?focusedCommentId=16762835=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16762835]. AFAICT this happens when you have a local maven repo that you haven't used the Lucene/Solr Maven build with, but which is populated by the Lucene/Solr Ant build via the Maven Ant Tasks plugin. 
Earlier in the log I see: {noformat} [artifact:dependencies] Downloading: com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom from repository taglets at http://maven.geotoolkit.org/ [artifact:dependencies] Transferring 0K from taglets [artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on download: local = 'c53d5de1975ce58462f226d7ed126e02d8f1f58b'; remote = ' [artifact:dependencies] 301' - RETRYING {noformat} So (for me anyway) what happened was that a Maven repository we don't specify (probably specified in the jackcess POM hierarchy) returned an HTML page and a 301 HTTP code (permanently relocated), which is improperly interpreted by Maven Ant Tasks as the appropriate artifact and saved as if it were the jackcess parent POM. I worked around this by deleting the local repo's bad file, then installing the correct POM manually: {noformat} rm ~/.m2/repository/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom curl -O https://repo1.maven.org/maven2/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom mvn install:install-file -Dfile=openhms-parent-1.1.8.pom -DgroupId=com.healthmarketscience -DartifactId=openhms-parent -Dversion=1.1.8 -Dpackaging=pom {noformat} Maybe there's a better way? (E.g. running the Maven build in this context?) The above fixed the problem for me. (Maven Ant Tasks have been EOL'd for a few years, so there will be no fix for this problem in that project.) > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > >
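The failure mode Steve describes — a repository's 301/HTML response saved into ~/.m2/repository as if it were the POM — can be detected mechanically. Below is a minimal sketch of such a check (a hypothetical standalone helper, not part of the Lucene/Solr build; assumes Java 11 for Files.readString) that flags .pom files whose content looks like an HTML page rather than XML:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class BadPomScanner {

    /** True if the content starts like an HTML page instead of a Maven POM (XML). */
    static boolean looksLikeHtml(String content) {
        String head = content.trim().toLowerCase();
        return head.startsWith("<!doctype html") || head.startsWith("<html");
    }

    public static void main(String[] args) throws IOException {
        Path repo = Paths.get(System.getProperty("user.home"), ".m2", "repository");
        if (!Files.isDirectory(repo)) {
            System.out.println("No local repo at " + repo);
            return;
        }
        try (Stream<Path> files = Files.walk(repo)) {
            files.filter(p -> p.toString().endsWith(".pom"))
                 .filter(p -> {
                     try {
                         return looksLikeHtml(Files.readString(p, StandardCharsets.UTF_8));
                     } catch (IOException | UncheckedIOException e) {
                         return false; // unreadable or non-UTF-8 file: skip it
                     }
                 })
                 .forEach(p -> System.out.println("Suspect POM (HTML content): " + p));
        }
    }
}
```

Any file it flags can then be deleted and reinstalled with `mvn install:install-file` as in the workaround above.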
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816237#comment-16816237 ] Steve Rowe commented on LUCENE-2562: [~thetaphi]: I didn't think of running the smoke tester; I'm not sure whether/how Maven artifact validation looks at , I'll run it locally on the branch now that Tomoko has updated it. [~Tomoko Uchida]: I saw that same problem, as did Kevin Risden [over on SOLR-9515|https://jira.apache.org/jira/browse/SOLR-9515?focusedCommentId=16762835=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16762835]. AFAICT this happens when you have a local maven repo that you haven't used the Lucene/Solr Maven build with, but which is populated by the Lucene/Solr Ant build via the Maven Ant Tasks plugin. Earlier in the log I see: {noformat} [artifact:dependencies] Downloading: com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom from repository taglets at http://maven.geotoolkit.org/ [artifact:dependencies] Transferring 0K from taglets [artifact:dependencies] [WARNING] *** CHECKSUM FAILED - Checksum failed on download: local = 'c53d5de1975ce58462f226d7ed126e02d8f1f58b'; remote = ' [artifact:dependencies] 301' - RETRYING {noformat} So (for me anyway) what happened was that a Maven repository we don't specify (probably specified in the jackcess POM hierarchy) returned an HTML page and a 301 HTTP code (permanently relocated), which is improperly interpreted by Maven Ant Tasks as the appropriate artifact and saved as if it were the jackcess parent POM. 
I worked around this by deleting the local repo's bad file, then installing the correct POM manually: {noformat} rm ~/.m2/repository/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom curl -O https://repo1.maven.org/maven2/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom mvn install:install-file -Dfile=openhms-parent-1.1.8.pom -DgroupId=com.healthmarketscience -DartifactId=openhms-parent -Dversion=1.1.8 -Dpackaging=pom {noformat} Maybe there's a better way? (E.g. running the Maven build in this context?) The above fixed the problem for me. (Maven Ant Tasks have been EOL'd for a few years, so there will be no fix for this problem in that project.) > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. 
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past week or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 3153 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/3153/ [...truncated 33 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/120/consoleText [repro] Revision: c58787d045d5ab0f463ccd09e76eb8d66e14ee96 [repro] Repro line: ant test -Dtestcase=TestReplicationHandler -Dtests.seed=D57F7B41B9728F1B -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=Europe/Simferopol -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth -Dtests.seed=D57F7B41B9728F1B -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-MX -Dtests.timezone=Pacific/Chatham -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 03f5a5e7a1d75d6502087dbcc1ca86450875a233 [repro] git fetch [repro] git checkout c58787d045d5ab0f463ccd09e76eb8d66e14ee96 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestReplicationHandler [repro] BasicAuthIntegrationTest [repro] ant compile-test [...truncated 3576 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.TestReplicationHandler|*.BasicAuthIntegrationTest" -Dtests.showOutput=onerror -Dtests.seed=D57F7B41B9728F1B -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=Europe/Simferopol -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 814 lines...] [junit4] 2> ERROR: Solr requires authentication for http://127.0.0.1:45358/solr/admin/info/system. Please supply valid credentials. 
HTTP code=401 [junit4] 2> [junit4] 2> 115967 ERROR (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[D57F7B41B9728F1B]) [] o.a.s.s.BasicAuthIntegrationTest RunExampleTool failed due to: java.lang.NullPointerException; stdout from tool prior to failure: [junit4] 2> 115977 ERROR (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[D57F7B41B9728F1B]) [] o.a.s.c.s.i.BaseCloudSolrClient Request to collection [authCollection] failed due to (401) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:37039/solr/authCollection: Expected mime type application/octet-stream but got text/html. [junit4] 2> [junit4] 2> [junit4] 2> Error 401 require authentication [junit4] 2> [junit4] 2> HTTP ERROR 401 [junit4] 2> Problem accessing /solr/authCollection/select. Reason: [junit4] 2> require authenticationhttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.14.v20181114 [junit4] 2> [junit4] 2> [junit4] 2> [junit4] 2> , retry=0 commError=false errorCode=401 [junit4] 2> 115977 INFO (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[D57F7B41B9728F1B]) [] o.a.s.c.s.i.BaseCloudSolrClient request was not communication error it seems [junit4] 2> 116037 INFO (qtp1467139841-1827) [n:127.0.0.1:35865_solr] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/key params={omitHeader=true=json} status=0 QTime=0 [junit4] 2> 116038 INFO (qtp1467139841-1829) [n:127.0.0.1:35865_solr] o.a.s.s.PKIAuthenticationPlugin New Key obtained from node: 127.0.0.1:35865_solr / MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkLaOlc2cF20CJPIvLoX2qkV3TLY2miB9jFUzZoKfes/iHW/RovtP5KsMSZl6SJ6ujRqPtYGjOjEPG9epQdPHINcUAVa3o0K8FjAv3xeVfQvCg74D1iDUbgU44xDoz3CD6U95CEYGsJWZYG/BjScxGja0fuYqV6YpDtf83dZxJVEXrVx4vJe1ZL598STWhzVx17WqXjMxLOCg1+rL1R5oAlnmGHWic27R36fl2hLU1bMxUIpdBL8xtl24Tep82yU+IYyxV6fEciEvDMq682qlTiNYIBzCsCuFJ3Fbp2yKmozklOUNPp+4KMKL+TResautRizSFjtBmiPAdD+tEjYYzQIDAQAB [junit4] 2> 116044 INFO (qtp281938959-1848) [n:127.0.0.1:45358_solr c:authCollection s:shard3 
r:core_node6 x:authCollection_shard3_replica_n5] o.a.s.c.S.Request [authCollection_shard3_replica_n5] webapp=/solr path=/select params={df=text=false=id=score=4=0=true=http://127.0.0.1:45358/solr/authCollection_shard3_replica_n5/=10=2=*:*=1555074181395=true=javabin} hits=0 status=0 QTime=6 [junit4] 2> 116044 INFO (qtp1467139841-1829) [n:127.0.0.1:35865_solr c:authCollection s:shard1 r:core_node2 x:authCollection_shard1_replica_n1] o.a.s.c.S.Request [authCollection_shard1_replica_n1] webapp=/solr path=/select params={df=text=false=id=score=4=0=true=http://127.0.0.1:35865/solr/authCollection_shard1_replica_n1/=10=2=*:*=1555074181395=true=javabin} hits=0 status=0 QTime=4 [junit4] 2> 116066 INFO (qtp1724820627-1968) [n:127.0.0.1:37039_solr c:authCollection s:shard2 r:core_node4 x:authCollection_shard2_replica_n3] o.a.s.c.S.Request [authCollection_shard2_replica_n3] webapp=/solr path=/select
[JENKINS] Lucene-Solr-Tests-8.x - Build # 121 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/121/ 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery2.test Error Message: Error from server at https://127.0.0.1:40849/solr: Async exception during distributed update: java.net.ConnectException: Connection refused Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:40849/solr: Async exception during distributed update: java.net.ConnectException: Connection refused at __randomizedtesting.SeedInfo.seed([1401B86DBE0AA38:8914245C751CC7C0]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:239) at org.apache.solr.cloud.TestCloudRecovery2.test(TestCloudRecovery2.java:67) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816229#comment-16816229 ] Tomoko Uchida commented on LUCENE-2562: --- I pushed the change of {{lucene/luke/build.xml}} to the branch: https://github.com/apache/lucene-solr/commits/jira/lucene-2562-luke-swing-3 The Ant target "generate-maven-artifacts" built successfully, but "validate-maven-dependencies" failed on my PC with an error that seems to have nothing to do with luke ... {code:bash} -validate-maven-dependencies: [artifact:dependencies] An error has occurred while processing the Maven artifact tasks. [artifact:dependencies] Diagnosis: [artifact:dependencies] [artifact:dependencies] Unable to resolve artifact: Unable to get dependency information: Unable to read the metadata file for artifact 'com.healthmarketscience.jackcess:jackcess:jar': Cannot find parent: com.healthmarketscience:openhms-parent for project: com.healthmarketscience.jackcess:jackcess:jar:2.1.12 for project com.healthmarketscience.jackcess:jackcess:jar:2.1.12 [artifact:dependencies] com.healthmarketscience.jackcess:jackcess:jar:2.1.12 [artifact:dependencies] [artifact:dependencies] from the specified remote repositories: [artifact:dependencies] maven-restlet (http://maven.restlet.org), [artifact:dependencies] central (http://repo1.maven.org/maven2), [artifact:dependencies] releases.cloudera.com (https://repository.cloudera.com/artifactory/libs-release), [artifact:dependencies] apache.snapshots (foobar://disabled/) [artifact:dependencies] [artifact:dependencies] Path to dependency: [artifact:dependencies] 1) org.apache.solr:solr-dataimporthandler-extras:jar:9.0.0-SNAPSHOT [artifact:dependencies] [artifact:dependencies] [artifact:dependencies] Not a v4.0.0 POM. 
for project com.healthmarketscience:openhms-parent at /home/moco/.m2/repository/com/healthmarketscience/openhms-parent/1.1.8/openhms-parent-1.1.8.pom BUILD FAILED /home/moco/repo/lucene-solr/build.xml:235: The following error occurred while executing this line: /home/moco/repo/lucene-solr/solr/build.xml:735: The following error occurred while executing this line: /home/moco/repo/lucene-solr/solr/common-build.xml:479: The following error occurred while executing this line: /home/moco/repo/lucene-solr/solr/common-build.xml:382: The following error occurred while executing this line: /home/moco/repo/lucene-solr/lucene/common-build.xml:692: Unable to resolve artifact: Unable to get dependency information: Unable to read the metadata file for artifact 'com.healthmarketscience.jackcess:jackcess:jar': Cannot find parent: com.healthmarketscience:openhms-parent for project: com.healthmarketscience.jackcess:jackcess:jar:2.1.12 for project com.healthmarketscience.jackcess:jackcess:jar:2.1.12 com.healthmarketscience.jackcess:jackcess:jar:2.1.12 from the specified remote repositories: maven-restlet (http://maven.restlet.org), central (http://repo1.maven.org/maven2), releases.cloudera.com (https://repository.cloudera.com/artifactory/libs-release), apache.snapshots (foobar://disabled/) Path to dependency: 1) org.apache.solr:solr-dataimporthandler-extras:jar:9.0.0-SNAPSHOT {code} bq. Nevertheless, Tomoko Uchida: did you publish maven artfacts for the original "luke"? If not we are fine to disable it, but if the original Luke was published on Maven, we should maybe do the same here, too. No, we have not published any maven artifacts of luke. I agree with [~steve_rowe]. 
[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11
[ https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816220#comment-16816220 ] Uwe Schindler commented on LUCENE-8738: --- Cleaning up can be done later, the current patch just restores the Lucene 8 behaviour. If we change it and remove the observer pattern, it should be done also in 8.1 or 8.2. > Bump minimum Java version requirement to 11 > --- > > Key: LUCENE-8738 > URL: https://issues.apache.org/jira/browse/LUCENE-8738 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Adrien Grand >Priority: Minor > Labels: Java11 > Fix For: master (9.0) > > Attachments: LUCENE-8738-solr-CoreCloseListener.patch > > > See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.
[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11
[ https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816215#comment-16816215 ] Uwe Schindler commented on LUCENE-8738: --- But as a first step the patch seems fine, as it does exactly the same as before (from its logic) - just typesafe?
[GitHub] [lucene-solr] NazerkeBS edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
NazerkeBS edited a comment on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482549905 I added a separate test for the variance and the standard deviation. The test passes.
[GitHub] [lucene-solr] NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators
NazerkeBS commented on issue #643: SOLR-13391: Add variance and standard deviation stream evaluators URL: https://github.com/apache/lucene-solr/pull/643#issuecomment-482549905 I added a separate test for the variance and the standard deviation. It works now.
[jira] [Commented] (LUCENE-8736) LatLonShapePolygonQuery returning incorrect WITHIN results with shared boundaries
[ https://issues.apache.org/jira/browse/LUCENE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816190#comment-16816190 ] Robert Muir commented on LUCENE-8736: - Can we reopen this and think about rolling back the points changes? For points the behavior was intentional: [https://web.archive.org/web/20110709094146/http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html#Point%20on%20an%20Edge] There was a unit test for it, but the values in that test were just changed here. Besides the performance loss for points, and some of the good properties mentioned on that page, I think we should stay consistent with this really old formula and not do something different? > LatLonShapePolygonQuery returning incorrect WITHIN results with shared > boundaries > - > > Key: LUCENE-8736 > URL: https://issues.apache.org/jira/browse/LUCENE-8736 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize >Priority: Major > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-8736.patch, LUCENE-8736.patch, > adaptive-decoding.patch > > > Triangles that are {{WITHIN}} a target polygon query that also share a > boundary with the polygon are incorrectly reported as {{CROSSES}} instead of > {{INSIDE}}. 
This leads to incorrect {{WITHIN}} query results as demonstrated > in the following test: > {code:java} > public void testWithinFailure() throws Exception { > Directory dir = newDirectory(); > RandomIndexWriter w = new RandomIndexWriter(random(), dir); > // test polygons: > Polygon indexPoly1 = new Polygon(new double[] {4d, 4d, 3d, 3d, 4d}, new > double[] {3d, 4d, 4d, 3d, 3d}); > Polygon indexPoly2 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new > double[] {6d, 7d, 7d, 6d, 6d}); > Polygon indexPoly3 = new Polygon(new double[] {1d, 1d, 0d, 0d, 1d}, new > double[] {3d, 4d, 4d, 3d, 3d}); > Polygon indexPoly4 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new > double[] {0d, 1d, 1d, 0d, 0d}); > // index polygons: > Document doc; > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly1); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly2); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly3); > w.addDocument(doc); > addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly4); > w.addDocument(doc); > // search // > IndexReader reader = w.getReader(); > w.close(); > IndexSearcher searcher = newSearcher(reader); > Polygon[] searchPoly = new Polygon[] {new Polygon(new double[] {4d, 4d, > 0d, 0d, 4d}, new double[] {0d, 7d, 7d, 0d, 0d})}; > Query q = LatLonShape.newPolygonQuery(FIELDNAME, QueryRelation.WITHIN, > searchPoly); > assertEquals(4, searcher.count(q)); > IOUtils.close(w, reader, dir); > } > {code}
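For reference, the "really old formula" Robert links above is W. R. Franklin's classic PNPOLY crossing-number test. Here is a minimal standalone sketch of it (an illustrative port for context, not Lucene's actual point-in-polygon implementation):

```java
/** Classic PNPOLY crossing-number point-in-polygon test (after W. R. Franklin). */
public class PnPoly {

    /**
     * Returns true if (testx, testy) lies inside the polygon whose i-th vertex
     * is (vertx[i], verty[i]). Casts a ray to +x and counts edge crossings;
     * an odd count means the point is inside.
     */
    public static boolean contains(double[] vertx, double[] verty, double testx, double testy) {
        boolean inside = false;
        for (int i = 0, j = vertx.length - 1; i < vertx.length; j = i++) {
            // Edge (j -> i) straddles the horizontal line y = testy, and the
            // ray from the test point to +x crosses it: flip the parity.
            if ((verty[i] > testy) != (verty[j] > testy)
                && testx < (vertx[j] - vertx[i]) * (testy - verty[i]) / (verty[j] - verty[i]) + vertx[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        double[] xs = {0, 4, 4, 0};
        double[] ys = {0, 0, 4, 4};
        System.out.println(contains(xs, ys, 2, 2)); // interior point of the square
        System.out.println(contains(xs, ys, 5, 2)); // point outside the square
    }
}
```

As the linked page explains, a point exactly on a shared edge is consistently reported inside for some edges and outside for others — the intentional boundary behavior the comment argues the points code should keep.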
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816175#comment-16816175 ] Uwe Schindler commented on LUCENE-2562: --- [~steve_rowe]: would the smoke tester complain if we have a module without maven artifacts? I am not sure about that; maybe we just wait for the next run of nightly-smoke. If it works, all fine; if not, we have to think. Nevertheless, [~Tomoko Uchida]: did you publish maven artifacts for the original "luke"? If not we are fine to disable it, but if the original Luke was published on Maven, we should maybe do the same here, too.
[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 70 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/70/ No tests ran. Build Log: [...truncated 12132 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/build.xml:446: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build.xml:433: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build.xml:413: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/common-build.xml:2269: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/common-build.xml:1727: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/common-build.xml:657: Unable to initialize POM pom.xml: Could not find the model file '/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/build/poms/lucene/luke/pom.xml'. for project unknown Total time: 5 minutes 16 seconds Build step 'Invoke Ant' marked build as failure Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816172#comment-16816172 ] Uwe Schindler commented on LUCENE-2562: --- Yeah that's fine. Just merge it or cherry pick, whatever you think is better. > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Assignee: Tomoko Uchida >Priority: Major > Labels: gsoc2014 > Fix For: 8.1, master (9.0) > > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, > Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, > luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, > lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png > > Time Spent: 50m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13397) Solr Syncing Script/Function
[ https://issues.apache.org/jira/browse/SOLR-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816171#comment-16816171 ] Anuj B commented on SOLR-13397: --- Tried but getting errors - data-config.xml {{ }} {{ }} {{ }} While doing full import getting the following message - Last Update: 11:31:09 Requests: 1 , Fetched: 951,104 , Skipped: 0 , Processed: 951,104 Started: 4 minutes ago However the overview shows Last Modified: 2 minutes ago Num Docs: 941601 Max Doc: 941601 Not all records are getting indexed. Logging shows the following message - {{WARN false SimplePropertiesWriter Unable to read: dataimport.properties ERROR false EntityProcessorBase getNext() failed for query 'select * from news':org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed }} {{ERROR false DocBuilder Exception while processing: news document : SolrInputDocument(fields: []):org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed }} {{ERROR false DataImporter Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed }} {{ERROR false CommitTracker auto commit error...:org.apache.solr.common.SolrException: Error opening new searcher}} > Solr Syncing Script/Function > > > Key: SOLR-13397 > URL: https://issues.apache.org/jira/browse/SOLR-13397 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Anuj B >Priority: Major > > A syncing script/function would be a nice addon feature. 
It should > automatically check the MySql database and index the contents according to > the changes/additions/deletions made in the main MySql database -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
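[Editor's note] The "Operation not allowed after ResultSet closed" failures quoted above are a common symptom of the default MySQL JDBC behavior of buffering the entire result set, which can break mid-import on large tables (here ~950k rows). A frequently cited workaround in the Data Import Handler is `batchSize="-1"` on the `JdbcDataSource`, which makes the MySQL driver stream rows instead. The sketch below is hypothetical — host, credentials, table, and field names are placeholders, not taken from the report:

```xml
<dataConfig>
  <!-- batchSize="-1" asks JdbcDataSource to request a streaming result set
       (the MySQL driver's fetchSize = Integer.MIN_VALUE), avoiding buffering
       all rows in memory. readOnly="true" is the documented DIH attribute for
       read-only connections. All connection details below are placeholders. -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://db-host:3306/newsdb"
              user="solr" password="***"
              batchSize="-1" readOnly="true"/>
  <document>
    <entity name="news" query="select id, title, body from news">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
      <field column="body" name="body"/>
    </entity>
  </document>
</dataConfig>
```

If the error persists with streaming enabled, the MySQL server may be closing the connection mid-stream (e.g. `net_write_timeout` too low for a slow import); that is a server-side setting, not a DIH one.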
[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 72 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/72/ 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader Error Message: Doc with id=4 not found in https://127.0.0.1:35828/solr/outOfSyncReplicasCannotBecomeLeader-false due to: Path not found: /id; rsp={doc=null} Stack Trace: java.lang.AssertionError: Doc with id=4 not found in https://127.0.0.1:35828/solr/outOfSyncReplicasCannotBecomeLeader-false due to: Path not found: /id; rsp={doc=null} at __randomizedtesting.SeedInfo.seed([2CFCB9CEE353FE3:7C24EB8C2D5230D9]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.solr.cloud.TestCloudConsistency.assertDocExists(TestCloudConsistency.java:283) at org.apache.solr.cloud.TestCloudConsistency.assertDocsExistInAllReplicas(TestCloudConsistency.java:267) at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:138) at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[jira] [Commented] (SOLR-13397) Solr Syncing Script/Function
[ https://issues.apache.org/jira/browse/SOLR-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816149#comment-16816149 ] Jan Høydahl commented on SOLR-13397: Have you looked at Data Import Handler at all[1]? It can do exactly that, with the help of a cron job to trigger it. Please discuss things on the user-list before creating a JIRA. This Jira should probably be closed. [1] [https://lucene.apache.org/solr/guide/7_7/uploading-structured-data-store-data-with-the-data-import-handler.html] > Solr Syncing Script/Function > > > Key: SOLR-13397 > URL: https://issues.apache.org/jira/browse/SOLR-13397 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Anuj B >Priority: Major > > A syncing script/function would be a nice addon feature. It should > automatically check the MySql database and index the contents according to > the changes/additions/deletions made in the main MySql database -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
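[Editor's note] The combination Jan describes — Data Import Handler plus a cron job — could look like the crontab fragment below. The host, port, and core name (`news`) are hypothetical; `command=delta-import` is the standard DIH request parameter for re-indexing only rows changed since the last run (it requires a `deltaQuery` in data-config.xml):

```
# Hypothetical: re-sync the "news" core from MySQL every 5 minutes via DIH.
# clean=false keeps existing documents; commit=true makes changes visible.
*/5 * * * * curl -s "http://localhost:8983/solr/news/dataimport?command=delta-import&clean=false&commit=true" > /dev/null
```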
[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate
[ https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816146#comment-16816146 ] Jan Høydahl commented on SOLR-13396: Could perhaps this be registered as a AutoScaling suggestion that is not executed, but shows up in the suggestions for manual execution? Or could it be scheduled for the Overseer to delete, say, 7 days after it was discovered, so an Admin will have time to cancel the deletion should it be a mistake? Just thinking aloud here. > SolrCloud will delete the core data for any core that is not referenced in > the clusterstate > --- > > Key: SOLR-13396 > URL: https://issues.apache.org/jira/browse/SOLR-13396 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.3.1, 8.0 >Reporter: Shawn Heisey >Priority: Major > > SOLR-12066 is an improvement designed to delete core data for replicas that > were deleted while the node was down -- better cleanup. > In practice, that change causes SolrCloud to delete all core data for cores > that are not referenced in the ZK clusterstate. If all the ZK data gets > deleted or the Solr instance is pointed at a ZK ensemble with no data, it > will proceed to delete all of the cores in the solr home, with no possibility > of recovery. > I do not think that Solr should ever delete core data unless an explicit > DELETE action has been made and the node is operational at the time of the > request. If a core exists during startup that cannot be found in the ZK > clusterstate, it should be ignored (not started) and a helpful message should > be logged. I think that message should probably be at WARN so that it shows > up in the admin UI logging tab with default settings. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
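[Editor's note] Jan's "delete after 7 days, cancellable by an admin" idea boils down to a grace-period policy: an orphaned core discovered at startup only becomes eligible for deletion once a configurable window has elapsed. The sketch below is purely illustrative — the class and method names are invented and this is not Solr code:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical policy object: a core not found in the ZK clusterstate is
// recorded with a discovery timestamp, and deletion is only permitted once
// the grace period has fully elapsed, giving an admin time to intervene.
public class OrphanCorePolicy {
    private final Duration gracePeriod;

    public OrphanCorePolicy(Duration gracePeriod) {
        this.gracePeriod = gracePeriod;
    }

    /** True only once the orphan has been pending for at least the grace period. */
    public boolean mayDelete(Instant discoveredAt, Instant now) {
        return !now.isBefore(discoveredAt.plus(gracePeriod));
    }

    public static void main(String[] args) {
        OrphanCorePolicy policy = new OrphanCorePolicy(Duration.ofDays(7));
        Instant found = Instant.parse("2019-04-01T00:00:00Z");
        // Two days after discovery: still inside the grace period.
        System.out.println(policy.mayDelete(found, Instant.parse("2019-04-03T00:00:00Z"))); // false
        // Eight days after discovery: eligible for deletion.
        System.out.println(policy.mayDelete(found, Instant.parse("2019-04-09T00:00:00Z"))); // true
    }
}
```

Until something like this exists, the conservative behavior Shawn proposes — ignore the unreferenced core, log at WARN, never delete — is the safer default.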