[jira] [Commented] (SOLR-12546) CSVResponseWriter doesn't return non-stored field even when docValues is enabled, when no explicit fl specified
[ https://issues.apache.org/jira/browse/SOLR-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687584#comment-16687584 ] Lucene/Solr QA commented on SOLR-12546:
---

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| master Compile Tests ||
| +1 | compile | 1m 38s | master passed |
|| Patch Compile Tests ||
| +1 | compile | 1m 30s | the patch passed |
| +1 | javac | 1m 30s | the patch passed |
| +1 | Release audit (RAT) | 1m 30s | the patch passed |
| +1 | Check forbidden APIs | 1m 30s | the patch passed |
| +1 | Validate source patterns | 1m 30s | the patch passed |
|| Other Tests ||
| +1 | unit | 48m 1s | core in the patch passed. |
| | | 53m 39s | |

|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12546 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948115/SOLR-12546.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 763e642 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
| Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/227/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/227/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.

> CSVResponseWriter doesn't return non-stored field even when docValues is
> enabled, when no explicit fl specified
> --
>
> Key: SOLR-12546
> URL: https://issues.apache.org/jira/browse/SOLR-12546
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Response Writers
> Affects Versions: 7.2.1
> Reporter: Karthik S
> Priority: Major
> Fix For: 7.2.2
>
> Attachments: SOLR-12546-old.patch, SOLR-12546.patch,
> SOLR-12546.patch, SOLR-12546.patch
>
>
> As part of SOLR-2970, CSVResponseWriter does not return fields whose
> stored attribute is set to false, but it does not consider docValues.
>
> This causes fields with stored=false and docValues=true to not be returned when
> no explicit fl is specified. The behavior should be the same as the JSON/XML
> response writers.
>
> Eg:
> - Created collection with the below fields (field names recovered from the example documents):
> <field name="contentid" type="string"/>
> <field name="testint" type="int" stored="false" docValues="true"/>
> <field name="testlong" type="plong" stored="false" docValues="true"/>
> precisionStep="0"/>
>
> - Added a few documents:
> contentid,testint,testlong
> id,1,56
> id2,2,66
>
> - http://machine:port/solr/testdocvalue/select?q=*:*&wt=json
> [{"contentid":"id","_version_":1605281886069850112,
> "timestamp":"2018-07-06T22:28:25.335Z","testint":1,
> "testlong":56},
> {"contentid":"id2","_version_":1605281886075092992,
> "timestamp":"2018-07-06T22:28:25.335Z","testint":2,
> "testlong":66}]
>
> - http://machine:port/solr/testdocvalue/select?q=*:*&wt=csv
> "_version_",contentid,timestamp
> 1605281886069850112,id,2018-07-06T22:28:25.335Z
> 1605281886075092992,id2,2018-07-06T22:28:25.335Z
>
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail:
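The mismatch reported above (the JSON writer returns testint and testlong, the CSV writer drops them) comes down to a field-selection rule: with no explicit fl, a field should be returned if it is stored, or if it has docValues and "use docValues as stored" is in effect. The sketch below is illustrative only — it is not Solr's actual CSVResponseWriter code, and shouldReturn is a hypothetical name:

```java
// Hypothetical sketch of the field-selection rule discussed in SOLR-12546.
// NOT Solr's actual implementation; shouldReturn is an illustrative name.
public class FieldReturnRule {
    // A field is returned with no explicit fl if it is stored, or if it
    // has docValues and the useDocValuesAsStored behavior applies.
    static boolean shouldReturn(boolean stored, boolean docValues,
                                boolean useDocValuesAsStored) {
        return stored || (docValues && useDocValuesAsStored);
    }

    public static void main(String[] args) {
        // stored=false, docValues=true (like "testint" above): expected to be returned
        System.out.println(shouldReturn(false, true, true));  // prints "true"
        // the buggy CSV writer effectively checked only "stored":
        System.out.println(shouldReturn(false, true, false)); // prints "false"
    }
}
```

Under this rule, the CSV output would include testint and testlong just as the JSON/XML writers do.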
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3095 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3095/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC 11 tests failed. FAILED: org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test Error Message: Node 127.0.0.1:38451_solr has 3 replicas. Expected num replicas : 2. state: DocCollection(hdfsbackuprestore_restored//collections/hdfsbackuprestore_restored/state.json/12)={ "pullReplicas":0, "replicationFactor":1, "shards":{ "shard2":{ "range":"0-7fff", "state":"active", "replicas":{"core_node62":{ "core":"hdfsbackuprestore_restored_shard2_replica_n61", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388056525"}, "shard1_1":{ "range":"c000-", "state":"active", "replicas":{"core_node64":{ "core":"hdfsbackuprestore_restored_shard1_1_replica_n63", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388093775"}, "shard1_0":{ "range":"8000-bfff", "state":"active", "replicas":{"core_node66":{ "core":"hdfsbackuprestore_restored_shard1_0_replica_n65", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388123048"}}, "router":{ "name":"compositeId", "field":"shard_s"}, "maxShardsPerNode":"-1", "autoAddReplicas":"true", "nrtReplicas":1, "tlogReplicas":0} Stack Trace: java.lang.AssertionError: Node 127.0.0.1:38451_solr has 3 replicas. Expected num replicas : 2.
state: DocCollection(hdfsbackuprestore_restored//collections/hdfsbackuprestore_restored/state.json/12)={ "pullReplicas":0, "replicationFactor":1, "shards":{ "shard2":{ "range":"0-7fff", "state":"active", "replicas":{"core_node62":{ "core":"hdfsbackuprestore_restored_shard2_replica_n61", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388056525"}, "shard1_1":{ "range":"c000-", "state":"active", "replicas":{"core_node64":{ "core":"hdfsbackuprestore_restored_shard1_1_replica_n63", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388093775"}, "shard1_0":{ "range":"8000-bfff", "state":"active", "replicas":{"core_node66":{ "core":"hdfsbackuprestore_restored_shard1_0_replica_n65", "base_url":"http://127.0.0.1:38451/solr", "node_name":"127.0.0.1:38451_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}, "stateTimestamp":"1542265945388123048"}}, "router":{ "name":"compositeId", "field":"shard_s"}, "maxShardsPerNode":"-1", "autoAddReplicas":"true", "nrtReplicas":1, "tlogReplicas":0} at __randomizedtesting.SeedInfo.seed([864380771B02AB2:803007DDDF4C474A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.lambda$testBackupAndRestore$1(AbstractCloudBackupRestoreTestCase.java:339) at java.base/java.util.HashMap.forEach(HashMap.java:1341) at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:338) at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:144) at
org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test(TestHdfsCloudBackupRestore.java:213) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at
[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor
[ https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687573#comment-16687573 ] Mark Miller commented on SOLR-12833:

Hey [~yuanyun.cn], I was looking through the patch and I have some thoughts on some slight renaming, but I can handle that; otherwise it's looking okay. One question I had: when the tryLock fails, don't we still go down the logic as if we had gotten the lock?

{code:java}
vBucketLocked = tryGetVersionBucketLock(bucket);
bucket.wakeUpAll(); //just in case anyone is waiting let them know that we have a new update
// we obtain the version when synchronized and then do the add so we can ensure that
// if version1 < version2 then version1 is actually added before version2.
// even if we don't store the version field, synchronizing on the bucket
// will enable us to know what version happened first, and thus enable
// realtime-get to work reliably.
// TODO: if versions aren't stored, do we need to set on the cmd anyway for some reason?
// there may be other reasons in the future for a version on the commands
if (versionsStored) {
{code}

> Use timed-out lock in DistributedUpdateProcessor
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: update, UpdateRequestProcessors
> Affects Versions: 7.5, master (8.0)
> Reporter: jefferyyuan
> Assignee: Mark Miller
> Priority: Minor
> Fix For: master (8.0)
>
> There is a synchronized block that blocks other update requests whose IDs fall
> in the same hash bucket. An update waits forever until it gets the lock at
> the synchronized block, which can be a problem in some cases.
>
> Some add/update requests (for example, updates with spatial/shape analysis)
> may take time (30+ seconds or even more); this can make the request time
> out and fail.
> Clients may retry the same request multiple times over several minutes, which
> makes things worse.
> The server side receives all the update requests, but all except one can do
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen a case where 2000+ threads were blocked at the synchronized lock, and
> only a few updates were making progress. Each thread takes 3+ MB of memory, which
> can cause an OOM.
> Also, if the update can't get the lock in the expected time range, it's better to
> fail fast.
>
> We can add one configuration in solrconfig.xml:
> updateHandler/versionLock/timeInMill, so users can specify how long they want
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today: wait forever until it
> gets the lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
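The fail-fast locking proposed in this issue can be sketched with java.util.concurrent's ReentrantLock.tryLock(timeout). This is an illustrative sketch, not the attached patch: TimedBucketLock, runUpdate, and VERSION_LOCK_TIMEOUT_MS are hypothetical names standing in for the version-bucket lock and the proposed updateHandler/versionLock configuration.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a timed-out version-bucket lock (names are illustrative,
// not Solr's actual classes or config keys).
public class TimedBucketLock {
    // -1 keeps the current behavior (wait forever); a positive value fails fast.
    static final long VERSION_LOCK_TIMEOUT_MS = 100;

    private final ReentrantLock lock = new ReentrantLock();

    // Returns false when the lock could not be acquired in time,
    // so the caller can reject the request instead of queueing a thread.
    boolean runUpdate(Runnable update) {
        boolean acquired;
        try {
            if (VERSION_LOCK_TIMEOUT_MS < 0) {
                lock.lock();          // legacy behavior: block indefinitely
                acquired = true;
            } else {
                acquired = lock.tryLock(VERSION_LOCK_TIMEOUT_MS, TimeUnit.MILLISECONDS);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        if (!acquired) {
            return false;             // fail fast: no pile-up of blocked threads
        }
        try {
            update.run();
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        TimedBucketLock bucket = new TimedBucketLock();
        System.out.println(bucket.runUpdate(() -> {})); // prints "true"
    }
}
```

The key difference from a synchronized block is the false branch: instead of 2000+ threads parked on a monitor, slow requests are rejected after the timeout and the client gets a fast failure it can retry.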
[jira] [Comment Edited] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687552#comment-16687552 ] Cassandra Targett edited comment on SOLR-12497 at 11/15/18 6:45 AM: I apologize in advance that I lost track of this. I decided it should be a section of that page on its own, so I moved it and made a couple other changes to capitalization and calling out parameters more specifically than in the patch. [~manokovacs], please take a look at https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-master/javadoc/enabling-ssl.html to see the changes and let me know what you think. I'm happy to change anything I might have gotten wrong. was (Author: ctargett): I apologize in advance that I lost track of this I decided it should be a section of that page on its own, so I moved it so it and made a couple other changes to capitalization and calling out parameters more specifically than in the patch. [~manokovacs], please take a look at https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-master/javadoc/enabling-ssl.html to see the changes and let me know what you think. I'm happy to change anything I might have gotten wrong. > Add ref guide docs for Hadoop Credential Provider based SSL/TLS store > password source. > -- > > Key: SOLR-12497 > URL: https://issues.apache.org/jira/browse/SOLR-12497 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12497.patch > > > Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett updated SOLR-12497: - Fix Version/s: master (8.0) 7.6 > Add ref guide docs for Hadoop Credential Provider based SSL/TLS store > password source. > -- > > Key: SOLR-12497 > URL: https://issues.apache.org/jira/browse/SOLR-12497 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12497.patch > > > Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687552#comment-16687552 ] Cassandra Targett commented on SOLR-12497:
--

I apologize in advance that I lost track of this. I decided it should be a section of that page on its own, so I moved it and made a couple of other changes to capitalization and calling out parameters more specifically than in the patch. [~manokovacs], please take a look at https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-master/javadoc/enabling-ssl.html to see the changes and let me know what you think. I'm happy to change anything I might have gotten wrong.

> Add ref guide docs for Hadoop Credential Provider based SSL/TLS store
> password source.
> --
>
> Key: SOLR-12497
> URL: https://issues.apache.org/jira/browse/SOLR-12497
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: documentation
> Affects Versions: 7.4
> Reporter: Mano Kovacs
> Assignee: Cassandra Targett
> Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12497.patch
>
> Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687544#comment-16687544 ] ASF subversion and git services commented on SOLR-12497: Commit efae53e5e910aa1c44b201f861db856db492837d in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=efae53e ] SOLR-12497: Add documentation for Hadoop credential provider-based keystore/truststore > Add ref guide docs for Hadoop Credential Provider based SSL/TLS store > password source. > -- > > Key: SOLR-12497 > URL: https://issues.apache.org/jira/browse/SOLR-12497 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Cassandra Targett >Priority: Minor > Attachments: SOLR-12497.patch > > > Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687545#comment-16687545 ] ASF subversion and git services commented on SOLR-12497: Commit 0f73394995bd8abde2b18dbaa9c228ab72fb79e2 in lucene-solr's branch refs/heads/branch_7_6 from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f73394 ] SOLR-12497: Add documentation for Hadoop credential provider-based keystore/truststore > Add ref guide docs for Hadoop Credential Provider based SSL/TLS store > password source. > -- > > Key: SOLR-12497 > URL: https://issues.apache.org/jira/browse/SOLR-12497 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Cassandra Targett >Priority: Minor > Attachments: SOLR-12497.patch > > > Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12497) Add ref guide docs for Hadoop Credential Provider based SSL/TLS store password source.
[ https://issues.apache.org/jira/browse/SOLR-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687542#comment-16687542 ] ASF subversion and git services commented on SOLR-12497: Commit df5540acc99fe287758433701108303fedb2c5b6 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df5540a ] SOLR-12497: Add documentation for Hadoop credential provider-based keystore/truststore > Add ref guide docs for Hadoop Credential Provider based SSL/TLS store > password source. > -- > > Key: SOLR-12497 > URL: https://issues.apache.org/jira/browse/SOLR-12497 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Cassandra Targett >Priority: Minor > Attachments: SOLR-12497.patch > > > Document configuration added in SOLR-10783. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23212 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23212/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 30 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([5A57AEE5159A0F94]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.MultiSolrCloudTestCase$DefaultClusterInitFunction.doAccept(MultiSolrCloudTestCase.java:80) at org.apache.solr.cloud.MultiSolrCloudTestCaseTest$2.accept(MultiSolrCloudTestCaseTest.java:66) at org.apache.solr.cloud.MultiSolrCloudTestCaseTest$2.accept(MultiSolrCloudTestCaseTest.java:61) at org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:95) at org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest Error Message: ObjectTracker found 10 object(s) that were not released!!! 
[ZkCollectionTerms, InternalHttpClient, SolrZkClient, InternalHttpClient, Overseer, InternalHttpClient, SolrZkClient, SolrZkClient, InternalHttpClient, ZkController] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.ZkCollectionTerms at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.ZkCollectionTerms.(ZkCollectionTerms.java:39) at org.apache.solr.cloud.ZkController.getCollectionTerms(ZkController.java:1523) at org.apache.solr.cloud.ZkController.getShardTerms(ZkController.java:1518) at org.apache.solr.cloud.ZkController.register(ZkController.java:1114) at org.apache.solr.cloud.ZkController.register(ZkController.java:1079) at org.apache.solr.core.ZkContainer.lambda$registerInZk$0(ZkContainer.java:187) at org.apache.solr.core.ZkContainer.registerInZk(ZkContainer.java:214) at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:991) at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1153) at
[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans
[ https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687514#comment-16687514 ] David Smiley commented on SOLR-5211:

bq. A rename could be done, what did you have in mind though?

This is what I meant by \_nest\_root\_. This helps brand nested documents as such more consistently. You'd look at this field and get a clue what it's for. It seems IndexSchema.ROOT_FIELD_NAME is only used in a few places and wouldn't be hard to migrate to this new scheme.

bq. Is there any scenario where differentiating between the new and old schema might be beneficial?

For back-compat only.

I took a look at the patch and I have some notes:
* DirectUpdateHandler2.delete() should use cmd.getIndexedId() instead of direct field access. Those members on DeleteUpdateCommand ought to be private!
* AddUpdateCommand: I see you refactored out a new addBlockId method so that the underlying logic can be invoked in what is now two places. However, it calls getHashableId each time. That could be fixed by adding it as a parameter so that it's calculated up front. This method also adds the \_version\_ field to a document. This was being done only because child documents probably ought to have the same version as that of the root (it needed a comment saying this!). That said, I think _use_ of the version on a child document isn't tested and might not work (hence SOLR-12638). I wonder what would happen if it were blank on a child doc? i.e., do we even need to do anything here?
* I'm sympathetic to moving "getDocument" logic out of the command and into DirectUpdateHandler2. I think there is some entangling of responsibilities between the two that would probably become cleaner. Do it or not as you have time.
* I appreciate the test of "legacy" behavior, though I'm not sure it's worth committing as it's kind of a burden going forward. If we go with the rename approach, the legacy test becomes simpler.
> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
> Issue Type: Sub-task
> Components: update
> Affects Versions: 4.5, 6.0
> Reporter: Mikhail Khludnev
> Assignee: Mikhail Khludnev
> Priority: Major
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch
>
> If I have a parent with children in the index, I can send an update omitting
> the children. As a result, the old children become orphaned.
> I suppose the separate \_root_ field causes much trouble. I propose to extend
> the notion of uniqueKey and let it span across blocks, which makes updates
> unambiguous.
> WDYT? Would you like to see a test that proves this issue? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
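The getHashableId point in the review above (compute the id once and pass it as a parameter, instead of re-deriving it inside addBlockId at each call site) is the standard hoist-a-repeated-computation refactor. A minimal sketch with hypothetical names — this is not the actual AddUpdateCommand API:

```java
// Illustrative sketch of the suggested refactor; names are hypothetical,
// not the real AddUpdateCommand methods.
public class BlockIdSketch {
    // Stand-in for deriving the hashable/routing id from a raw value.
    static String getHashableId(String rawId) {
        return rawId.trim();
    }

    // Before: each call re-derives the id internally.
    static String addBlockIdBefore(String rawId) {
        return "block-" + getHashableId(rawId);
    }

    // After: the caller computes the id once up front and passes it in,
    // so both invocation sites share a single computation.
    static String addBlockId(String hashableId) {
        return "block-" + hashableId;
    }

    public static void main(String[] args) {
        String id = getHashableId(" doc42 "); // computed once
        System.out.println(addBlockId(id));   // prints "block-doc42"
        System.out.println(addBlockId(id));   // reused, not re-derived
    }
}
```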
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 885 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/885/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at __randomizedtesting.SeedInfo.seed([3EFD0B73DE997806:C5DFA3560C339B94]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling(TriggerIntegrationTest.java:222) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at __randomizedtesting.SeedInfo.seed([3EFD0B73DE997806:C5DFA3560C339B94]:0)
[JENKINS] Lucene-Solr-repro - Build # 1942 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1942/ [...truncated 34 lines...] ERROR: Error fetching remote repo 'origin' hudson.plugins.git.GitException: Failed to fetch from git://git.apache.org/lucene-solr.git at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888) at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at hudson.scm.SCM.checkout(SCM.java:504) at hudson.model.AbstractProject.checkout(AbstractProject.java:1208) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499) at hudson.model.Run.execute(Run.java:1794) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) at hudson.model.ResourceController.execute(ResourceController.java:97) at hudson.model.Executor.run(Executor.java:429) Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*" returned status code 128: stdout: stderr: fatal: unable to connect to git.apache.org: git.apache.org[0: 54.84.58.65]: errno=Connection refused at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2002) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1721) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:72) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:405) at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153) at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at 
hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741) at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357) at hudson.remoting.Channel.call(Channel.java:955) at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146) at sun.reflect.GeneratedMethodAccessor1641.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132) at com.sun.proxy.$Proxy119.execute(Unknown Source) at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:886) at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at hudson.scm.SCM.checkout(SCM.java:504) at hudson.model.AbstractProject.checkout(AbstractProject.java:1208) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499) at hudson.model.Run.execute(Run.java:1794) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) at hudson.model.ResourceController.execute(ResourceController.java:97) at hudson.model.Executor.run(Executor.java:429) ERROR: Error fetching remote repo 'origin' Retrying after 10 seconds > git rev-parse 
--is-inside-work-tree # timeout=10 Fetching changes from the remote Git repository > git config remote.origin.url git://git.apache.org/lucene-solr.git # > timeout=10 Cleaning workspace > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 Fetching upstream changes from git://git.apache.org/lucene-solr.git > git --version # timeout=10 > git fetch --tags --progress
[JENKINS] Lucene-Solr-Tests-master - Build # 2950 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2950/ 3 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:33235_solr, 127.0.0.1:34844_solr, 127.0.0.1:43085_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:44736/solr", "node_name":"127.0.0.1:44736_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:44736/solr", "node_name":"127.0.0.1:44736_solr", "state":"down", "type":"NRT"}, "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:43085/solr", "node_name":"127.0.0.1:43085_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:33235_solr, 127.0.0.1:34844_solr, 127.0.0.1:43085_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:44736/solr", "node_name":"127.0.0.1:44736_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:44736/solr", "node_name":"127.0.0.1:44736_solr", "state":"down", "type":"NRT"}, "core_node3":{ 
"core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:43085/solr", "node_name":"127.0.0.1:43085_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([D90A0EA19462801F:B31C6F71FC90CAD5]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at
[jira] [Commented] (SOLR-5970) Create collection API always has status 0
[ https://issues.apache.org/jira/browse/SOLR-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687447#comment-16687447 ] Jason Gerlowski commented on SOLR-5970: --- bq. Until we fix the collections API properly, should we, at the least, throw a non zero status upon a failed core creation for any replica? +1. Started taking a look at a possible fix for this earlier this afternoon. A few notes: First, I ran into two easy ways to reproduce the behavior on {{master}}. You can use the invalid configset method that Ishan suggested above. That still works great. Or, if you prefer, you can chmod Solr's data dir to be read only ({{chmod 444 solr/server/solr}}) and then create a collection however you'd like. Second, the overseer processing reports back an error message under the key "failure" (you can see this in the curl response in Ishan's example above). Naively, it seems like we could rely on this key as an indicator that the request-processing failed, and that the status should be non-zero. I'll probably go down that route tomorrow morning. Lastly, and this is a bit of a side note, but I notice that when I reproduce the problem, the create-collection call repeatedly takes upwards of 30 seconds. I suspect this is a secondary result of not noticing that the overseer processing ran into an error - Solr thinks creation has succeeded, so it waits 30 seconds to see the collection become "active" (see the line [here|https://github.com/apache/lucene-solr/blob/d799fd53c7cd3a83442d6010dc48802d2fd8c7fb/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java#L282]). Hopefully a fix for the root cause will also help us avoid this issue! 
> Create collection API always has status 0 > - > > Key: SOLR-5970 > URL: https://issues.apache.org/jira/browse/SOLR-5970 > Project: Solr > Issue Type: Bug >Reporter: Abraham Elmahrek >Assignee: Jason Gerlowski >Priority: Major > Attachments: SOLR-5970-test.patch, bad.jar, schema.xml, solrconfig.xml > > > The responses below are from a successful create collection API > (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection) > call and an unsuccessful create collection API call. It seems the 'status' > is always 0. > Success: > {u'responseHeader': {u'status': 0, u'QTime': 4421}, u'success': {u'': > {u'core': u'test1_shard1_replica1', u'responseHeader': {u'status': 0, > u'QTime': 3449 > Failure: > {u'failure': > {u'': > u"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error > CREATEing SolrCore 'test43_shard1_replica1': Unable to create core: > test43_shard1_replica1 Caused by: Could not find configName for collection > test43 found:[test1]"}, > u'responseHeader': {u'status': 0, u'QTime': 17149} > } > It seems like the status should be 400 or something similar for an > unsuccessful attempt? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
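The fix direction sketched in the comment above - treating any "failure" entry in the Collections API response as grounds for a non-zero status - can be illustrated with a small standalone sketch. This is hypothetical code, not the actual CollectionsHandler implementation; the method name `effectiveStatus` and the choice of 400 are assumptions taken from the issue discussion.

```java
import java.util.Map;

public class CreateStatusCheck {

    // Hypothetical helper: derive a non-zero status from a Collections API
    // response map when the overseer reported a per-replica "failure",
    // even though responseHeader.status is 0.
    static int effectiveStatus(Map<String, Object> response) {
        if (response.get("failure") != null) {
            return 400; // assumption: 400, as the issue description suggests
        }
        Map<?, ?> header = (Map<?, ?>) response.get("responseHeader");
        Object status = header == null ? null : header.get("status");
        return status instanceof Integer ? (Integer) status : 0;
    }

    public static void main(String[] args) {
        // Shapes mirror the success/failure responses quoted in the issue.
        Map<String, Object> ok = Map.of(
                "responseHeader", Map.of("status", 0, "QTime", 4421));
        Map<String, Object> bad = Map.of(
                "failure", Map.of("", "Error CREATEing SolrCore ..."),
                "responseHeader", Map.of("status", 0, "QTime", 17149));
        System.out.println(effectiveStatus(ok));   // 0
        System.out.println(effectiveStatus(bad));  // 400
    }
}
```

The point of the sketch is only that the "failure" key already distinguishes the two responses in the examples above, so no new overseer plumbing would be needed to surface an error status.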
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 3094 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3094/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:40903/solr Stack Trace: java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:40903/solr at __randomizedtesting.SeedInfo.seed([EF1D694F57FF56D4:2EED10E37AAF9C73]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:39887/solr Stack Trace: java.lang.AssertionError:
[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity
[ https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687440#comment-16687440 ] Michael Gibney commented on LUCENE-8563: [~jpountz], thanks for pointing out the work on BM25F. I'm interested to take a closer look at that. "Users could multiply their per-field boosts by (k1+1)?" ... thanks, yes! That should work in a pinch, though I was so focused on the Similarity that I missed the possibility of scaling it externally in this way. Having k1's presence in the numerator be configurable (either as an extra boolean parameter to the (modified) existing BM25Similarity, or something along the lines of what [~softwaredoug] suggests) would make sense to me, regardless of the benefits of the change (performance or otherwise). > Remove k1+1 from the numerator of BM25Similarity > - > > Key: LUCENE-8563 > URL: https://issues.apache.org/jira/browse/LUCENE-8563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > Our current implementation of BM25 does > {code:java} > boost * IDF * (k1+1) * tf / (tf + norm) > {code} > As (k1+1) is a constant, it is the same for every term and doesn't modify > ordering. It is often omitted and I found out that the "The Probabilistic > Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and > Zaragoza even describes adding (k1+1) to the numerator as a variant whose > benefit is to be more comparable with Robertson/Sparck-Jones weighting, which > we don't care about. > {quote}A common variant is to add a (k1 + 1) component to the > numerator of the saturation function. This is the same for all > terms, and therefore does not affect the ranking produced. > The reason for including it was to make the final formula > more compatible with the RSJ weight used on its own > {quote} > Should we remove it from BM25Similarity as well? > A side-effect that I'm interested in is that integrating other score > contributions (eg. 
via oal.document.FeatureField) would be a bit easier to > reason about. For instance a weight of 3 in FeatureField#newSaturationQuery > would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) > rather than a term whose IDF is 3/(k1 + 1).
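The claim in the issue - that (k1+1) is a constant factor and so dropping it cannot reorder documents - can be checked numerically with a standalone sketch. This is not Lucene's BM25Similarity; the (tf, idf, norm) triples are made-up values for illustration.

```java
public class Bm25K1Check {

    // Minimal BM25 term score per the formula quoted in the issue:
    // idf * (k1+1) * tf / (tf + norm), with the (k1+1) factor optional.
    static double bm25(double tf, double idf, double norm,
                       double k1, boolean withK1Plus1) {
        double numerator = withK1Plus1 ? (k1 + 1) * tf : tf;
        return idf * numerator / (tf + norm);
    }

    public static void main(String[] args) {
        double k1 = 1.2;
        // Hypothetical (tf, idf, norm) triples for three documents.
        double[][] docs = { {3.0, 2.0, 1.1}, {1.0, 5.0, 0.9}, {7.0, 0.5, 1.3} };
        for (double[] d : docs) {
            double with = bm25(d[0], d[1], d[2], k1, true);
            double without = bm25(d[0], d[1], d[2], k1, false);
            // Every score differs by exactly the constant factor k1 + 1 = 2.2,
            // so the relative ordering across documents is unchanged.
            System.out.printf("with=%.4f without=%.4f ratio=%.1f%n",
                    with, without, with / without);
        }
    }
}
```

Since the ratio is the same constant for every term and document, any monotonic ranking over the "with" scores is identical to the ranking over the "without" scores - which is exactly why the Robertson/Zaragoza paper treats the (k1+1) numerator as a cosmetic variant.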
[jira] [Assigned] (SOLR-5970) Create collection API always has status 0
[ https://issues.apache.org/jira/browse/SOLR-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski reassigned SOLR-5970: - Assignee: Jason Gerlowski
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7619 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7619/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at __randomizedtesting.SeedInfo.seed([9AE24D4969DD4F51:61C0E56CBB77ACC3]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling(TriggerIntegrationTest.java:222) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23211 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23211/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 38 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([D96DD30C04710EE5]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([D96DD30C04710EE5]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Commented] (SOLR-12881) Remove unneeded import statements
[ https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687378#comment-16687378 ] ASF subversion and git services commented on SOLR-12881: Commit c4961d48dfb66223977ce6678f6de94f93f841c9 in lucene-solr's branch refs/heads/branch_7x from [~cp.erick...@gmail.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c4961d4 ] SOLR-12881: Remove unneeded import statements (cherry picked from commit 763e64260f1ef470e6cc27ad3f0271135fff4a8f) > Remove unneeded import statements > - > > Key: SOLR-12881 > URL: https://issues.apache.org/jira/browse/SOLR-12881 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (8.0) >Reporter: Peter Somogyi >Assignee: Erick Erickson >Priority: Trivial > Attachments: SOLR-12881.patch, SOLR-12881.patch, SOLR-12881.patch, > SOLR-12881.patch > > Time Spent: 10m > Remaining Estimate: 0h > > There are unnecessary import statements: > * import from java.lang > * import from same package > * unused import
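For illustration, the kinds of unneeded imports the issue targets can be shown in a single hypothetical file (not taken from the Solr codebase). The same-package case cannot appear in one self-contained file, so it is only noted in a comment.

```java
import java.lang.String;   // (1) redundant: java.lang is always imported implicitly
import java.util.List;     // (2) unused: nothing below ever references List
// (3) importing a class from the file's own package is likewise redundant,
//     but cannot be demonstrated within a single self-contained file.

public class UnneededImports {
    static String message() {
        // String resolves identically if both imports above are deleted.
        return "imports trimmed";
    }
    public static void main(String[] args) {
        System.out.println(message());
    }
}
```

All three import lines compile cleanly, which is why such imports accumulate silently and periodic cleanups like this one are needed.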
[jira] [Resolved] (SOLR-12881) Remove unneeded import statements
[ https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-12881. --- Resolution: Fixed Fix Version/s: 7.7 master (8.0) Thanks Peter!
[jira] [Commented] (SOLR-12881) Remove unneeded import statements
[ https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687374#comment-16687374 ] ASF subversion and git services commented on SOLR-12881: Commit 763e64260f1ef470e6cc27ad3f0271135fff4a8f in lucene-solr's branch refs/heads/master from [~cp.erick...@gmail.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=763e642 ] SOLR-12881: Remove unneeded import statements > Remove unneeded import statements > - > > Key: SOLR-12881 > URL: https://issues.apache.org/jira/browse/SOLR-12881 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (8.0) >Reporter: Peter Somogyi >Assignee: Erick Erickson >Priority: Trivial > Attachments: SOLR-12881.patch, SOLR-12881.patch, SOLR-12881.patch, > SOLR-12881.patch > > Time Spent: 10m > Remaining Estimate: 0h > > There are unnecessary import statements: > * import from java.lang > * import from same package > * unused import -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
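Two of the three unneeded-import categories can be shown in a minimal illustrative sketch (hypothetical, not taken from the SOLR-12881 patch); the class compiles and behaves identically with or without the flagged imports. The third category, importing a class from the same package, needs a second file to demonstrate and is omitted here.

```java
// Hypothetical sketch of the unneeded-import categories cleaned up in SOLR-12881.
import java.lang.Math;  // redundant: java.lang is always imported implicitly
import java.util.List;  // unused: List never appears in the code below

public class ImportDemo {
    public static void main(String[] args) {
        // Math resolves with or without the explicit java.lang import.
        System.out.println(Math.max(1, 2)); // prints 2
    }
}
```

Removing both imports is behavior-neutral, which is why the issue is marked Trivial.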
[JENKINS] Lucene-Solr-repro - Build # 1941 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1941/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1697/consoleText [repro] Revision: 95d01c6583b825b6b87591e4f27002c285ea25fb [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=RestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=9119F349ACD044EC -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SY -Dtests.timezone=SystemV/MST7MDT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=RestartWhileUpdatingTest -Dtests.seed=9119F349ACD044EC -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SY -Dtests.timezone=SystemV/MST7MDT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 95d01c6583b825b6b87591e4f27002c285ea25fb [repro] git fetch [repro] git checkout 95d01c6583b825b6b87591e4f27002c285ea25fb [...truncated 1 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] RestartWhileUpdatingTest [repro] ant compile-test [...truncated 3568 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.RestartWhileUpdatingTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=9119F349ACD044EC -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SY -Dtests.timezone=SystemV/MST7MDT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 124286 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.RestartWhileUpdatingTest [repro] git checkout 95d01c6583b825b6b87591e4f27002c285ea25fb [...truncated 1 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3093 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3093/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC 7 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:33453_solr, 127.0.0.1:40051_solr, 127.0.0.1:42603_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n3", "base_url":"https://127.0.0.1:45385/solr;, "node_name":"127.0.0.1:45385_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"https://127.0.0.1:45385/solr;, "node_name":"127.0.0.1:45385_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:33453_solr, 127.0.0.1:40051_solr, 127.0.0.1:42603_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n3", "base_url":"https://127.0.0.1:45385/solr;, "node_name":"127.0.0.1:45385_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"https://127.0.0.1:45385/solr;, "node_name":"127.0.0.1:45385_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([A5E10D5AC7D21626:CFF76C8AAF205CEC]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at
[JENKINS] Lucene-Solr-NightlyTests-7.6 - Build # 3 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.6/3/ 4 tests failed. FAILED: org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef Error Message: ReaderPool is already closed Stack Trace: org.apache.lucene.store.AlreadyClosedException: ReaderPool is already closed at __randomizedtesting.SeedInfo.seed([10351D30C6136C03:F9A86A02B0DA8BFE]:0) at org.apache.lucene.index.ReaderPool.get(ReaderPool.java:367) at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3329) at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:518) at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:398) at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332) at org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:465) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.core.TestDynamicLoading.testDynamicLoading Error Message: Could not load collection from ZK: collection1 Stack Trace: org.apache.solr.common.SolrException: Could not load collection from ZK: collection1 at __randomizedtesting.SeedInfo.seed([89C31CADE737792D:518E31FA10EADC8D]:0) at
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 377 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/377/ 5 tests failed. FAILED: org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings Error Message: stage 2: inconsistent endOffset at pos=1: 2 vs 3; token=be s s Stack Trace: java.lang.IllegalStateException: stage 2: inconsistent endOffset at pos=1: 2 vs 3; token=be s s at __randomizedtesting.SeedInfo.seed([8021DE70475B4140:EA7A61611E1561B3]:0) at org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:125) at org.apache.lucene.analysis.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:49) at org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:441) at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:546) at org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:897) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test Error Message: Node 127.0.0.1:42030_solr has 3 replicas. Expected num
[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity
[ https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687268#comment-16687268 ] Doug Turnbull commented on LUCENE-8563: --- Perhaps one way forward is to create a second (default?) similarity - FastBM25Similarity? ConstantCeilingBM25Similarity? - and leave the current BM25Similarity in place as an optional similarity to configure. Many places may have existing practices around tuning BM25 similarity where writing a similarity plugin is not an option. > Remove k1+1 from the numerator of BM25Similarity > - > > Key: LUCENE-8563 > URL: https://issues.apache.org/jira/browse/LUCENE-8563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > Our current implementation of BM25 does > {code:java} > boost * IDF * (k1+1) * tf / (tf + norm) > {code} > As (k1+1) is a constant, it is the same for every term and doesn't modify > ordering. It is often omitted, and I found out that "The Probabilistic > Relevance Framework: BM25 and Beyond", the paper by Robertson (BM25's author) and > Zaragoza, even describes adding (k1+1) to the numerator as a variant whose > benefit is to be more comparable with Robertson/Sparck-Jones weighting, which > we don't care about. > {quote}A common variant is to add a (k1 + 1) component to the > numerator of the saturation function. This is the same for all > terms, and therefore does not affect the ranking produced. > The reason for including it was to make the final formula > more compatible with the RSJ weight used on its own > {quote} > Should we remove it from BM25Similarity as well? > A side-effect that I'm interested in is that integrating other score > contributions (eg. via oal.document.FeatureField) would be a bit easier to > reason about. For instance a weight of 3 in FeatureField#newSaturationQuery > would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) > rather than a term whose IDF is 3/(k1 + 1).
[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity
[ https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687260#comment-16687260 ] Adrien Grand commented on LUCENE-8563: -- bq. "assuming a single similarity" – is this something that we want to assume? We can't indeed, even though this is the most common case. That said, if you are searching multiple fields at once today, I'm afraid that relevance isn't very good anyway, as we don't support something like BM25F (LUCENE-8216) to merge index and document statistics (BlendedTermQuery merges index statistics, but not norms and term frequencies). By the way, BM25F doesn't allow configuring the value of k1 on a per-field basis; only b may have different per-field values. bq. I'm sure this change would be appropriate for some scenarios, but it's a fundamental change that could in some cases have significant downstream consequences, with no easy way (as far as I can tell) to maintain existing behavior. Users could multiply their per-field boosts by (k1+1)? > Remove k1+1 from the numerator of BM25Similarity > - > > Key: LUCENE-8563 > URL: https://issues.apache.org/jira/browse/LUCENE-8563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > Our current implementation of BM25 does > {code:java} > boost * IDF * (k1+1) * tf / (tf + norm) > {code} > As (k1+1) is a constant, it is the same for every term and doesn't modify > ordering. It is often omitted, and I found out that "The Probabilistic > Relevance Framework: BM25 and Beyond", the paper by Robertson (BM25's author) and > Zaragoza, even describes adding (k1+1) to the numerator as a variant whose > benefit is to be more comparable with Robertson/Sparck-Jones weighting, which > we don't care about. > {quote}A common variant is to add a (k1 + 1) component to the > numerator of the saturation function. This is the same for all > terms, and therefore does not affect the ranking produced.
> The reason for including it was to make the final formula > more compatible with the RSJ weight used on its own > {quote} > Should we remove it from BM25Similarity as well? > A side-effect that I'm interested in is that integrating other score > contributions (eg. via oal.document.FeatureField) would be a bit easier to > reason about. For instance a weight of 3 in FeatureField#newSaturationQuery > would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) > rather than a term whose IDF is 3/(k1 + 1). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
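Since (k1 + 1) is the same constant factor for every term, removing it rescales all scores uniformly and cannot reorder documents. The sketch below uses the simplified formula quoted in the issue (with made-up numbers, not Lucene's actual BM25Similarity code) and also illustrates Adrien's point that multiplying a boost by (k1 + 1) recovers the old scores:

```java
// Sketch of the simplified BM25 formula from the issue description:
// boost * IDF * (k1+1) * tf / (tf + norm).  All values below are made up.
public class Bm25Demo {
    static double score(double boost, double idf, double tf, double norm,
                        double k1, boolean withK1Plus1) {
        double saturation = tf / (tf + norm);
        return boost * idf * (withK1Plus1 ? (k1 + 1) * saturation : saturation);
    }

    public static void main(String[] args) {
        double k1 = 1.2, boost = 1.0, idf = 2.0;
        double[][] docs = { {3.0, 1.1}, {1.0, 0.9}, {5.0, 1.4} }; // {tf, norm}
        for (double[] d : docs) {
            double with = score(boost, idf, d[0], d[1], k1, true);
            double without = score(boost, idf, d[0], d[1], k1, false);
            // The with/without ratio is the constant (k1 + 1) = 2.2 for every
            // document, so the ordering of documents is unchanged.  Multiplying
            // boost by (k1 + 1) in the "without" variant reproduces "with".
            System.out.printf("with=%.4f without=%.4f ratio=%.2f%n",
                              with, without, with / without);
        }
    }
}
```

Every printed ratio is exactly k1 + 1 = 2.2, so rankings with and without the factor agree; only absolute score magnitudes differ.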
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23210 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23210/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC 9 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime Error Message: Error from server at https://127.0.0.1:34745/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:34745/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. 
Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([443FA3708CAB845E:AAE7D8E1C2D451CD]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at
[jira] [Commented] (SOLR-12989) facilitate -Dsolr.log.muteconsole opt-out
[ https://issues.apache.org/jira/browse/SOLR-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687176#comment-16687176 ] Jan Høydahl commented on SOLR-12989: So the consequence of not having {{-Dsolr.log.muteconsole}} enabled by default anymore (unless you add it to your solr.in.xx), would be that your console-log would log errors only. Guess that is ok, since the situation we had before the muteconsole option was that console log grew non-stop without rotation. > facilitate -Dsolr.log.muteconsole opt-out > - > > Key: SOLR-12989 > URL: https://issues.apache.org/jira/browse/SOLR-12989 > Project: Solr > Issue Type: Improvement > Components: logging, scripts and tools >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12989.patch > > > Having made a small log4j2.xml edit > {code} > > > - > + > > {code} > I was surprised to find the errors logged in the {{solr.log}} file but not in > the {{solr-*-console.log}} file. > https://lucene.apache.org/solr/guide/7_5/configuring-logging.html#permanent-logging-settings > very helpfully mentioned how the console logger is disabled when running in > the background. > This ticket proposes to facilitate opting out of the muting via a > {{SOLR_LOG_MUTECONSOLE_OPT}} option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
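A hedged sketch of what the proposed opt-out might look like in solr.in.sh. Note that {{SOLR_LOG_MUTECONSOLE_OPT}} is the option proposed in this ticket, not an existing setting, and the exact name and semantics are up to the patch:

```shell
# solr.in.sh - hypothetical fragment, assuming the opt-out proposed in SOLR-12989.
# Today bin/solr mutes the console appenders (-Dsolr.log.muteconsole) when Solr
# runs in the background, so only solr.log receives log events; the proposal
# would let a user keep console logging (and solr-*-console.log) enabled.
SOLR_LOG_MUTECONSOLE_OPT=""        # opt out: leave console logging enabled
#SOLR_LOG_MUTECONSOLE_OPT="-Dsolr.log.muteconsole"   # current default behavior
```

The console log would then again need external rotation, which is the trade-off Jan describes above.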
[jira] [Commented] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687159#comment-16687159 ] Jan Høydahl commented on SOLR-12985: Have you tried * loading the jars using {{<lib/>}} tags in solrconfig? * placing all extraction/lib jars in {{$SOLR_HOME/lib}}? * placing those jars in {{server/solr-webapp/WEB-INF/lib/}}? Where jars are located (i.e. which class loader picks them up) can affect their visibility, and some plugins actually need to be in WEB-INF/lib to work. I guess that SOLR_HOME/lib would work in your case. > ClassNotFound indexing crypted documents > > > Key: SOLR-12985 > URL: https://issues.apache.org/jira/browse/SOLR-12985 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.3.1 >Reporter: Luca >Priority: Critical > Attachments: crypted.xlsx, db.sql, logs.zip, notcrypted.docx, > schema.zip > > > When indexing a BLOB containing an encrypted Office document (xls or xlsx, but > I think all types) it fails with a very bad exception; if the document is not > encrypted, it works fine. > I'm using the DataImportHandler. > The exception also seems to bypass onError=skip or continue, making the > import fail. > I tried to move the libraries from contrib/extraction/lib/ to server/lib, and > the class that is not found changes, so it's a class-loading issue.
> This is the base exception: > Exception while processing: document_index document : > SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, > title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, > abstract= Azioni di recupero intraprese sulle Fatture telefoniche, > insert_date=2019-09-28 00:00:00.0, type=Documenti, > url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: > Unable to read content Processing Document # 1 > at > org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69) > at > org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171) > at > org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267) > at > org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476) > at > org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517) > at > org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415) > at > org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364) > at > org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225) > at > org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452) > at > org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485) > at > org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal > IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1 > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286) > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143) > at > 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165) > ... 10 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:150) > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:102) > at > org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203) > at > org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132) > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > ... 13 more > Caused by: java.lang.ClassNotFoundException: > org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222) > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:148) > ... 17 more -- This message was sent by
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2156 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2156/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test Error Message: Error from server at http://127.0.0.1:46030/uo_igj: KeeperErrorCode = Session expired for /configs/conf1 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46030/uo_igj: KeeperErrorCode = Session expired for /configs/conf1 at __randomizedtesting.SeedInfo.seed([2A1D694AEFCD8810:A24956904131E5E8]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1676) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1703) at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) 
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at
[jira] [Commented] (SOLR-12965) Add JSON faceting support to SolrJ
[ https://issues.apache.org/jira/browse/SOLR-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687085#comment-16687085 ] ASF subversion and git services commented on SOLR-12965: Commit b502ba2882a86958ef8769c3cb2fd65cf2d9c7e1 in lucene-solr's branch refs/heads/branch_7x from [~gerlowskija] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b502ba2 ] SOLR-12965: Add facet support to JsonQueryRequest > Add JSON faceting support to SolrJ > -- > > Key: SOLR-12965 > URL: https://issues.apache.org/jira/browse/SOLR-12965 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5, master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Major > Attachments: SOLR-12965.patch, SOLR-12965.patch, SOLR-12965.patch, > SOLR-12965.patch > > > SOLR-12947 created {{JsonQueryRequest}}, a SolrJ class that makes it easier > for users to make JSON-api requests in their Java/SolrJ code. Currently this > class is missing any sort of faceting capabilities (I'd held off on adding > this as a part of SOLR-12947 just to keep the issues smaller). > This JIRA covers adding that missing faceting capability. > There's a few ways we could handle it, but my first attempt at adding > faceting support will probably have users specify a Map for > each facet that they wish to add, similar to how complex queries were > supported in SOLR-12947. This approach has some pros and cons: > The benefit is how general the approach is- our interface stays resilient to > any future changes to the syntax of the JSON API, and users can build facets > that I'd never thought to explicitly test. The downside is that this doesn't > offer much abstraction for users who are unfamiliar with our JSON syntax- > they still have to know the JSON "schema" to build a map representing their > facet. 
But in practice we can probably mitigate this downside by providing > "facet builders" or some other helper classes to provide this abstraction in > the common case. > Hope to have a skeleton patch up soon. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
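The Map-per-facet approach described above can be sketched in plain Java. The facet name ({{top_categories}}), field name ({{category}}), and the idea that the Map is later handed to {{JsonQueryRequest}} are illustrative assumptions, not the final API; only the nested-Map shape mirroring the JSON faceting DSL is the point:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: building the nested Map a user would pass to the
// proposed Map-based facet API. Names are invented for illustration.
public class FacetMapSketch {

    // Mirrors the JSON faceting DSL:
    //   { "top_categories": { "type": "terms", "field": "category", "limit": 5 } }
    static Map<String, Object> buildFacets() {
        Map<String, Object> termsFacet = new HashMap<>();
        termsFacet.put("type", "terms");
        termsFacet.put("field", "category");
        termsFacet.put("limit", 5);

        Map<String, Object> facets = new HashMap<>();
        facets.put("top_categories", termsFacet);
        return facets;
    }

    public static void main(String[] args) {
        // In SolrJ this Map would be attached to a JsonQueryRequest; the exact
        // method is part of the patch under review, so the call is elided here.
        System.out.println(buildFacets());
    }
}
```

This is the "general but unabstracted" trade-off the comment describes: the Map can express any facet the JSON API supports, but the user must know the DSL's key names ("type", "field", and so on) to build it.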
[jira] [Commented] (SOLR-12947) SolrJ Helper for JSON Request API
[ https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687054#comment-16687054 ] ASF subversion and git services commented on SOLR-12947: Commit 8754970c70088afa941cafb4edd7fd45497ed772 in lucene-solr's branch refs/heads/branch_7x from [~gerlowskija] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8754970 ] SOLR-12947: Add SolrJ helper for making JSON DSL requests The JSON request API is great, but it's hard to use from SolrJ. This commit adds 'JsonQueryRequest', which makes it much easier to write JSON API requests in SolrJ applications. > SolrJ Helper for JSON Request API > - > > Key: SOLR-12947 > URL: https://issues.apache.org/jira/browse/SOLR-12947 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5 >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Attachments: SOLR-12947.patch, SOLR-12947.patch, SOLR-12947.patch > > > The JSON request API is becoming increasingly popular for sending queries or > accessing the JSON faceting functionality. The query DSL is simple and easy > to understand, but crafting requests programmatically is tough in SolrJ. > Currently, SolrJ users must hardcode the JSON body they want their request > to convey. Nothing helps them build the JSON request they're going for, > making use of these APIs manual and painful. > We should see what we can do to alleviate this. I'd like to tackle this work > in two pieces. This (the first piece) would introduce classes that make it > easier to craft non-faceting requests that use the JSON Request API. > Improving JSON Faceting support is a bit more involved (it likely requires > improvements to the Response as well as the Request objects), so I'll aim to > tackle that in a separate JIRA to keep things moving. 
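To illustrate the pain point the commit message and description refer to: before this helper, a SolrJ user had to assemble the JSON Request API body by hand as a raw string. The field and term values below are invented for illustration; the sketch only shows the manual string-building that JsonQueryRequest is meant to replace:

```java
// Hypothetical sketch of the status quo: assembling a JSON Request API
// body as a raw string, with no help from SolrJ.
public class HardcodedJsonBody {

    static String buildBody(String field, String term) {
        // Manual string assembly: error-prone, no escaping, no structure.
        return "{\"query\": \"" + field + ":" + term + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(buildBody("title", "solr"));
    }
}
```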
[jira] [Commented] (SOLR-12947) SolrJ Helper for JSON Request API
[ https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687056#comment-16687056 ] ASF subversion and git services commented on SOLR-12947: Commit 6faddfe3b411fa5d9e9d06cc599d01a608112ed4 in lucene-solr's branch refs/heads/branch_7x from [~gerlowskija] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6faddfe ] SOLR-12947: Misc JsonQueryRequest code cleanup > SolrJ Helper for JSON Request API > - > > Key: SOLR-12947 > URL: https://issues.apache.org/jira/browse/SOLR-12947 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5 >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Attachments: SOLR-12947.patch, SOLR-12947.patch, SOLR-12947.patch > > > The JSON request API is becoming increasingly popular for sending querying or > accessing the JSON faceting functionality. The query DSL is simple and easy > to understand, but crafting requests programmatically is tough in SolrJ. > Currently, SolrJ users must hardcode in the JSON body they want their request > to convey. Nothing helps them build the JSON request they're going for, > making use of these APIs manual and painful. > We should see what we can do to alleviate this. I'd like to tackle this work > in two pieces. This (the first piece) would introduces classes that make it > easier to craft non-faceting requests that use the JSON Request API. > Improving JSON Faceting support is a bit more involved (it likely requires > improvements to the Response as well as the Request objects), so I'll aim to > tackle that in a separate JIRA to keep things moving. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12947) SolrJ Helper for JSON Request API
[ https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski resolved SOLR-12947. Resolution: Fixed Fix Version/s: master (8.0) 7.7 This has been in {{master}} for a while without causing any issues, so I've attached it to {{branch_7x}} as well. It just missed making it into 7.6, but it should be around for the next release in the 7x line, assuming there is one. (I noticed there wasn't a "Fix Version" tag for 7.7, so I'm creating one with this comment. I'm not sure how JIRA categorizes the versions as being released/unreleased yet though. Hopefully I'm not adding this in error.) > SolrJ Helper for JSON Request API > - > > Key: SOLR-12947 > URL: https://issues.apache.org/jira/browse/SOLR-12947 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5 >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Fix For: 7.7, master (8.0) > > Attachments: SOLR-12947.patch, SOLR-12947.patch, SOLR-12947.patch > > > The JSON request API is becoming increasingly popular for sending queries or > accessing the JSON faceting functionality. The query DSL is simple and easy > to understand, but crafting requests programmatically is tough in SolrJ. > Currently, SolrJ users must hardcode the JSON body they want their request > to convey. Nothing helps them build the JSON request they're going for, > making use of these APIs manual and painful. > We should see what we can do to alleviate this. I'd like to tackle this work > in two pieces. This (the first piece) would introduce classes that make it > easier to craft non-faceting requests that use the JSON Request API. 
> Improving JSON Faceting support is a bit more involved (it likely requires > improvements to the Response as well as the Request objects), so I'll aim to > tackle that in a separate JIRA to keep things moving.
[jira] [Commented] (SOLR-12947) SolrJ Helper for JSON Request API
[ https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687055#comment-16687055 ] ASF subversion and git services commented on SOLR-12947: Commit 3dba71d397df44302b257f224757484b9831f23d in lucene-solr's branch refs/heads/branch_7x from [~gerlowskija] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3dba71d ] SOLR-12947: Add MapWriter compatibility to JsonQueryRequest JsonQueryRequest had `setQuery` methods that took in a query either as a String or as a Map. But there was no such overload for MapWriter, a SolrJ interface used to transmit Maps via "push writing" over the wire. This commit adds an overload taking this type, so that users can specify their queries this way as well. This commit also changes how JsonQueryRequest writes out the request, to ensure it uses "push writing" in non-MapWriter cases as well. > SolrJ Helper for JSON Request API > - > > Key: SOLR-12947 > URL: https://issues.apache.org/jira/browse/SOLR-12947 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5 >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Minor > Attachments: SOLR-12947.patch, SOLR-12947.patch, SOLR-12947.patch > > > The JSON request API is becoming increasingly popular for sending queries or > accessing the JSON faceting functionality. The query DSL is simple and easy > to understand, but crafting requests programmatically is tough in SolrJ. > Currently, SolrJ users must hardcode the JSON body they want their request > to convey. Nothing helps them build the JSON request they're going for, > making use of these APIs manual and painful. > We should see what we can do to alleviate this. I'd like to tackle this work > in two pieces. This (the first piece) would introduce classes that make it > easier to craft non-faceting requests that use the JSON Request API. 
> Improving JSON Faceting support is a bit more involved (it likely requires > improvements to the Response as well as the Request objects), so I'll aim to > tackle that in a separate JIRA to keep things moving.
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 3092 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3092/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica Error Message: Timeout waiting for collection to become active Live Nodes: [127.0.0.1:10219_solr, 127.0.0.1:10218_solr, 127.0.0.1:10220_solr, 127.0.0.1:10217_solr, 127.0.0.1:10221_solr] Last available state: DocCollection(testCreateCollectionAddReplica//clusterstate.json/26)={ "replicationFactor":"1", "pullReplicas":"0", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0", "autoCreated":"true", "policy":"c1", "shards":{"shard1":{ "replicas":{"core_node1":{ "core":"testCreateCollectionAddReplica_shard1_replica_n1", "SEARCHER.searcher.maxDoc":0, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":10240, "node_name":"127.0.0.1:10218_solr", "state":"active", "type":"NRT", "INDEX.sizeInGB":9.5367431640625E-6, "SEARCHER.searcher.numDocs":0}}, "range":"8000-7fff", "state":"active"}}} Stack Trace: java.lang.AssertionError: Timeout waiting for collection to become active Live Nodes: [127.0.0.1:10219_solr, 127.0.0.1:10218_solr, 127.0.0.1:10220_solr, 127.0.0.1:10217_solr, 127.0.0.1:10221_solr] Last available state: DocCollection(testCreateCollectionAddReplica//clusterstate.json/26)={ "replicationFactor":"1", "pullReplicas":"0", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0", "autoCreated":"true", "policy":"c1", "shards":{"shard1":{ "replicas":{"core_node1":{ "core":"testCreateCollectionAddReplica_shard1_replica_n1", "SEARCHER.searcher.maxDoc":0, "SEARCHER.searcher.deletedDocs":0, "INDEX.sizeInBytes":10240, "node_name":"127.0.0.1:10218_solr", "state":"active", "type":"NRT", "INDEX.sizeInGB":9.5367431640625E-6, "SEARCHER.searcher.numDocs":0}}, "range":"8000-7fff", "state":"active"}}} at 
__randomizedtesting.SeedInfo.seed([2ED281EBBB84D7A:82CD4D30AAFBA5DC]:0) at org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70) at org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at
[jira] [Created] (SOLR-12989) facilitate -Dsolr.log.muteconsole opt-out
Christine Poerschke created SOLR-12989: -- Summary: facilitate -Dsolr.log.muteconsole opt-out Key: SOLR-12989 URL: https://issues.apache.org/jira/browse/SOLR-12989 Project: Solr Issue Type: Improvement Reporter: Christine Poerschke Assignee: Christine Poerschke Having made a small log4j2.xml edit {code} - + {code} I was surprised to find the errors logged in the {{solr.log}} file but not in the {{solr-*-console.log}} file. https://lucene.apache.org/solr/guide/7_5/configuring-logging.html#permanent-logging-settings very helpfully mentioned how the console logger is disabled when running in the background. This ticket proposes to facilitate opting out of the muting via a {{SOLR_LOG_MUTECONSOLE_OPT}} option.
[jira] [Updated] (SOLR-12989) facilitate -Dsolr.log.muteconsole opt-out
[ https://issues.apache.org/jira/browse/SOLR-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-12989: --- Attachment: SOLR-12989.patch > facilitate -Dsolr.log.muteconsole opt-out > - > > Key: SOLR-12989 > URL: https://issues.apache.org/jira/browse/SOLR-12989 > Project: Solr > Issue Type: Improvement > Components: logging, scripts and tools >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-12989.patch > > > Having made a small log4j2.xml edit > {code} > > > - > + > > {code} > I was surprised to find the errors logged in the {{solr.log}} file but not in > the {{solr-*-console.log}} file. > https://lucene.apache.org/solr/guide/7_5/configuring-logging.html#permanent-logging-settings > very helpfully mentioned how the console logger is disabled when running in > the background. > This ticket proposes to facilitate opting out of the muting via a > {{SOLR_LOG_MUTECONSOLE_OPT}} option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12989) facilitate -Dsolr.log.muteconsole opt-out
[ https://issues.apache.org/jira/browse/SOLR-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-12989: --- Component/s: logging > facilitate -Dsolr.log.muteconsole opt-out > - > > Key: SOLR-12989 > URL: https://issues.apache.org/jira/browse/SOLR-12989 > Project: Solr > Issue Type: Improvement > Components: logging, scripts and tools >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > > Having made a small log4j2.xml edit > {code} > > > - > + > > {code} > I was surprised to find the errors logged in the {{solr.log}} file but not in > the {{solr-*-console.log}} file. > https://lucene.apache.org/solr/guide/7_5/configuring-logging.html#permanent-logging-settings > very helpfully mentioned how the console logger is disabled when running in > the background. > This ticket proposes to facilitate opting out of the muting via a > {{SOLR_LOG_MUTECONSOLE_OPT}} option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12989) facilitate -Dsolr.log.muteconsole opt-out
[ https://issues.apache.org/jira/browse/SOLR-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-12989: --- Component/s: scripts and tools > facilitate -Dsolr.log.muteconsole opt-out > - > > Key: SOLR-12989 > URL: https://issues.apache.org/jira/browse/SOLR-12989 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > > Having made a small log4j2.xml edit > {code} > > > - > + > > {code} > I was surprised to find the errors logged in the {{solr.log}} file but not in > the {{solr-*-console.log}} file. > https://lucene.apache.org/solr/guide/7_5/configuring-logging.html#permanent-logging-settings > very helpfully mentioned how the console logger is disabled when running in > the background. > This ticket proposes to facilitate opting out of the muting via a > {{SOLR_LOG_MUTECONSOLE_OPT}} option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12988) TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName fails reliably on java11: "SSLPeerUnverifiedException: peer not authenticated"
[ https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687008#comment-16687008 ] Hoss Man commented on SOLR-12988: - Example seed as of master/95d01c6583b825b6b87591e4f27002c285ea25fb .. {noformat} [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestMiniSolrCloudClusterSSL -Dtests.method=testSslWithCheckPeerName -Dtests.seed=BD8A4C6891EB95BD -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mzn -Dtests.timezone=UTC -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 3.40s | TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:38383/solr: Error getting replica locations : unable to get autoscaling policy session [junit4]>at __randomizedtesting.SeedInfo.seed([BD8A4C6891EB95BD:E0E9B982ABA0D6C]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) [junit4]>at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:261) [junit4]>at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:233) [junit4]>at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157) [junit4]>at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:139) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit4]>at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit4]>at java.base/java.lang.reflect.Method.invoke(Method.java:566) [junit4]>at java.base/java.lang.Thread.run(Thread.java:834) {noformat} >From the logs... {noformat} [junit4] 2> Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated [junit4] 2>at java.base/sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:526) [junit4] 2>at org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:464) [junit4] 2>at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:397) [junit4] 2>at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355) [junit4] 2>at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) [junit4] 2>at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359) [junit4] 2>at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) [junit4] 2>at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) [junit4] 
2>at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) [junit4] 2>at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) [junit4] 2>at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) [junit4] 2>at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) [junit4] 2>at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) [junit4] 2>at
[jira] [Created] (SOLR-12988) TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName fails reliably on java11: "SSLPeerUnverifiedException: peer not authenticated"
Hoss Man created SOLR-12988: --- Summary: TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName fails reliably on java11: "SSLPeerUnverifiedException: peer not authenticated" Key: SOLR-12988 URL: https://issues.apache.org/jira/browse/SOLR-12988 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Reporter: Hoss Man TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of the time when run with java11 (or java12), regardless of seed, on both master & 7x. The nature of the problem and the way our HTTP stack works suggest it *may* ultimately be a jetty bug (perhaps related to [jetty issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?) *HOWEVER* ... as far as I can tell, whatever the root cause is, it seems to have been fixed on the {{jira/http2}} branch (as of 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting merged to master soon. Filing this issue largely for tracking purposes, although we may also want to use it for discussions/considerations of other backports/fixes to 7x
[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686997#comment-16686997 ] David Smiley commented on SOLR-11078: - Thanks Gerd; you are welcome! It's not every day that we get such nice analysis, so I'm appreciative of your efforts too. I suspect the slower performance on StrField might be because you are using it for all functions (range queries) in addition to lookups, whereas my recommendation is to use StrField only for lookups. Also it doesn't help matters that the terms are longer (bigger) with a decimal encoding instead of 4-8 bytes that we'd get with SOLR-12074. > Solr query performance degradation since Solr 6.4.2 > --- > > Key: SOLR-11078 > URL: https://issues.apache.org/jira/browse/SOLR-11078 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: search, Server >Affects Versions: 6.6, 7.1 > Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 > #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux) > * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) > * 4 CPU, 10GB RAM > Running Solr 6.6.0 with the following JVM settings: > java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC > -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 > -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 > -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled > -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps > -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime > -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation > -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M > 
-Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 > -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST > -Djetty.home=/home/prodza/solrserver/server > -Dsolr.solr.home=/home/prodza/solrserver/../solr > -Dsolr.install.dir=/home/prodza/solrserver > -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties > -Xss256k -Xss256k -Dsolr.log.muteconsole > -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 > /home/prodza/solrserver/../logs -jar start.jar --module=http >Reporter: bidorbuy >Priority: Major > Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, > image-2018-11-14-19-00-39-395.png, image-2018-11-14-19-02-20-216.png, > jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, > screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, > solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, > solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml > > > We are currently running 2 separate Solr servers - refer to screenshots: > * zasolrm02 is running on Solr 6.4.2 > * zasolrm03 is running on Solr 6.6.0 > Both servers have the same OS / JVM configuration and are using their own > indexes. We round-robin load-balance through our Tomcats and notice that > Since Solr 6.4.2 performance has dropped. We have two indices per server > "searchsuggestions" and "tradesearch". There is a noticeable drop in > performance since Solr 6.4.2. > I am not sure if this is perhaps related to metric collation or other > underlying changes. I am not sure if other high transaction users have > noticed similar issues. > *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:* > !compare-6.4.2-6.6.0.png! > *2) This is also visible in the searchsuggestion index:* > !screenshot-1.png! > *3) The Tradesearch index shows the biggest difference:* > !screenshot-2.png! 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
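David's recommendation above (a StrField strictly for exact-match lookups, a point type for range queries) can be sketched as a schema fragment. This is an illustrative sketch only; the field and type names are hypothetical assumptions, not taken from the reporter's attached schema.xml:

```xml
<!-- Illustrative sketch only; field names are hypothetical. -->
<!-- String type strictly for exact-match lookups (term/filter queries). -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="sku_lookup" type="string" indexed="true" stored="true"/>

<!-- Point type for range queries: values are encoded as compact fixed-width
     binary terms rather than the longer decimal-encoded terms of a StrField. -->
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>
<field name="price_cents" type="plong" indexed="true" stored="true"/>
```

With a split like this, range queries such as price_cents:[100 TO 500] hit the point field, while exact lookups stay on the string field.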
[jira] [Commented] (SOLR-12981) Better support JSON faceting responses in SolrJ
[ https://issues.apache.org/jira/browse/SOLR-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686976#comment-16686976 ] Jason Gerlowski commented on SOLR-12981: The updated patch makes some improvements on the original design, but still takes essentially the same approach. There are now a few different classes used to represent the different sorts of facets and buckets present in a faceting response.
* {{NestableJsonFacet}} - represents "query" facets. Has a domain-count getter and can have nested subfacets. Parent of BucketJsonFacet.
* {{BucketBasedJsonFacet}} - the top-level entry for a "terms" or "range" facet. Provides access to the allBuckets, numBuckets, before, after, and between properties.
* {{BucketJsonFacet}} - the individual buckets of a "terms" or "range" facet. Like its parent NestableJsonFacet, it can have subfacets.
* {{HeatmapJsonFacet}} - represents "heatmap" facets.
Having separate classes for the different types of facets generally makes the interfaces a lot cleaner (e.g. {{getMinX()}} is only on HeatmapJsonFacet objects, {{getVal()}} is only on {{BucketJsonFacet}} objects, etc.). The downside of this approach, though, as I touched on in an earlier comment, is that the JSON response for faceting requests isn't 100% unambiguous, semantically. The JSON faceting code doesn't reserve any of its own keywords: if users name their facets "val", or "minX", or any other of a few semantically meaningful words, then the parsing done by this patch will break down. One workaround would be to make the JSON response less ambiguous by adding a "type" field that makes the type of each (sub)facet explicit. Another workaround would involve restricting the facet names that users can specify. I think both of these would be A Good Thing for JSON faceting, though I won't tackle them here. I think the simplest approach is to make this limitation clear in the documentation.
If a user for some reason absolutely can't change their facet naming, they just won't be able to take advantage of these new helper classes and can parse their response manually as they have up to this point. > Better support JSON faceting responses in SolrJ > --- > > Key: SOLR-12981 > URL: https://issues.apache.org/jira/browse/SOLR-12981 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5, master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Major > Attachments: SOLR-12981.patch, SOLR-12981.patch > > > SOLR-12947 created JsonQueryRequest to make using the JSON request API easier > in SolrJ. SOLR-12965 is adding faceting support to this request object. > This subtask of SOLR-12965 involves providing a way to parse the JSON > faceting responses into easy-to-use SolrJ objects. > Currently the only option for users is to manipulate the underlying NamedList > directly. We should create a "JsonFacetingResponse" in the model of > ClusteringResponse, SuggesterResponse, TermsResponse, etc. and add an > accessor to {{QueryResponse}} for getting at the faceting results. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
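The naming ambiguity described above can be made concrete with a request sketch (the field and facet names here are hypothetical, not from the patch's tests): a subfacet deliberately named "val" would land in each response bucket under the same key the bucket already uses for its own term value, so a generic parser cannot tell the bucket's value apart from the subfacet:

```json
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "facet": {
        "val": { "type": "query", "q": "inStock:true" }
      }
    }
  }
}
```

Renaming the subfacet to anything that is not one of the semantically meaningful keys ("val", "count", "minX", etc.) avoids the collision and lets the new helper classes parse the response.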
[jira] [Commented] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686959#comment-16686959 ] Luca commented on SOLR-12985: - I made a clean installation and configuration to isolate the issue. Solr version: 7.3.1. The directory structure is the default one created by the installation script on Ubuntu with OpenJDK 1.8.0_171. I have 3 nodes on the same machine, with binaries NOT shared, in /opt/solr1, 2 and 3. I added only the MySQL connector jar to server/lib. I attached the following:
# Log folder of the first node, from startup through the exception to server shutdown
# Configuration files
# DB script (one table with an id and a blob)
# An encrypted Excel file that triggers the exception
# An unencrypted Word file that causes no problem
Let me know if I forgot something. > ClassNotFound indexing crypted documents > > > Key: SOLR-12985 > URL: https://issues.apache.org/jira/browse/SOLR-12985 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.3.1 >Reporter: Luca >Priority: Critical > Attachments: crypted.xlsx, db.sql, logs.zip, notcrypted.docx, > schema.zip > > > When indexing a BLOB containing an encrypted Office document (xls or xlsx, but I think all types) it fails with a very unhelpful exception; if the document is not encrypted it works fine. > I'm using the DataImportHandler. > The exception also seems to bypass onError=skip or continue, making the import fail. > I tried to move the libraries from contrib/extraction/lib/ to server/lib and the class that is not found changes, so it's a class-loading issue. 
> This is the base exception:
> Exception while processing: document_index document : SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, abstract= Azioni di recupero intraprese sulle Fatture telefoniche, insert_date=2019-09-28 00:00:00.0, type=Documenti, url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to read content Processing Document # 1
> at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69)
> at org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171)
> at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
> at org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364)
> at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
> at org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452)
> at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
> at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
> at org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165)
> ... 10 more
> Caused by: java.io.IOException: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
> at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:150)
> at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:102)
> at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203)
> at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> ... 13 more
> Caused by: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222)
> at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:148)
> ... 17 more
[jira] [Updated] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luca updated SOLR-12985: Attachment: notcrypted.docx -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luca updated SOLR-12985: Attachment: crypted.xlsx -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luca updated SOLR-12985: Attachment: schema.zip -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luca updated SOLR-12985: Attachment: logs.zip -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12985) ClassNotFound indexing crypted documents
[ https://issues.apache.org/jira/browse/SOLR-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luca updated SOLR-12985: Attachment: db.sql > ClassNotFound indexing crypted documents > > > Key: SOLR-12985 > URL: https://issues.apache.org/jira/browse/SOLR-12985 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.3.1 >Reporter: Luca >Priority: Critical > Attachments: db.sql > > > When indexing a BLOB containing an encrypted Office Document (xls or xlsx but > I think all types) it fail with a very bad exception, if the document is not > encrypted works fine. > I'm using the DataImportHandler. > The exception seems also avoid the onError=skip or continue, making the > import fail. > I tried to move the libraries from contrib/extraction/lib/ to server/lib and > the unfounded class changes, so it's a class loading issue. > This is the base exception: > Exception while processing: document_index document : > SolrInputDocument(fields: [site=187, index_type=document, resource_id=3, > title_full=Dati cliente.docx, id=d-XXX-3, publish_date=2018-09-28 00:00:00.0, > abstract= Azioni di recupero intraprese sulle Fatture telefoniche, > insert_date=2019-09-28 00:00:00.0, type=Documenti, > url=http://]):org.apache.solr.handler.dataimport.DataImportHandlerException: > Unable to read content Processing Document # 1 > at > org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:69) > at > org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:171) > at > org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267) > at > org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476) > at > org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517) > at > 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415) > at > org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:364) > at > org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225) > at > org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:452) > at > org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485) > at > org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal > IOException from org.apache.tika.parser.microsoft.OfficeParser@500efcf1 > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:286) > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143) > at > org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:165) > ... 10 more > Caused by: java.io.IOException: java.lang.ClassNotFoundException: > org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:150) > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:102) > at > org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:203) > at > org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:132) > at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > ... 
13 more > Caused by: java.lang.ClassNotFoundException: > org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:565) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:222) > at > org.apache.poi.poifs.crypt.EncryptionInfo.(EncryptionInfo.java:148) > ... 17 more -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
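The symptom the reporter describes (moving jars between contrib/extraction/lib and server/lib changes which class goes missing) is typical of Tika and POI resolving in different classloaders. One common approach — a sketch of the stock Solr configuration convention, not a confirmed fix for this issue — is to load all of the DIH and extraction jars through `<lib>` directives in the core's solrconfig.xml so they end up in one classloader:

```xml
<!-- solrconfig.xml: load the DataImportHandler and extraction jars (Tika,
     POI and their dependencies) together, so optional POI classes such as
     org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder are visible
     to the same classloader that loads OfficeParser. Paths assume the stock
     Solr layout; adjust them to your install. -->
<lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
```

This keeps the jars out of server/lib entirely, which avoids the split-classloader situation where OfficeParser can be found but the agile-encryption builder it loads reflectively cannot.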
[jira] [Updated] (SOLR-12981) Better support JSON faceting responses in SolrJ
[ https://issues.apache.org/jira/browse/SOLR-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski updated SOLR-12981: --- Attachment: SOLR-12981.patch > Better support JSON faceting responses in SolrJ > --- > > Key: SOLR-12981 > URL: https://issues.apache.org/jira/browse/SOLR-12981 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java, SolrJ >Affects Versions: 7.5, master (8.0) >Reporter: Jason Gerlowski >Assignee: Jason Gerlowski >Priority: Major > Attachments: SOLR-12981.patch, SOLR-12981.patch > > > SOLR-12947 created JsonQueryRequest to make using the JSON request API easier > in SolrJ. SOLR-12965 is adding faceting support to this request object. > This subtask of SOLR-12965 involves providing a way to parse the JSON > faceting responses into easy-to-use SolrJ objects. > Currently the only option for users is to manipulate the underlying NamedList > directly. We should create a "JsonFacetingResponse" in the model of > ClusteringResponse, SuggesterResponse, TermsResponse, etc. and add an > accessor to {{QueryResponse}} for getting at the faceting results. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user cbismuth commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233556789 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -117,6 +117,7 @@ public CacheHelper getCoreCacheHelper() { private final PointValues in; private final QueryTimeout queryTimeout; +/** Constructor **/ --- End diff -- My bad, I thought I had used this class in tests, but I didn't. I've turned it `private` in 6f48d4f851b4eccf4c3d8c4cfbe6d540e13b4d85, thanks :+1: --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user pzygielo commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233553589 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -117,6 +117,7 @@ public CacheHelper getCoreCacheHelper() { private final PointValues in; private final QueryTimeout queryTimeout; +/** Constructor **/ --- End diff -- My comment > But isn't it worse now? precommit is satisfied, but nothing more than (obvious) noise was added. was for this line. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4925 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4925/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC 5 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:51633/solr Stack Trace: java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:51633/solr at __randomizedtesting.SeedInfo.seed([EEDE29EE60EA9EE6:2F2E50424DBA5441]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:51948/solr Stack Trace:
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23209 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23209/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 26 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([65D3AD738352CB67]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest Error Message: SolrCore.getOpenCount()==2 Stack Trace: java.lang.RuntimeException: SolrCore.getOpenCount()==2 at __randomizedtesting.SeedInfo.seed([65D3AD738352CB67]:0) at org.apache.solr.util.TestHarness.close(TestHarness.java:380) at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:297) at jdk.internal.reflect.GeneratedMethodAccessor54.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user cbismuth commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233551529 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -100,13 +109,156 @@ public CacheHelper getCoreCacheHelper() { } + /** + * Wrapper class for another PointValues implementation that is used by ExitableFields. + */ + public static class ExitablePointValues extends PointValues { + +private final PointValues in; +private final QueryTimeout queryTimeout; + +public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) { --- End diff -- I'm not sure I understand: the `set -e` flag prevents the second Ant command line from being run if the `precommit` task fails, and therefore the latest output in my terminal is a build failure. The `set -x` (debug option) flag would add a lot of noise. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
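The `set -e` behaviour being discussed is easy to demonstrate with any two commands; here `false` stands in for a failing `ant precommit` (a hypothetical substitute, since running Ant itself isn't needed to show the shell semantics):

```shell
# `set -e` inside the subshell makes it stop at the first failing command,
# so "running tests" is never printed -- just as a failed `ant precommit`
# stops a `set -e` script before the second Ant invocation runs.
sh -c '
  set -e
  echo "running precommit"
  false                # stands in for a failing "ant precommit"
  echo "running tests" # never reached
' || echo "build failed, second step skipped"
```

The trade-off raised in the review thread is that `set -e` silently skips the rest of the script, whereas `set -x` would show every command but at the cost of very verbose output.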
[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686906#comment-16686906 ] bidorbuy commented on SOLR-11078: - Hi all, ex CTO from bidorbuy here. I had planned to migrate to 7.5 before leaving the company at the end of November, but other priorities took over. Curiosity still got the better of me, and over the last week I pulled up three 7.5 instances running on 8 cores and 10GB RAM (the Solr and JVM tuning is unchanged from the 2017 configs I attached in the original post). I apologise for the lack of depth of the analysis - it is not how I typically work, but instead of not providing a follow-up I thought "better little than nothing". I am sure that my successor / team will follow up with an in-depth analysis by January 2019. The test replayed Solr queries from an existing 7.4 Solr install across the 7.5 nodes, running the same production load against different schemas using the main index (7.5m documents, 4GB): * Green: now-deprecated Trie*-fields (this is what we are currently running in production) * Orange: Point*-fields * Blue: StrField Trie*-fields (Green, or bottom line) still produced the lowest overall CPU and memory utilisation. Throughput: 19.5m queries, 21ms avg query time, 7.3sec max query time Point*-fields (Orange, or middle line) consumed more resources. Throughput: 18.6m queries, 35ms avg query time, 10sec max query time StrField (Blue, or top line) visibly used more resources, and GC was also more frequent. Throughput: 17.6m queries, 42ms avg query time, 13sec max query time Admittedly, I did not tune the JVM (using G1), and I am sure that with more time I could have tuned resource utilisation better, but I doubt such tuning would have dramatically improved throughput or query time. Perhaps I misunderstood this, but I thought Point*-field types would outperform Trie*-fields in range queries.
In my tests I did not notice a dramatic difference for range queries when using Point*-fields. I did, however, notice that some simple field/value queries underperformed with Point*-fields compared to Trie*-fields. I know it was mentioned that if Point fields are not performing we should use StrFields, and maybe I missed a fundamental configuration step, but StrField generally performed worse than Point or Trie fields. I wish I had more time to analyse queries in more detail and perhaps play with variations in the schema that might help improve this. As you can see from our schema, it is pretty much "vanilla", and I am surprised that other users have not raised issues - in our business (ecommerce) a difference of 14ms in average queries (or a variance of a good 2.7 seconds in poorly performing queries) has a dramatic effect. Perhaps in other use cases such small variances between field types do not matter. !image-2018-11-14-19-02-20-216.png! I am unfortunately not going to be able to contribute to this issue any more, as I will not have access to the Solr environments. I would like to thank all Solr contributors for your outstanding work, deep knowledge and commitment, as well as the helping hand you lend in assisting us with questions, issues and patches. I am grateful to have had the opportunity to use Solr when it started "becoming mainstream" as version 3.6, and thank you all for your contributions! All the best ~ Gerd Naschenweng. > Solr query performance degradation since Solr 6.4.2 > --- > > Key: SOLR-11078 > URL: https://issues.apache.org/jira/browse/SOLR-11078 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: search, Server >Affects Versions: 6.6, 7.1 > Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 > #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux) > * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) > * 4 CPU, 10GB RAM > Running Solr 6.6.0 with the following JVM settings: > java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC > -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 > -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 > -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled > -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps > -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime > -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation > -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
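The three schema variants benchmarked in the comment above would look roughly like this. Field and type names here are illustrative sketches, not taken from the attached schema.xml:

```xml
<!-- Hypothetical Solr 7.x field definitions for the three variants tested. -->

<!-- Variant 1 (Green): deprecated Trie numeric, indexed with precision-step terms -->
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" docValues="true"/>
<field name="price_trie" type="tlong" indexed="true" stored="true"/>

<!-- Variant 2 (Orange): Point numeric, backed by a BKD tree sorted by value -->
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>
<field name="price_point" type="plong" indexed="true" stored="true"/>

<!-- Variant 3 (Blue): plain string, exact-match terms with no range optimisation -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true"/>
<field name="price_str" type="string" indexed="true" stored="true"/>
```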
[GitHub] lucene-solr issue #496: LUCENE-8463: Early-terminate queries sorted by SortF...
Github user cbismuth commented on the issue: https://github.com/apache/lucene-solr/pull/496 Lucene tests are green and the PR is up to date with the latest `master` branch. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user pzygielo commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233548542 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -100,13 +109,156 @@ public CacheHelper getCoreCacheHelper() { } + /** + * Wrapper class for another PointValues implementation that is used by ExitableFields. + */ + public static class ExitablePointValues extends PointValues { + +private final PointValues in; +private final QueryTimeout queryTimeout; + +public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) { --- End diff -- But isn't it worse now? precommit is satisfied, but nothing more than (obvious) noise was added. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity
[ https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686877#comment-16686877 ] Michael Gibney commented on LUCENE-8563: "assuming a single similarity" -- is this something that we want to assume? If every field similarity uses the same k1 param, then sure, relative ordering among fields is maintained. But if we're using these scores outside of the context of single-similarity, and intend to preserve the ability to adjust the k1 param, it's worth noting that this change fundamentally alters the effect of the k1 param on absolute scores (and thus also on relative scores across similarities). Namely, removing k1 from the numerator places a hard cap on the score, regardless of TF or k1 setting. The concept of saturation is preserved, but with no numerator k1, saturation is implemented strictly by depressing scores (with respect to the hard cap, by varying amounts according to TF) as k1 increases. The model with k1 in the numerator strikes me as being more flexible, both depressing scores for lower TF _and increasing_ scores for higher TF, around an inflection point determined by length norms and the value of b. I'm sure this change would be appropriate for some scenarios, but it's a fundamental change that could in some cases have significant downstream consequences, with no easy way (as far as I can tell) to maintain existing behavior. > Remove k1+1 from the numerator of BM25Similarity > - > > Key: LUCENE-8563 > URL: https://issues.apache.org/jira/browse/LUCENE-8563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > Our current implementation of BM25 does > {code:java} > boost * IDF * (k1+1) * tf / (tf + norm) > {code} > As (k1+1) is a constant, it is the same for every term and doesn't modify > ordering. 
It is often omitted, and I found out that "The Probabilistic > Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and > Zaragoza even describes adding (k1+1) to the numerator as a variant whose > benefit is to be more comparable with Robertson/Sparck-Jones weighting, which > we don't care about. > {quote}A common variant is to add a (k1 + 1) component to the > numerator of the saturation function. This is the same for all > terms, and therefore does not affect the ranking produced. > The reason for including it was to make the final formula > more compatible with the RSJ weight used on its own > {quote} > Should we remove it from BM25Similarity as well? > A side-effect that I'm interested in is that integrating other score > contributions (eg. via oal.document.FeatureField) would be a bit easier to > reason about. For instance a weight of 3 in FeatureField#newSaturationQuery > would have a similar impact to a term whose IDF is 3 (and thus docFreq ~= 5%) > rather than a term whose IDF is 3/(k1 + 1). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
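The behaviour discussed in this thread can be checked numerically. A minimal sketch of the two scoring variants, using the formula from the issue description and letting `norm` stand in for Lucene's k1*(1-b+b*dl/avgdl):

```python
def bm25_with_k1_numerator(tf, idf, k1, norm, boost=1.0):
    # Current Lucene form: boost * IDF * (k1+1) * tf / (tf + norm).
    # As tf grows, the score approaches boost * IDF * (k1+1).
    return boost * idf * (k1 + 1) * tf / (tf + norm)

def bm25_without_k1_numerator(tf, idf, k1, norm, boost=1.0):
    # Proposed form: since tf / (tf + norm) < 1 for any finite tf, the score
    # is now hard-capped at boost * IDF, and raising k1 (which raises norm)
    # can only depress scores toward that cap.
    return boost * idf * tf / (tf + norm)

# With a fixed norm, the two variants differ by exactly the constant (k1+1),
# so relative ordering is unchanged -- the point made in the issue.
print(bm25_with_k1_numerator(2, 3.0, 1.2, 1.2))     # → 4.125
print(bm25_without_k1_numerator(2, 3.0, 1.2, 1.2))  # → 1.875 (= 4.125 / 2.2)
```

This also illustrates Michael's concern: in the proposed form, a term's contribution can never exceed boost * IDF, whereas the current form's asymptote scales with k1.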
[jira] [Comment Edited] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686865#comment-16686865 ] Joel Bernstein edited comment on SOLR-12632 at 11/14/18 4:57 PM: - I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection. was (Author: joel.bernstein): I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection. > Completely remove Trie fields > - > > Key: SOLR-12632 > URL: https://issues.apache.org/jira/browse/SOLR-12632 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Priority: Blocker > Labels: numeric-tries-to-points > Fix For: master (8.0) > > > Trie fields were deprecated in Solr 7.0. We should remove them completely > before we release Solr 8.0. 
> Unresolved points-related issues: > [https://jira.apache.org/jira/issues/?jql=project=SOLR+AND+labels=numeric-tries-to-points+AND+resolution=unresolved] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
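The distinction Joel draws above (point fields used as facet buckets vs. inside aggregations) corresponds to JSON Facet requests of roughly this shape. Field names are illustrative, not from a real schema:

```json
{
  "slow_buckets": {
    "type": "terms",
    "field": "price_point"
  },
  "fast_buckets": {
    "type": "terms",
    "field": "category_str",
    "facet": {
      "avg_price": "avg(price_point)"
    }
  }
}
```

Here `slow_buckets` buckets directly on a point field (case #1, the very slow path), while `fast_buckets` buckets on a StrField copy and only touches the point field inside the `avg` aggregation (case #2, which turned out to be fast) — which is why the StrField copy-field workaround helps in this scenario.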
[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686873#comment-16686873 ] Adrien Grand commented on SOLR-11078: - I'm not aware of such benchmarks, but I've had a couple Elasticsearch users who reported slowdowns after we changed numeric fields to be backed by points rather than the inverted index. If your exact query doesn't match many documents, performance would be mostly the same, but otherwise the fact that points need to collect all doc IDs into an array and sort them when the inverted index has a ready-to-use DISI thanks to postings makes them slower. > Solr query performance degradation since Solr 6.4.2 > --- > > Key: SOLR-11078 > URL: https://issues.apache.org/jira/browse/SOLR-11078 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: search, Server >Affects Versions: 6.6, 7.1 > Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 > #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux) > * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) > * 4 CPU, 10GB RAM > Running Solr 6.6.0 with the following JVM settings: > java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC > -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 > -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 > -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled > -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps > -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime > -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation > -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M > 
-Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 > -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST > -Djetty.home=/home/prodza/solrserver/server > -Dsolr.solr.home=/home/prodza/solrserver/../solr > -Dsolr.install.dir=/home/prodza/solrserver > -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties > -Xss256k -Xss256k -Dsolr.log.muteconsole > -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 > /home/prodza/solrserver/../logs -jar start.jar --module=http >Reporter: bidorbuy >Priority: Major > Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, > image-2018-11-14-19-00-39-395.png, image-2018-11-14-19-02-20-216.png, > jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, > screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, > solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, > solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml > > > We are currently running 2 separate Solr servers - refer to screenshots: > * zasolrm02 is running on Solr 6.4.2 > * zasolrm03 is running on Solr 6.6.0 > Both servers have the same OS / JVM configuration and are using their own > indexes. We round-robin load-balance through our Tomcats and notice that > Since Solr 6.4.2 performance has dropped. We have two indices per server > "searchsuggestions" and "tradesearch". There is a noticeable drop in > performance since Solr 6.4.2. > I am not sure if this is perhaps related to metric collation or other > underlying changes. I am not sure if other high transaction users have > noticed similar issues. > *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:* > !compare-6.4.2-6.6.0.png! > *2) This is also visible in the searchsuggestion index:* > !screenshot-1.png! > *3) The Tradesearch index shows the biggest difference:* > !screenshot-2.png! 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
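Adrien's explanation can be sketched abstractly: a postings list is already a sorted doc-ID iterator, while a point tree yields matching doc IDs in value order, so on a high-match query they must be buffered and sorted before other clauses can intersect them. A toy model (not Lucene code; data is made up for illustration):

```python
# Postings: per-term doc-ID lists, stored pre-sorted by doc ID.
postings = {"color:red": [1, 4, 9]}

# Points: (value, docID) pairs stored sorted by *value*, like a BKD leaf scan.
points = [(5, 9), (7, 1), (8, 4), (42, 2)]

def postings_match(term):
    # Doc IDs come back already sorted -- a ready-to-use iterator (DISI).
    return postings.get(term, [])

def points_range_match(lo, hi):
    # Doc IDs arrive in value order, so every hit must be collected into an
    # array and sorted by doc ID first; the cost grows with the match count.
    hits = [doc for value, doc in points if lo <= value <= hi]
    hits.sort()
    return hits
```

When the range matches few documents the extra collect-and-sort step is negligible, which matches the observation that only broad queries slowed down after the switch to points.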
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686865#comment-16686865 ] Joel Bernstein commented on SOLR-12632: --- I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection. > Completely remove Trie fields > - > > Key: SOLR-12632 > URL: https://issues.apache.org/jira/browse/SOLR-12632 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Priority: Blocker > Labels: numeric-tries-to-points > Fix For: master (8.0) > > > Trie fields were deprecated in Solr 7.0. We should remove them completely > before we release Solr 8.0. > Unresolved points-related issues: > [https://jira.apache.org/jira/issues/?jql=project=SOLR+AND+labels=numeric-tries-to-points+AND+resolution=unresolved] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] bidorbuy updated SOLR-11078: Attachment: image-2018-11-14-19-02-20-216.png > Solr query performance degradation since Solr 6.4.2 > --- > > Key: SOLR-11078 > URL: https://issues.apache.org/jira/browse/SOLR-11078 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: search, Server >Affects Versions: 6.6, 7.1 > Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 > #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux) > * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) > * 4 CPU, 10GB RAM > Running Solr 6.6.0 with the following JVM settings: > java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC > -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 > -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 > -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled > -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps > -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime > -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation > -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M > -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 > -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST > -Djetty.home=/home/prodza/solrserver/server > -Dsolr.solr.home=/home/prodza/solrserver/../solr > -Dsolr.install.dir=/home/prodza/solrserver > -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties > -Xss256k -Xss256k -Dsolr.log.muteconsole > -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 > 
/home/prodza/solrserver/../logs -jar start.jar --module=http >Reporter: bidorbuy >Priority: Major > Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, > image-2018-11-14-19-00-39-395.png, image-2018-11-14-19-02-20-216.png, > jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, > screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, > solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, > solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml > > > We are currently running 2 separate Solr servers - refer to screenshots: > * zasolrm02 is running on Solr 6.4.2 > * zasolrm03 is running on Solr 6.6.0 > Both servers have the same OS / JVM configuration and are using their own > indexes. We round-robin load-balance through our Tomcats and notice that > Since Solr 6.4.2 performance has dropped. We have two indices per server > "searchsuggestions" and "tradesearch". There is a noticeable drop in > performance since Solr 6.4.2. > I am not sure if this is perhaps related to metric collation or other > underlying changes. I am not sure if other high transaction users have > noticed similar issues. > *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:* > !compare-6.4.2-6.6.0.png! > *2) This is also visible in the searchsuggestion index:* > !screenshot-1.png! > *3) The Tradesearch index shows the biggest difference:* > !screenshot-2.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686865#comment-16686865 ] Joel Bernstein edited comment on SOLR-12632 at 11/14/18 5:01 PM: - I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets (field facets) 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection. was (Author: joel.bernstein): I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection.
[jira] [Updated] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] bidorbuy updated SOLR-11078: Attachment: image-2018-11-14-19-00-39-395.png
[jira] [Comment Edited] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686865#comment-16686865 ] Joel Bernstein edited comment on SOLR-12632 at 11/14/18 4:56 PM: - I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used the points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection. was (Author: joel.bernstein): I've been running queries locally and I was incorrect. This was driven by a very large drop in performance we were seeing in certain JSON facet queries when point fields were involved. But they were involved in two ways: 1) As facet buckets 2) In aggregations I removed the point fields from facet buckets and just used them points in aggregations. The points were actually faster. So it was #1 which is very, very slow. And in those scenarios the StrField copy does work. So, I'll remove my objection.
[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2
[ https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686856#comment-16686856 ] David Smiley commented on SOLR-11078: - [~jpountz] or [~mikemccand] are you aware of any Lucene benchmarking we have (or had done once and reported somewhere) on the performance difference for exact lookup queries on a Points field compared to the Trie fields?
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686838#comment-16686838 ] Yonik Seeley commented on SOLR-12632: - If docValues are enabled, hopefully current point fields aren't slower for things like statistics. But I could see them being slower for faceting (which uses single-value lookups for things like refinement, or calculating the domain for a sub-facet).
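[Editor's note] To make the two usages concrete, here is a minimal JSON Facet request of the shape under discussion (collection and field names are hypothetical). Using a point field such as `price_p` only inside an aggregation reads docValues and is the fast case; making the `terms` buckets themselves range over a point field (e.g. `"field": "price_p"`) is the slow case that the single-value lookups mentioned above would hit.

```json
{
  "categories": {
    "type": "terms",
    "field": "category_s",
    "facet": {
      "avg_price": "avg(price_p)"
    }
  }
}
```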
[jira] [Commented] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, logging should be reduced
[ https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686845#comment-16686845 ] Franck Perrin commented on SOLR-9120: - Thank you, I'll go on to the user's list then. > LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" > for inconsequential NoSuchFileException situations -- looks scary but is not > a problem, logging should be reduced > - > > Key: SOLR-9120 > URL: https://issues.apache.org/jira/browse/SOLR-9120 > Project: Solr > Issue Type: Improvement >Affects Versions: 5.5, 6.0 >Reporter: Markus Jelsma >Assignee: Hoss Man >Priority: Major > Fix For: 7.2, master (8.0) > > Attachments: SOLR-9120.patch, SOLR-9120.patch, SOLR-9120.patch > > > Beginning with Solr 5.5, the LukeRequestHandler started attempting to report > the name and file size of the segments file for the _current_ > Searcher+IndexReader in use by Solr -- however the filesize information is > not always available from the Directory in cases where "on disk" commits have > caused that file to be removed, for example... > * you perform index updates & commits w/o "newSearcher" being opened > * you "concurrently" make requests to the LukeRequestHandler or the > CoreAdminHandler requesting "STATUS" (ie: after the commit, before any > newSearcher) > ** these requests can come from the Admin UI passively if it's open in a > browser > In situations like this, a decision was made in SOLR-8587 to log a WARNing in > the event that the segments file size could not be determined -- but these > WARNing messages look scary and have led (many) users to assume something is > wrong with their solr index. > We should reduce the severity of these log messages, and improve the wording > to make it more clear that this is not a fundamental problem with the index. > > Here's some trivial steps to reproduce the WARN message... > {noformat} > $ bin/solr -e techproducts > ... > $ tail -f example/techproducts/logs/solr.log > ...
> {noformat} > In another terminal... > {noformat} > $ curl -H 'Content-Type: application/json' > 'http://localhost:8983/solr/techproducts/update?commit=true&openSearcher=false' > --data-binary '[{"id":"HOSS"}]' > ... > $ curl 'http://localhost:8983/solr/techproducts/admin/luke' > ... > {noformat} > When the "/admin/luke" URL is hit, this will show up in the logs – but the > luke request will finish correctly... > {noformat} > WARN - 2017-11-08 17:23:44.574; [ x:techproducts] > org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length > for [segments_2] > java.nio.file.NoSuchFileException: > /home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2 > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) > at > sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) > at > sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) > at java.nio.file.Files.readAttributes(Files.java:1737) > at java.nio.file.Files.size(Files.java:2332) > at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) > at > org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128) > at > org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611) > at > org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584) > at > org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177) > ...
> INFO - 2017-11-08 17:23:44.587; [ x:techproducts] > org.apache.solr.core.SolrCore; [techproducts] webapp=/solr path=/admin/luke > params={} status=0 QTime=15 > {noformat}
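[Editor's note] As a minimal, self-contained sketch of the quieter handling this issue asks for (plain JDK code only; the helper name and the -1 "unknown" sentinel are invented for illustration and are not the actual LukeRequestHandler fix):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SegmentsFileLengthSketch {

    // Return the file length, or -1 when the file has already been removed --
    // e.g. a segments_NNN file deleted by a concurrent commit. A caller could
    // log this case at DEBUG rather than WARN, since it is an expected race,
    // not an index problem.
    static long fileLengthOrUnknown(Path p) {
        try {
            return Files.size(p);
        } catch (NoSuchFileException e) {
            return -1; // expected under concurrent commits
        } catch (IOException e) {
            return -1; // any other I/O failure: also report "unknown"
        }
    }

    public static void main(String[] args) throws IOException {
        // A path that does not exist: the old code would WARN with a stack trace here.
        System.out.println(fileLengthOrUnknown(Paths.get("segments_does_not_exist"))); // -1
        // A real file: length is reported normally.
        Path tmp = Files.createTempFile("segments_", "");
        Files.write(tmp, new byte[] {1, 2, 3});
        System.out.println(fileLengthOrUnknown(tmp)); // 3
        Files.delete(tmp);
    }
}
```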
[jira] [Comment Edited] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686832#comment-16686832 ] Joel Bernstein edited comment on SOLR-12632 at 11/14/18 4:35 PM: - When you sum a point field it is slower than summing a trie field. was (Author: joel.bernstein): When you sum a point field it is slower the summing a trie field.
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686837#comment-16686837 ] David Smiley commented on SOLR-12632: - Aggregations like that should come from DocValues which is entirely separate from either the Points index or Terms index. What am I missing?
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user cbismuth commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233524064 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -100,13 +109,156 @@ public CacheHelper getCoreCacheHelper() { } + /** + * Wrapper class for another PointValues implementation that is used by ExitableFields. + */ + public static class ExitablePointValues extends PointValues { + +private final PointValues in; +private final QueryTimeout queryTimeout; + +public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) { --- End diff -- That's it, documentation fixed and `precommit` task really green :sweat_smile: thanks! --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686832#comment-16686832 ] Joel Bernstein commented on SOLR-12632: --- When you sum a point field it is slower than summing a trie field.
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686831#comment-16686831 ] David Smiley commented on SOLR-12632: - [~joel.bernstein] can you please explain how "aggregation functions" are impacted? I'm not sure what you mean by that, honestly.
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user cbismuth commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233517397 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -100,13 +109,156 @@ public CacheHelper getCoreCacheHelper() { } + /** + * Wrapper class for another PointValues implementation that is used by ExitableFields. + */ + public static class ExitablePointValues extends PointValues { + +private final PointValues in; +private final QueryTimeout queryTimeout; + +public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) { --- End diff -- Issue found: I forgot to add a `set -e` statement in my tiny bash script below to stop on first failure. Let me run another `precommit` Ant task and push. ```bash #!/usr/bin/env bash set -e cd ${HOME}/git/lucene-solr find . -name "*.lck" -delete ant clean compile precommit ant -f lucene/build.xml test ```
[jira] [Comment Edited] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686785#comment-16686785 ] Joel Bernstein edited comment on SOLR-12632 at 11/14/18 4:12 PM: - Trie fields are also faster in aggregation functions. So copying to a StrField won't help in those scenarios. I can add some performance numbers to this ticket so we quantify how much faster. At this time I agree with Yonik that the performance issues outweigh the need to remove deprecated Trie fields. was (Author: joel.bernstein): Trie fields are also faster in aggregation functions. So copying to a StrField won't help in those scenarios. I can add some performance numbers to this ticket so we quantify how much faster.
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686785#comment-16686785 ] Joel Bernstein commented on SOLR-12632: --- Trie fields are also faster in aggregation functions. So copying to a StrField won't help in those scenarios. I can add some performance numbers to this ticket so we quantify how much faster.
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686768#comment-16686768 ] Yonik Seeley commented on SOLR-12632: - The performance hit seems more important than exactly when deprecated functionality is removed. We should have a superior single numeric field that is better at both range queries and single value matches before we remove the existing field (trie) that can do both well.
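[Editor's note] For reference, the "StrField copy" workaround discussed in this thread looks roughly like the following schema fragment (field names are hypothetical; `pint` and `string` are the field type names used by the default Solr 7 configsets). Range queries go against the point field, while exact-match lookups can use the string copy:

```xml
<field name="price" type="pint" indexed="true" stored="false" docValues="true"/>
<field name="price_str" type="string" indexed="true" stored="false"/>
<copyField source="price" dest="price_str"/>
```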
[GitHub] lucene-solr pull request #497: LUCENE-8026: ExitableDirectoryReader does not...
Github user jpountz commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/497#discussion_r233506349 --- Diff: lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java --- @@ -100,13 +109,156 @@ public CacheHelper getCoreCacheHelper() { } + /** + * Wrapper class for another PointValues implementation that is used by ExitableFields. + */ + public static class ExitablePointValues extends PointValues { + +private final PointValues in; +private final QueryTimeout queryTimeout; + +public ExitablePointValues(PointValues in, QueryTimeout queryTimeout) { --- End diff -- I'm getting precommit failures about missing javadocs for this constructor, aren't you?
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 884 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/884/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([F5630563535D130E]:0) FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([F5630563535D130E]:0) Build Log: [...truncated 15321 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue [junit4] 2> 1061113 INFO (SUITE-TestSimGenericDistributedQueue-seed#[F5630563535D130E]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue_F5630563535D130E-001\init-core-data-001 [junit4] 2> 1061115 WARN (SUITE-TestSimGenericDistributedQueue-seed#[F5630563535D130E]-worker) [] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1 [junit4] 2> 1061116 INFO (SUITE-TestSimGenericDistributedQueue-seed#[F5630563535D130E]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 1061121 INFO (SUITE-TestSimGenericDistributedQueue-seed#[F5630563535D130E]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4] 2> 1061123 INFO 
(TEST-TestSimGenericDistributedQueue.testPeekElements-seed#[F5630563535D130E]) [] o.a.s.SolrTestCaseJ4 ###Starting testPeekElements [junit4] 2> 1062567 INFO (TEST-TestSimGenericDistributedQueue.testPeekElements-seed#[F5630563535D130E]) [] o.a.s.SolrTestCaseJ4 ###Ending testPeekElements [junit4] 2> 1062569 INFO (TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[F5630563535D130E]) [] o.a.s.SolrTestCaseJ4 ###Starting testLocallyOffer [junit4] 2> 1062783 INFO (TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[F5630563535D130E]) [] o.a.s.SolrTestCaseJ4 ###Ending testLocallyOffer [junit4] 2> 1062785 INFO (TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[F5630563535D130E]) [] o.a.s.SolrTestCaseJ4 ###Starting testDistributedQueue [junit4] 2> Nov 14, 2018 9:06:12 QN com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate [junit4] 2> WARNING: Suite execution timed out: org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue [junit4] 2> jstack at approximately timeout time [junit4] 2> "TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[F5630563535D130E]" ID=8689 TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@175413c [junit4] 2>at sun.misc.Unsafe.park(Native Method) [junit4] 2>- timed waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@175413c [junit4] 2>at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) [junit4] 2>at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) [junit4] 2>at org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:194) [junit4] 2>at org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:167) [junit4] 2>at org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueue(TestSimDistributedQueue.java:74) [junit4] 2>at 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue(TestSimGenericDistributedQueue.java:37) [junit4] 2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit4] 2>at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit4] 2>at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit4] 2>at java.lang.reflect.Method.invoke(Method.java:498) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) [junit4] 2>at
[jira] [Commented] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, loggi
[ https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686744#comment-16686744 ] Erick Erickson commented on SOLR-9120: -- It's highly unlikely this is related to your OOMs. Support: sure, see the Solr user's list from here: http://lucene.apache.org/solr/community.html My guess would be you're sorting or grouping or faceting on a field that does not have docValues enabled, but let's move the rest of the discussion over to the user's list. > LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" > for inconsequential NoSuchFileException situations -- looks scary but is not > a problem, logging should be reduced > - > > Key: SOLR-9120 > URL: https://issues.apache.org/jira/browse/SOLR-9120 > Project: Solr > Issue Type: Improvement >Affects Versions: 5.5, 6.0 >Reporter: Markus Jelsma >Assignee: Hoss Man >Priority: Major > Fix For: 7.2, master (8.0) > > Attachments: SOLR-9120.patch, SOLR-9120.patch, SOLR-9120.patch > > > Beginning with Solr 5.5, the LukeRequestHandler started attempting to report > the name and file size of the segments file for the _current_ > Searcher+IndexReader in use by Solr -- however the filesize information is > not always available from the Directory in cases where "on disk" commits have > caused that file to be removed, for example... > * you perform index updates & commits w/o "newSearcher" being opened > * you "concurrently" make requests to the LukeRequestHandler or the > CoreAdminHandler requesting "STATUS" (ie: after the commit, before any > newSearcher) > ** these requests can come from the Admin UI passively if it's open in a > browser > In situations like this, a decision was made in SOLR-8587 to log a WARNing in > the event that the segments file size could not be determined -- but these > WARNing messages look scary and have led (many) users to assume something is > wrong with their solr index. 
> We should reduce the severity of these log messages, and improve the wording > to make it more clear that this is not a fundamental problem with the index. > > Here are some trivial steps to reproduce the WARN message... > {noformat} > $ bin/solr -e techproducts > ... > $ tail -f example/techproducts/logs/solr.log > ... > {noformat} > In another terminal... > {noformat} > $ curl -H 'Content-Type: application/json' > 'http://localhost:8983/solr/techproducts/update?commit=true&openSearcher=false' > --data-binary '[{"id":"HOSS"}]' > ... > $ curl 'http://localhost:8983/solr/techproducts/admin/luke' > ... > {noformat} > When the "/admin/luke" URL is hit, this will show up in the logs – but the > luke request will finish correctly... > {noformat} > WARN - 2017-11-08 17:23:44.574; [ x:techproducts] > org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length > for [segments_2] > java.nio.file.NoSuchFileException: > /home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2 > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) > at > sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) > at > sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) > at java.nio.file.Files.readAttributes(Files.java:1737) > at java.nio.file.Files.size(Files.java:2332) > at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) > at > org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128) > at > org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611) > at > org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584) > at > 
org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177) > ... > INFO - 2017-11-08 17:23:44.587; [ x:techproducts] > org.apache.solr.core.SolrCore; [techproducts] webapp=/solr path=/admin/luke > params={} status=0 QTime=15 > {noformat}
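The fix direction described above — treat the missing segments file as expected between a commit and the next searcher, and log quietly instead of emitting a scary WARN with a stack trace — can be sketched in a few lines of toy Python (illustrative names only, not Solr's actual code):

```python
# Toy sketch of the fix direction (illustrative names, not Solr code):
# a missing segments file between a commit and the next searcher is
# normal, so report the size as unavailable and log at a low severity
# instead of a WARN with a full NoSuchFileException stack trace.
import logging
import os

log = logging.getLogger("LukeRequestHandler")

def file_length(path):
    try:
        return os.path.getsize(path)
    except FileNotFoundError:
        # The commit point may already have been superseded on disk;
        # this is expected, not index corruption.
        log.info("Length unavailable for %s (removed by a newer commit)", path)
        return -1
```

The caller can then report "size unavailable" for that entry and continue, which matches the observed behavior that the luke request itself finishes correctly.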
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686741#comment-16686741 ] Erick Erickson commented on SOLR-12632: --- Revealing my ignorance here, but just want to be sure. Adding docValues to a points-based field doesn't penalize sorting/faceting/grouping, right? You can tell I couldn't attend the committer's meeting at Activate.
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3091 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3091/ Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:36069/solr Stack Trace: java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:36069/solr at __randomizedtesting.SeedInfo.seed([1F059259EB055B5B:DEF5EBF5C65591FC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:36973/solr Stack Trace:
[GitHub] lucene-solr issue #495: LUCENE-8464: Implement ConstantScoreScorer#setMinCom...
Github user cbismuth commented on the issue: https://github.com/apache/lucene-solr/pull/495 Lucene tests are all green on my side; I'll launch the Solr ones later today. I'm not at ease with these two changes [here](https://github.com/apache/lucene-solr/pull/495/files#diff-4c736ed817068ac157cd867fa1dc6418R769) and [there](https://github.com/apache/lucene-solr/pull/495/files#diff-4c736ed817068ac157cd867fa1dc6418R847). I'm not sure I can _easily_ retrieve the score mode from the `CachingWrapperWeight` class, so I passed `ScoreMode.TOP_SCORE` so as not to prevent setting the min competitive score. What do you think?
[jira] [Commented] (SOLR-12881) Remove unneeded import statements
[ https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686665#comment-16686665 ] Erick Erickson commented on SOLR-12881: --- Sorry, this slipped off my radar. Let's table the bits about changing precommit; I'll try to get to this by next Monday at the latest (probably over the next couple of days, assuming Peter's patch applies cleanly). There's no good reason to let this go stale again. > Remove unneeded import statements > - > > Key: SOLR-12881 > URL: https://issues.apache.org/jira/browse/SOLR-12881 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (8.0) >Reporter: Peter Somogyi >Assignee: Erick Erickson >Priority: Trivial > Attachments: SOLR-12881.patch, SOLR-12881.patch, SOLR-12881.patch, > SOLR-12881.patch > > Time Spent: 10m > Remaining Estimate: 0h > > There are unnecessary import statements: > * import from java.lang > * import from same package > * unused import
[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues
[ https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686657#comment-16686657 ] Adrien Grand commented on LUCENE-8374: -- I'd really like to bake it into the codec and avoid computing the cache dynamically. If someone really needs this feature with the current codec, they could make a custom build that applies your patch? As far as reviewing is concerned, maybe it would help to split the improvements to skipping blocks and skipping over blocks into two different patches (or commits)? That might help dig test failures or performance issues in the future as well since git bisect would point to a smaller commit. > Reduce reads for sparse DocValues > - > > Key: LUCENE-8374 > URL: https://issues.apache.org/jira/browse/LUCENE-8374 > Project: Lucene - Core > Issue Type: Improvement > Components: core/codecs >Affects Versions: 7.5, master (8.0) >Reporter: Toke Eskildsen >Priority: Major > Labels: performance > Fix For: 7.6 > > Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, > LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, > LUCENE-8374_branch_7_3.patch, LUCENE-8374_branch_7_3.patch.20181005, > LUCENE-8374_branch_7_4.patch, LUCENE-8374_branch_7_5.patch, > entire_index_logs.txt, image-2018-10-24-07-30-06-663.png, > image-2018-10-24-07-30-56-962.png, single_vehicle_logs.txt, > start-2018-10-24-1_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png, > > start-2018-10-24_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png > > > The {{Lucene70DocValuesProducer}} has the internal classes > {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), > which again uses {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly > packed monotonically increasing list of docIDs: If the docIDs with > corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, > 1, 2]}}. > h2. Outer blocks > The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values > (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 > values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a > lot in size and ordinal resolving strategy. > When a sparse Numeric DocValue is needed, the code first locates the block > containing the wanted docID flag. It does so by iterating blocks one-by-one > until it reaches the needed one, where each iteration requires a lookup in > the underlying {{IndexSlice}}. For a common memory mapped index, this > translates to either a cached request or a read operation. If a segment has > 6M documents, worst-case is 91 lookups. In our web archive, our segments has > ~300M values: A worst-case of 4577 lookups! > One obvious solution is to use a lookup-table for blocks: A long[]-array with > an entry for each block. For 6M documents, that is < 1KB and would allow for > direct jumping (a single lookup) in all instances. Unfortunately this > lookup-table cannot be generated upfront when the writing of values is purely > streaming. It can be appended to the end of the stream before it is closed, > but without knowing the position of the lookup-table the reader cannot seek > to it. > One strategy for creating such a lookup-table would be to generate it during > reads and cache it for next lookup. This does not fit directly into how > {{IndexedDISI}} currently works (it is created anew for each invocation), but > could probably be added with a little work. An advantage to this is that this > does not change the underlying format and thus could be used with existing > indexes. > h2. 
The lookup structure inside each block > If {{ALL}} of the 2^16 values are defined, the structure is empty and the > ordinal is simply the requested docID with some modulo and multiply math. > Nothing to improve there. > If the block is {{DENSE}} (2^12 to 2^16 values are defined), a bitmap is used > and the number of set bits up to the wanted index (the docID modulo the block > origo) are counted. That bitmap is a long[1024], meaning that worst case is > to lookup and count all set bits for 1024 longs! > One known solution to this is to use a [rank > structure|https://en.wikipedia.org/wiki/Succinct_data_structure]. I > [implemented > it|https://github.com/tokee/lucene-solr/blob/solr5894/solr/core/src/java/org/apache/solr/search/sparse/count/plane/RankCache.java] > for a related project and with that, the rank-overhead for a {{DENSE}} > block would be long[32] and would ensure a maximum of 9 lookups. It is not > trivial to build the rank-structure and caching it
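To make the rank idea above concrete, here is a toy sketch in plain Python — not the IndexedDISI implementation; the names and the 8-word cache granularity are assumptions for illustration — showing how a small cumulative-popcount cache bounds the number of longs that must be scanned when resolving an ordinal inside a DENSE block:

```python
# Toy sketch of the rank idea for a DENSE block: 2^16 bits stored as
# 1024 64-bit words, plus a cache of cumulative set-bit counts taken
# at every 8th word, so a lookup scans at most 8 words instead of up
# to 1024. (Illustrative only; not Lucene's actual structure.)

WORDS_PER_RANK = 8  # rank-cache granularity (assumed for illustration)

def build_rank(words):
    """Cumulative popcount at the start of every 8-word group."""
    rank, total = [], 0
    for i, w in enumerate(words):
        if i % WORDS_PER_RANK == 0:
            rank.append(total)
        total += bin(w).count("1")
    return rank

def ordinal_in_block(words, rank, bit_index):
    """Count of set bits below bit_index, i.e. the ordinal offset."""
    word_i, bit_i = divmod(bit_index, 64)
    # Jump via the cache, then scan at most WORDS_PER_RANK - 1 words.
    count = rank[word_i // WORDS_PER_RANK]
    for i in range(word_i - word_i % WORDS_PER_RANK, word_i):
        count += bin(words[i]).count("1")
    # Finally count the bits below bit_i inside the target word.
    return count + bin(words[word_i] & ((1 << bit_i) - 1)).count("1")
```

The trade-off is exactly the one discussed in the issue: the cache costs a little extra space per block (here 128 entries for 1024 words) in exchange for a hard upper bound on the per-lookup work.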
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686656#comment-16686656 ] David Smiley commented on SOLR-12632: - Simple: add a copyField to a StrField and do lookups there. This isn't "friendly" but it's also just an optimization (it isn't essential).
[jira] [Updated] (SOLR-12881) Remove unneeded import statements
[ https://issues.apache.org/jira/browse/SOLR-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Somogyi updated SOLR-12881: - Attachment: SOLR-12881.patch
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686652#comment-16686652 ] Erick Erickson commented on SOLR-12632: --- [~dsmiley] But the whole point of keeping Trie fields was because of the performance slowdown for looking up individual terms. If we just remove Trie fields, what's our story on the performance hit?
[jira] [Commented] (LUCENE-8563) Remove k1+1 from the numerator of BM25Similarity
[ https://issues.apache.org/jira/browse/LUCENE-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686633#comment-16686633 ] Adrien Grand commented on LUCENE-8563: -- That would be great [~lucacavanna]. I suspect most of the work is going to be about fixing tests that rely on absolute score values. > Remove k1+1 from the numerator of BM25Similarity > - > > Key: LUCENE-8563 > URL: https://issues.apache.org/jira/browse/LUCENE-8563 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > Our current implementation of BM25 does > {code:java} > boost * IDF * (k1+1) * tf / (tf + norm) > {code} > As (k1+1) is a constant, it is the same for every term and doesn't modify > ordering. It is often omitted and I found out that the "The Probabilistic > Relevance Framework: BM25 and Beyond" paper by Robertson (BM25's author) and > Zaragoza even describes adding (k1+1) to the numerator as a variant whose > benefit is to be more comparable with Robertson/Sparck-Jones weighting, which > we don't care about. > {quote}A common variant is to add a (k1 + 1) component to the > numerator of the saturation function. This is the same for all > terms, and therefore does not affect the ranking produced. > The reason for including it was to make the final formula > more compatible with the RSJ weight used on its own > {quote} > Should we remove it from BM25Similarity as well? > A side-effect that I'm interested in is that integrating other score > contributions (eg. via oal.document.FeatureField) would be a bit easier to > reason about. For instance a weight of 3 in FeatureField#newSaturationQuery > would have a similar impact as a term whose IDF is 3 (and thus docFreq ~= 5%) > rather than a term whose IDF is 3/(k1 + 1).
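The claim that dropping (k1+1) leaves the ranking unchanged can be sanity-checked with a toy comparison — simplified score shapes and made-up tf/idf/norm values, not Lucene's BM25Similarity:

```python
# Toy check: bm25_with keeps the (k1+1) factor, bm25_without drops it.
# Since (k1+1) is the same constant for every term and document,
# bm25_with == (k1+1) * bm25_without, so document ordering is identical;
# only the absolute scores change scale.

def bm25_with(tf, idf, norm, k1=1.2, boost=1.0):
    return boost * idf * (k1 + 1) * tf / (tf + norm)

def bm25_without(tf, idf, norm, k1=1.2, boost=1.0):
    return boost * idf * tf / (tf + norm)

docs = [(3, 2.0, 1.1), (1, 2.0, 0.9), (7, 0.5, 1.4)]  # (tf, idf, norm)
rank_with = sorted(range(len(docs)), key=lambda i: -bm25_with(*docs[i]))
rank_without = sorted(range(len(docs)), key=lambda i: -bm25_without(*docs[i]))
assert rank_with == rank_without  # ordering preserved; only the scale differs
```

This also illustrates why the bulk of the work would be in tests: anything asserting absolute score values sees every score shrink by the constant factor (k1+1), even though every ranking assertion still holds.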
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 211 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/211/ 2 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at https://127.0.0.1:44434/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:44434/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([673FE55577206DEF:A588D93D74609D97]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) 
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:269) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686638#comment-16686638 ]

David Smiley commented on SOLR-12632:

IMO we should remove the old trie stuff for 8.0 whether or not SOLR-12074 happens. SOLR-12074 is a nice-to-have.

> Completely remove Trie fields
> -
>
> Key: SOLR-12632
> URL: https://issues.apache.org/jira/browse/SOLR-12632
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Steve Rowe
> Priority: Blocker
> Labels: numeric-tries-to-points
> Fix For: master (8.0)
>
>
> Trie fields were deprecated in Solr 7.0. We should remove them completely
> before we release Solr 8.0.
> Unresolved points-related issues:
> [https://jira.apache.org/jira/issues/?jql=project=SOLR+AND+labels=numeric-tries-to-points+AND+resolution=unresolved]

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12632) Completely remove Trie fields
[ https://issues.apache.org/jira/browse/SOLR-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686630#comment-16686630 ]

Cassandra Targett commented on SOLR-12632:

I recall a discussion about this issue among a few committers at Activate in October, where it was proposed that perhaps we should not try to completely remove Trie fields until the issue with the individual lookups (SOLR-11078) can be fixed (possibly with something like SOLR-12074). Anyone else recall that conversation, and possibly want to add any additional thoughts or agreement? If we agree as a community about punting this, we could unmark this as a blocker for 8.0.
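For context on what the removal discussed above means for users, the schema migration from Trie to Points is essentially a field-type swap. A hedged sketch of the before/after in a schema.xml (the field-type names are illustrative, not taken from any shipped schema; the real Solr class names are `solr.TrieIntField` and `solr.IntPointField`):

```xml
<!-- Deprecated since Solr 7.0: Trie-encoded numeric type -->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>

<!-- Points-based replacement; docValues="true" matters here because points,
     unlike Trie terms, have no indexed term to consult for retrieving a
     single document's value - the individual-lookup concern of SOLR-11078 -->
<fieldType name="pint" class="solr.IntPointField" docValues="true"/>
```

This is why SOLR-11078 is raised as a prerequisite in the comment above: the swap itself is mechanical, but lookup behavior differs between the two encodings.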
[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 35 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/35/

73 tests failed.

FAILED: org.apache.solr.OutputWriterTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.TestDocumentBuilder.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.TestJoin.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.client.solrj.embedded.TestJettySolrRunner.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AddReplicaTest
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.BasicZkTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.CleanupOldIndexTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.reflect.Method.getTypeParameters(Method.java:216)
        at java.lang.reflect.Executable.sharedToGenericString(Executable.java:148)
        at java.lang.reflect.Method.toGenericString(Method.java:415)
        at com.carrotsearch.randomizedtesting.ClassModel$1.compare(ClassModel.java:27)
        at com.carrotsearch.randomizedtesting.ClassModel$1.compare(ClassModel.java:23)
        at java.util.TimSort.binarySort(TimSort.java:296)
        at java.util.TimSort.sort(TimSort.java:221)
        at java.util.Arrays.sort(Arrays.java:1438)
        at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:216)
        at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:212)
        at com.carrotsearch.randomizedtesting.ClassModel$ModelBuilder.build(ClassModel.java:85)
        at com.carrotsearch.randomizedtesting.ClassModel.methodsModel(ClassModel.java:224)
        at com.carrotsearch.randomizedtesting.ClassModel.<init>(ClassModel.java:207)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.<init>(RandomizedRunner.java:323)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.junit.internal.builders.AnnotatedBuilder.buildRunner(AnnotatedBuilder.java:31)
        at org.junit.internal.builders.AnnotatedBuilder.runnerForClass(AnnotatedBuilder.java:24)
        at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
        at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
        at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
        at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
        at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:258)
        at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:394)
        at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13)

FAILED: org.apache.solr.cloud.DeleteInactiveReplicaTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.DistributedQueueTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.MigrateRouteKeyTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.OverseerRolesTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.ReplaceNodeTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit exceeded

FAILED: org.apache.solr.cloud.SliceStateTest.initializationError
Error Message: GC overhead limit exceeded
Stack Trace: java.lang.OutOfMemoryError: GC overhead limit
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 23208 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23208/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

6 tests failed.

FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message: Expected new active leader null Live Nodes: [127.0.0.1:35533_solr, 127.0.0.1:38693_solr, 127.0.0.1:43383_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:43993/solr", "node_name":"127.0.0.1:43993_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:43993/solr", "node_name":"127.0.0.1:43993_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:38693/solr", "node_name":"127.0.0.1:38693_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:35533_solr, 127.0.0.1:38693_solr, 127.0.0.1:43383_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:43993/solr", "node_name":"127.0.0.1:43993_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:43993/solr", "node_name":"127.0.0.1:43993_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:38693/solr", "node_name":"127.0.0.1:38693_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"}
        at __randomizedtesting.SeedInfo.seed([C0210725611D1AA1:AA3766F509EF506B]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
        at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
        at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at
[GitHub] lucene-solr issue #497: LUCENE-8026: ExitableDirectoryReader does not instru...
Github user jpountz commented on the issue:

    https://github.com/apache/lucene-solr/pull/497

    `ant precommit` is complaining about some missing javadocs; could you address this? (The solution to some of these issues might be to make classes private rather than adding docs.)
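On the "make classes private rather than adding docs" suggestion: a minimal, hypothetical sketch (class names are invented, not taken from PR #497) of the distinction involved, assuming the javadoc lint only inspects the public API surface. Reducing a helper class to package-private visibility takes it out of that surface, so no doc comment is demanded for it:

```java
// Illustrates public vs package-private visibility, the property the
// javadoc-lint distinction hinges on. Names here are hypothetical.
import java.lang.reflect.Modifier;

public class VisibilityDemo {
    /** Part of the public API: doc-lint expects a javadoc comment here. */
    public static class DocumentedPublic {}

    // Package-private: not part of the exported API, so a public-javadoc
    // check has nothing to flag on it.
    static class InternalHelper {}

    public static void main(String[] args) {
        System.out.println(Modifier.isPublic(DocumentedPublic.class.getModifiers()));
        System.out.println(Modifier.isPublic(InternalHelper.class.getModifiers()));
    }
}
```

Running it prints `true` for the public class and `false` for the package-private one, confirming which of the two a public-API check would even see.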