[jira] [Comment Edited] (SOLR-11522) Suggestions/recommendations to rebalance replicas
[ https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647457#comment-16647457 ] Noble Paul edited comment on SOLR-11522 at 10/12/18 6:20 AM:
-
Ideally, it should just be {{get(..)}}, but as this class is implemented by a million other classes, there is likely to be a conflict, so I went with {{_get(..)}}.

The broad philosophy is that Solr has embraced JSON for everything (even some of the JUnit tests are driven by JSON). We need a cheap in-memory representation of JSON which:
* is memory efficient, with as little overhead as possible
* is streaming, using as little memory as possible
* can be serialized to any of the supported outputs (including Javabin) with ease
* has no concrete classes; hence the {{MapWriter}} and {{IteratorWriter}} interfaces

If a class implements {{MapWriter}}, it is most likely a deeply nested object. The most common operation you can perform on such an object is to query it with a path like {{a/b/c[4]/d}}. Looking up a {{MapWriter}} is usually an {{O(n)}} operation, and no new objects are created in the process, so it is pretty fast. The best part is the readability it offers our tests. Using a string path is OK for JUnit tests, but it leads to the creation of unnecessary objects; in other places we can't afford to create new {{String}} objects. That's why I created an equivalent method, {{get(List)}}, which doesn't create new objects. The alternative was a {{Utils#getObjectByPath()}} method, which was definitely ugly.

> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
> Issue Type: Sub-task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: AutoScaling
> Reporter: Noble Paul
> Priority: Major
>
> It is possible that a cluster is unbalanced even if it is not breaking any of the policy rules. Some nodes may have very little load while some others may be heavily loaded. So, it is possible to move replicas around so that the load is more evenly distributed. This is going to be driven by preferences. The way we arrive at these suggestions is going to be as follows:
> # Sort the nodes according to the given preferences
> # Choose a replica from the most loaded node ({{source-node}})
> # Try adding it to the least loaded node ({{target-node}})
> # See if it breaks any policy rules.
> If yes, try another {{target-node}} (go to #3)
> # If no policy rules are being broken, present this as a {{suggestion}}. The suggestion contains the following information:
> #* The {{source-node}} and {{target-node}} names
> #* The actual v2 command that can be run to effect the operation
> # Go to step #1
> # Do this until replicas can no longer be moved without making the {{target-node}} more loaded than the {{source-node}}
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
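The numbered steps in the issue description can be sketched as a small loop. Everything below ({{Node}}, the replica count standing in for "load", and the simplified policy check) is a hypothetical stand-in for illustration, not Solr's actual autoscaling API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the suggestion loop described in the issue.
// Node, the load metric, and the policy check are stand-ins, not Solr's API.
public class RebalanceSketch {

    // A node with a name and a replica count standing in for "load".
    public record Node(String name, int replicas) {}

    // Stand-in policy check: forbid any move that would leave the target
    // more loaded than the source (the loop's termination condition).
    public static boolean violatesPolicy(Node source, Node target) {
        return target.replicas() + 1 > source.replicas() - 1;
    }

    // Repeatedly suggest moving one replica from the most loaded node to
    // the least loaded node until no such move passes the policy check.
    public static List<String> suggest(Map<String, Integer> load) {
        List<Node> nodes = new ArrayList<>();
        load.forEach((name, replicas) -> nodes.add(new Node(name, replicas)));
        List<String> suggestions = new ArrayList<>();
        if (nodes.size() < 2) return suggestions;
        while (true) {
            nodes.sort(Comparator.comparingInt(Node::replicas)); // step 1
            Node target = nodes.get(0);                  // least loaded
            Node source = nodes.get(nodes.size() - 1);   // most loaded
            if (violatesPolicy(source, target)) break;   // steps 4 and 7
            suggestions.add("move replica: " + source.name() + " -> " + target.name());
            nodes.set(0, new Node(target.name(), target.replicas() + 1));
            nodes.set(nodes.size() - 1, new Node(source.name(), source.replicas() - 1));
        }
        return suggestions;
    }

    public static void main(String[] args) {
        Map<String, Integer> load = new LinkedHashMap<>();
        load.put("nodeA", 4);
        load.put("nodeB", 0);
        System.out.println(suggest(load)); // two moves from nodeA to nodeB
    }
}
```

The real implementation must also try alternative target nodes when a candidate move violates policy; this sketch only models the termination condition.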
[JENKINS-EA] Lucene-Solr-BadApples-master-Linux (64bit/jdk-12-ea+12) - Build # 105 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/105/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 36 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([79FE1ABC6A3093FA]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest Error Message: Collection not found: dv_coll Stack Trace: org.apache.solr.common.SolrException: Collection not found: dv_coll at __randomizedtesting.SeedInfo.seed([79FE1ABC6A3093FA]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) a
[JENKINS] Lucene-Solr-repro - Build # 1672 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1672/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/178/consoleText [repro] Revision: 42ac07d11b9735df6dace64bf751ce528c0d01c8 [repro] Repro line: ant test -Dtestcase=SolrRrdBackendFactoryTest -Dtests.method=testBasic -Dtests.seed=4F000BFBCB779D63 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PR -Dtests.timezone=America/St_Thomas -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=TestCollectionStateWatchers -Dtests.method=testSimpleCollectionWatch -Dtests.seed=D866D20FD3644EB3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk-MK -Dtests.timezone=Etc/GMT-2 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 42ac07d11b9735df6dace64bf751ce528c0d01c8 [repro] git fetch [repro] git checkout 42ac07d11b9735df6dace64bf751ce528c0d01c8 [...truncated 1 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] TestCollectionStateWatchers [repro]solr/core [repro] SolrRrdBackendFactoryTest [repro] ant compile-test [...truncated 2560 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror -Dtests.seed=D866D20FD3644EB3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk-MK -Dtests.timezone=Etc/GMT-2 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 308 lines...] [repro] Setting last failure code to 256 [repro] ant compile-test [...truncated 1352 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.SolrRrdBackendFactoryTest" -Dtests.showOutput=onerror -Dtests.seed=4F000BFBCB779D63 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PR -Dtests.timezone=America/St_Thomas -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 68 lines...] [repro] Failures: [repro] 0/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest [repro] 5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers [repro] Re-testing 100% failures at the tip of master [repro] git fetch [repro] git checkout master [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] TestCollectionStateWatchers [repro] ant compile-test [...truncated 2452 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror -Dtests.seed=D866D20FD3644EB3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk-MK -Dtests.timezone=Etc/GMT-2 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 2147 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master: [repro] 2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers [repro] git checkout 42ac07d11b9735df6dace64bf751ce528c0d01c8 [...truncated 8 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas
[ https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647457#comment-16647457 ] Noble Paul commented on SOLR-11522:
---
Ideally, it should just be {{get(..)}}, but as this class is implemented by a million other classes, there is likely to be a conflict, so I went with {{_get(..)}}.

The broad philosophy is that Solr has embraced JSON for everything (even some of the JUnit tests are driven by JSON). We need a cheap in-memory representation of JSON which:
* is memory efficient, with as little overhead as possible
* is streaming, using as little memory as possible
* can be serialized to any of the supported outputs (including Javabin) with ease
* has no concrete classes; hence the {{MapWriter}} and {{IteratorWriter}} interfaces

If a class implements {{MapWriter}}, it is most likely a deeply nested object. The most common operation you can perform on such an object is to query it with a path like {{a/b/c[4]/d}}. Looking up a {{MapWriter}} is usually an {{O(n)}} operation, and no new objects are created in the process, so it is pretty fast. The best part is the readability it offers our tests. Using a string path is OK for JUnit tests, but it leads to the creation of unnecessary objects; in other places we can't afford to create new {{String}} objects. That's why I created an equivalent method, {{get(List)}}, which doesn't create new objects. The alternative was a {{Utils#getObjectByPath()}} method, which was definitely ugly.
[jira] [Commented] (SOLR-11522) Suggestions/recommendations to rebalance replicas
[ https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647414#comment-16647414 ] David Smiley commented on SOLR-11522:
---
In this and in other commits, like SOLR-12792 and SOLR-12824, you've introduced default methods to MapWriter that seem fishy to me, in particular the underscore methods like {{_get(...)}}. This approach doesn't sit well with me. I can understand why you might have added the underscore, but I think the desire to do so should be a sign to reconsider the approach altogether. Methods should be added to MapWriter with care (the same goes for SolrParams and NamedList). [~noble.paul], can you please explain your thought process here? What's the next best alternative?
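For context, the kind of path lookup being debated in these comments — walking a nested JSON-like structure with a pre-split list of path segments so that no new {{String}} objects are allocated per lookup — can be sketched as below. This is a standalone illustration, not Solr's actual {{MapWriter._get(...)}} implementation, and it uses a bare numeric segment instead of the {{c[4]}} bracket syntax for simplicity:

```java
import java.util.List;
import java.util.Map;

// Standalone sketch of a get(List<String>)-style path lookup over nested
// maps and lists, in the spirit of MapWriter._get(...); not Solr's code.
public class PathLookupSketch {

    // Walk the structure one segment at a time. Taking a pre-built List
    // avoids splitting an "a/b/c" String (and allocating substrings) on
    // every call.
    @SuppressWarnings("unchecked")
    public static Object get(Object root, List<String> path) {
        Object current = root;
        for (String segment : path) {
            if (current instanceof Map) {
                current = ((Map<String, Object>) current).get(segment);
            } else if (current instanceof List) {
                current = ((List<Object>) current).get(Integer.parseInt(segment));
            } else {
                return null; // path goes deeper than the structure
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> json = Map.of(
            "a", Map.of("b", List.of("x", Map.of("d", 42))));
        // Roughly the equivalent of the path a/b/[1]/d
        System.out.println(get(json, List.of("a", "b", "1", "d")));
    }
}
```

The caller can build the segment list once and reuse it across many lookups, which is the allocation saving the comment describes.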
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2898 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2898/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.uninverting.TestFieldCacheVsDocValues Error Message: The test or suite printed 10392 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true Stack Trace: java.lang.AssertionError: The test or suite printed 10392 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true at __randomizedtesting.SeedInfo.seed([6201C07D6BF83F93]:0) at org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 13277 lines...] [junit4] Suite: org.apache.solr.uninverting.TestFieldCacheVsDocValues [junit4] 2> 1730945 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! 
instance=2549416 [junit4] 2> 1730946 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=26022248 [junit4] 2> 1730947 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=30678910 [junit4] 2> 1730947 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=26588699 [junit4] 2> 1730947 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=596455 [junit4] 2> 1730947 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=21513695 [junit4] 2> 1730948 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=13534887 [junit4] 2> 1730948 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=9558795 [junit4] 2> 1730948 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=4662623 [junit4] 2> 1730949 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=24962485 [junit4] 2> 1730949 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=21344387 [junit4] 2> 1730949 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! 
instance=21918307 [junit4] 2> 1730949 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=13025992 [junit4] 2> 1730949 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=15844008 [junit4] 2> 1730950 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=6208477 [junit4] 2> 1730950 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContainer was not close prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=15798623 [junit4] 2> 1730950 ERROR (Finalizer) [] o.a.s.c.CoreContainer CoreContai
[jira] [Comment Edited] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646719#comment-16646719 ] Kamuela Lau edited comment on SOLR-12367 at 10/12/18 3:09 AM:
--
A similar ClassCastException may show up for other LTRScoringModels, such as NeuralNetworkModel (in particular, for the matrix/bias values of its layers). As of right now, the ambiguous message for the CCE will only change for LinearModel.

> When adding a model referencing a non-existent feature the error message is very ambiguous
> --
>
> Key: SOLR-12367
> URL: https://issues.apache.org/jira/browse/SOLR-12367
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: contrib - LTR
> Affects Versions: 7.3.1
> Reporter: Georg Sorst
> Priority: Minor
> Attachments: SOLR-12367.patch, SOLR-12367.patch, SOLR-12367.patch
>
> When adding a model that references a non-existent feature a very ambiguous error message is thrown, something like "Model type does not exist org.apache.solr.ltr.model.LinearModel".
>
> To reproduce, do not add any features and just add a model, for example by doing this:
>
> {code}
> curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' --data-binary '{
>   "class": "org.apache.solr.ltr.model.LinearModel",
>   "name": "myModel",
>   "features": [ {"name": "whatever"} ],
>   "params": {"weights": {"whatever": 1.0}}
> }' -H 'Content-type:application/json'
> {code}
>
> The resulting error message "Model type does not exist org.apache.solr.ltr.model.LinearModel" is extremely misleading and took me a while to figure out the actual cause.
>
> A more suitable error message should probably indicate the name of the missing feature that the model is trying to reference.
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
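The fix the issue asks for amounts to checking every feature a model references against the feature store up front, and naming the missing feature in the error. A minimal sketch of that idea, with hypothetical class and method names (this is not the SOLR-12367 patch itself):

```java
import java.util.List;
import java.util.Set;

// Sketch: validate that every feature a model references exists in the
// feature store before the model is accepted, and name the missing feature
// in the error message. Hypothetical names, not the actual patch.
public class ModelValidationSketch {

    public static void validate(String modelName,
                                List<String> referencedFeatures,
                                Set<String> featureStore) {
        for (String feature : referencedFeatures) {
            if (!featureStore.contains(feature)) {
                // Point at the actual cause instead of a generic
                // "Model type does not exist" message.
                throw new IllegalArgumentException(
                    "Model '" + modelName + "' references unknown feature '"
                        + feature + "'");
            }
        }
    }

    public static void main(String[] args) {
        try {
            // Empty feature store, as in the reproduction steps above.
            validate("myModel", List.of("whatever"), Set.of());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```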
[jira] [Commented] (SOLR-12780) Add support for Leaky ReLU and TanH activations in LTR contrib module
[ https://issues.apache.org/jira/browse/SOLR-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647342#comment-16647342 ] Kamuela Lau commented on SOLR-12780:
---
Thanks [~cpoerschke] for the comment and the updated patch! I had no idea that Math.tanh exists in Java itself; intriguing indeed! I think you are right: there are fewer Math.exp calls, and the use of Math.tanh is also more succinct and easier to understand.

> Add support for Leaky ReLU and TanH activations in LTR contrib module
> -
>
> Key: SOLR-12780
> URL: https://issues.apache.org/jira/browse/SOLR-12780
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Components: contrib - LTR
> Reporter: Kamuela Lau
> Priority: Minor
> Labels: ltr
> Attachments: SOLR-12780.patch, SOLR-12780.patch
>
> Add support for Leaky ReLU and TanH activation functions in NeuralNetworkModel.
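For reference, the two tanh formulations being compared in the comment — tanh written out with {{Math.exp}} versus the built-in {{Math.tanh}} — together with a leaky ReLU, can be illustrated as plain static methods (this is an illustration of the math, not the contrib module's NeuralNetworkModel code):

```java
// Activation functions discussed above, as plain static methods.
public class ActivationSketch {

    // tanh written out with exponentials: (e^x - e^-x) / (e^x + e^-x).
    // Needs two Math.exp calls per evaluation.
    public static double tanhViaExp(double x) {
        double ePos = Math.exp(x);
        double eNeg = Math.exp(-x);
        return (ePos - eNeg) / (ePos + eNeg);
    }

    // The built-in is shorter and avoids the explicit Math.exp calls.
    public static double tanhBuiltin(double x) {
        return Math.tanh(x);
    }

    // Leaky ReLU: pass positives through, scale negatives by a small slope.
    public static double leakyRelu(double x, double negativeSlope) {
        return x >= 0 ? x : negativeSlope * x;
    }

    public static void main(String[] args) {
        System.out.println(tanhViaExp(0.5));        // approx 0.4621
        System.out.println(tanhBuiltin(0.5));       // same value
        System.out.println(leakyRelu(-2.0, 0.01));  // approx -0.02
    }
}
```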
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 856 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/856/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime Error Message: Error from server at http://127.0.0.1:46721/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:46721/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. 
Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([CC9DF8D9884B5144:22458348C63484D7]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesR
[JENKINS] Lucene-Solr-Tests-7.x - Build # 942 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/942/ 2 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at https://127.0.0.1:33072/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:33072/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([E13A05D9D753B825:238D39B1D413485D]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
[JENKINS] Lucene-Solr-repro - Build # 1671 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1671/ [...truncated 31 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/941/consoleText [repro] Revision: 1565df5092a89c68425ffb2eb9eafb43758e9344 [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=raceConditionOnDeleteAndRegisterReplica -Dtests.seed=C4BA40BF8B39862 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=et-EE -Dtests.timezone=America/Thunder_Bay -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=TestSimPolicyCloud -Dtests.method=testCreateCollectionAddReplica -Dtests.seed=C4BA40BF8B39862 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mt -Dtests.timezone=Asia/Kolkata -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 42ac07d11b9735df6dace64bf751ce528c0d01c8 [repro] git fetch [repro] git checkout 1565df5092a89c68425ffb2eb9eafb43758e9344 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestSimPolicyCloud [repro] DeleteReplicaTest [repro] ant compile-test [...truncated 3437 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.TestSimPolicyCloud|*.DeleteReplicaTest" -Dtests.showOutput=onerror -Dtests.seed=C4BA40BF8B39862 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mt -Dtests.timezone=Asia/Kolkata -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 27345 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud [repro] 2/5 failed: org.apache.solr.cloud.DeleteReplicaTest [repro] git checkout 42ac07d11b9735df6dace64bf751ce528c0d01c8 [...truncated 2 lines...] 
[repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2897 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2897/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:42315/solr Stack Trace: java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:42315/solr at __randomizedtesting.SeedInfo.seed([8FA54C120CF2E1BE:4E5535BE21A22B19]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:901) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:566) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 7 in https://127.0.0.1:39471/solr Stack Trace: java.lang.AssertionError:
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 178 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/178/ 2 tests failed. FAILED: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic Error Message: {} expected:<1> but was:<0> Stack Trace: java.lang.AssertionError: {} expected:<1> but was:<0> at __randomizedtesting.SeedInfo.seed([4F000BFBCB779D63:E4FA16EE14AB1B4D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic(SolrRrdBackendFactoryTest.java:92) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch Error Message: CollectionStateWatcher was never notified of cluster change Stack Trace: java.lang.AssertionError: CollectionStat
[jira] [Updated] (SOLR-12854) Document steps to improve delta import via DataImportHandler
[ https://issues.apache.org/jira/browse/SOLR-12854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12854: Description: Delta imports in DataImportHandler are sometimes slower than full imports, because a delta import issues many more queries than a full import and is therefore more time-consuming. Listed in: https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport In the mailing list thread http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html one of the Solr users noted a workaround which works well and improves delta import performance: specify ${dataimporter.last_index_time} in the delta_import_query, not in the delta_query.
{code}
I found a hacky way to limit the number of times deltaImportQuery was executed.
As designed, Solr executes deltaQuery to get a list of ids that need to be indexed. For each of those, it executes deltaImportQuery, which is typically very similar to the full query.
I constructed a deltaQuery to purposely only return 1 row. E.g.
deltaQuery = "SELECT id FROM table WHERE rownum=1" // written for oracle, likely requires a different syntax for other dbs
Also, it occurred to me you could probably include the date >= '${dataimporter.last_index_time}' filter here, so this returns 0 rows if no data has changed.
Since deltaImportQuery now only gets called once, I needed to add the filter logic to deltaImportQuery to only select the changed rows (that logic is normally in deltaQuery). E.g.
deltaImportQuery = [normal import query] WHERE date >= '${dataimporter.last_index_time}'
{code}
A number of other users have adopted this strategy and DIH delta import performance has improved, so documenting this strategy as a TIP will help other users too.
was: Delta imports in DataImportHandler is sometimes slower than full imports where the delta import makes multiple queries compare to full import and hence making it time complex. Listed in: https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport In the mailing list; http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html one of the Solr users have noted a workaround which works perfectly and improves delta import performance, where we need to specify ${dataimporter.last_index_time} in the delta_import_query, and not delta_sql_query. {code} I found a hacky way to limit the number of times deltaImportQuery was executed. As designed, solr executes deltaQuery to get a list of ids that need to be indexed. For each of those, it executes deltaImportQuery, which is typically very similar to the full query. I constructed a deltaQuery to purposely only return 1 row. E.g. deltaQuery = "SELECT id FROM table WHERE rownum=1" // written for oracle, likely requires a different syntax for other dbs. Also, it occurred to you could probably include the date>= '${dataimporter.last_index_time}' filter here so this returns 0 rows if no data has changed Since deltaImportQuery now *only gets called once I needed to add the filter logic to *deltaImportQuery *to only select the changed rows (that logic is normally in *deltaQuery). E.g. deltaImportQuery = [normal import query] WHERE date >= '${dataimporter.last_index_time}' {code} A number of other users have adopted the strategy and DIH delta import performance has improved, and henceforth documenting this strategy as TIP will help other users too. > Document steps to improve delta import via DataImportHandler > - > > Key: SOLR-12854 > URL: https://issues.apache.org/jira/browse/SOLR-12854 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5 >Reporter: Amrit Sarkar >Priority: Major > > Delta imports in DataImportHandler is sometimes slower than full imports > where the delta import makes multiple queries compare to full import and > hence making it time complex. Listed in: > https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport > In the mailing list; > http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html > one of the Solr users have noted a workaround which works perfectly and > improves delta import performance, where we need to specify > ${dataimporter.last_index_time} in the delta_import_query, and not > delta_query. > {code} > I found a hacky way to limit the number of > times deltaImportQuery was executed. > As designed, solr executes deltaQuery to get a list of ids that need to be > indexed. For each of those, it executes deltaImportQuery, which is typically > very similar
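The SOLR-12854 workaround above can be sketched as a DIH data-config entity. This is a hypothetical fragment, not the issue's actual configuration: the table name and column names (id, title, date), the JDBC connection details, and the Oracle-flavored rownum syntax are illustrative assumptions following the quoted mailing-list comment.

{code}
<dataConfig>
  <!-- Hypothetical JDBC source; driver/url/credentials are placeholders. -->
  <dataSource driver="oracle.jdbc.OracleDriver"
              url="jdbc:oracle:thin:@//dbhost:1521/XE"
              user="solr" password="secret"/>
  <document>
    <entity name="item" pk="id"
            query="SELECT id, title, date FROM table"
            deltaQuery="SELECT id FROM table
                        WHERE rownum = 1
                        AND date >= '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT id, title, date FROM table
                              WHERE date >= '${dataimporter.last_index_time}'"/>
  </document>
</dataConfig>
{code}

Because deltaQuery returns at most one row (and zero rows when nothing has changed), deltaImportQuery runs at most once per delta import; the actual change filter lives in deltaImportQuery itself, so that single execution still selects every changed row.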
[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas
[ https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647111#comment-16647111 ] Steve Rowe commented on SOLR-12739: --- Another failure, reproduces well enough for me on master to run {{git bisect}} and locate the first failing commit ({{dbed8baf}} on this issue) when I leave off the {{-Dtests.method}} cmdline param. From [https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/183/]: {noformat} Checking out Revision d921fe50e9bfcace5253a27e69d2e91f3eccc172 (refs/remotes/origin/branch_7x) [...] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestCollectionStateWatchers -Dtests.method=testSimpleCollectionWatch -Dtests.seed=466EB2632B3B0B17 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=Africa/Lagos -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 33.2s J2 | TestCollectionStateWatchers.testSimpleCollectionWatch <<< [junit4]> Throwable #1: java.lang.AssertionError: CollectionStateWatcher was never notified of cluster change [junit4]>at __randomizedtesting.SeedInfo.seed([466EB2632B3B0B17:1B557D136C369429]:0) [junit4]>at org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:141) [junit4]>at java.lang.Thread.run(Thread.java:748) [...] 
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=815, maxMBSortInHeap=5.42575346479253, sim=RandomSimilarity(queryNorm=true): {}, locale=is-IS, timezone=Africa/Lagos [junit4] 2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 1.8.0_172 (64-bit)/cpus=4,threads=1,free=74336440,total=429391872 {noformat} > Make autoscaling policy based replica placement the default strategy for > placing replicas > - > > Key: SOLR-12739 > URL: https://issues.apache.org/jira/browse/SOLR-12739 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, > SOLR-12739.patch, SOLR-12739.patch > > > Today the default placement strategy is the same one used since Solr 4.x > which is to select nodes on a round robin fashion. I propose to make the > autoscaling policy based replica placement as the default policy for placing > replicas. > This is related to SOLR-12648 where even though we have default cluster > preferences, we don't use them unless a policy is also configured. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 1670 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1670/ [...truncated 31 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2866/consoleText [repro] Revision: a0bb5017722ce698fc390f3990243697341d2b8d [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testRouting -Dtests.seed=939F7A6612DD48C3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ko-KR -Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testParallelUpdateQTime -Dtests.seed=939F7A6612DD48C3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ko-KR -Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 42ac07d11b9735df6dace64bf751ce528c0d01c8 [repro] git fetch [repro] git checkout a0bb5017722ce698fc390f3990243697341d2b8d [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] CloudSolrClientTest [repro] ant compile-test [...truncated 2560 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror -Dtests.seed=939F7A6612DD48C3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ko-KR -Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 1070 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest [repro] git checkout 42ac07d11b9735df6dace64bf751ce528c0d01c8 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] 
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23012 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23012/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC 5 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:34139_solr, 127.0.0.1:40567_solr, 127.0.0.1:41743_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_false_shard1_replica_n1", "base_url":"https://127.0.0.1:43329/solr";, "node_name":"127.0.0.1:43329_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"https://127.0.0.1:43329/solr";, "node_name":"127.0.0.1:43329_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:34139_solr, 127.0.0.1:40567_solr, 127.0.0.1:41743_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_false_shard1_replica_n1", "base_url":"https://127.0.0.1:43329/solr";, "node_name":"127.0.0.1:43329_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"https://127.0.0.1:43329/solr";, "node_name":"127.0.0.1:43329_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([F4C77EF8B90D35E1:9ED11F28D1FF7F2B]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.Randomize
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646948#comment-16646948 ] Dawid Weiss commented on SOLR-12852: Thanks Markus. I'm out of office tomorrow, but I'll return to it and try to fix it properly (including the test). > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Assignee: Dawid Weiss >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters; XML element names reconstructed, as they were stripped in the email rendering)
> {code}
> <searchComponent name="clustering" class="solr.clustering.ClusteringComponent">
>   <lst name="engine">
>     <str name="name">lingo</str>
>     <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
>   </lst>
>   <lst name="engine">
>     <str name="name">stc</str>
>     <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
>   </lst>
> </searchComponent>
> <requestHandler name="/clustering" class="solr.SearchHandler">
>   <lst name="defaults">
>     <bool name="clustering">true</bool>
>     <bool name="clustering.results">true</bool>
>     <str name="carrot.url">id</str>
>     <str name="carrot.title">title_nl</str>
>     <str name="carrot.snippet">content_nl</str>
>     <str name="rows">100</str>
>     <str name="fl">*,score</str>
>   </lst>
>   <arr name="last-components">
>     <str>clustering</str>
>   </arr>
> </requestHandler>
> {code}
> using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries
are present, Solr no longer complains about missing classes, > instead i got this.
[jira] [Comment Edited] (SOLR-12857) Dataimport status screen in UI doesn't work in Internet Explorer 11
[ https://issues.apache.org/jira/browse/SOLR-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646941#comment-16646941 ] Shawn Heisey edited comment on SOLR-12857 at 10/11/18 7:20 PM: --- Firefox is the developer edition, version 63.0b13 (64-bit). Chrome is Version 69.0.3497.100 (Official Build) (64-bit) IE is Version: 11.345.17134.0 Edge versions: Microsoft Edge 42.17134.1.0 Microsoft EdgeHTML 17.17134 Windows 10 is also current on its updates. The machine has not yet been selected to receive the big October 2018 update. was (Author: elyograg): Chrome and Firefox are running the most current versions. Windows 10 is also current on its updates. The machine has not yet been selected to receive the big October 2018 update. > Dataimport status screen in UI doesn't work in Internet Explorer 11 > --- > > Key: SOLR-12857 > URL: https://issues.apache.org/jira/browse/SOLR-12857 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 7.5 >Reporter: Shawn Heisey >Priority: Major > Attachments: solr-admin-ui-firefox-dev-dataimport-status-works.png, > solr-admin-ui-ie11-dataimport-status-no-work.png > > > Got a report via IRC that the dataimport screen was not showing the import > status. Accessing the API directly is working for the user. > Fired up the dih example on 7.5.0, checked it. Everything looked good, so I > thought I would check multiple browsers. All is good in current versions of > Firefox Developer Edition, Chrome, and Edge. But in Internet Explorer > (version 11 on Windows 10), it says "Last Update: unknown". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12857) Dataimport status screen in UI doesn't work in Internet Explorer 11
[ https://issues.apache.org/jira/browse/SOLR-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646941#comment-16646941 ] Shawn Heisey commented on SOLR-12857: - Chrome and Firefox are running the most current versions. Windows 10 is also current on its updates. The machine has not yet been selected to receive the big October 2018 update.
[jira] [Commented] (SOLR-12857) Dataimport status screen in UI doesn't work in Internet Explorer 11
[ https://issues.apache.org/jira/browse/SOLR-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646936#comment-16646936 ] Shawn Heisey commented on SOLR-12857: - Attached screenshots, one from Firefox Developer Edition, the other from IE11.
[jira] [Created] (SOLR-12857) Dataimport status screen in UI doesn't work in Internet Explorer 11
Shawn Heisey created SOLR-12857: --- Summary: Dataimport status screen in UI doesn't work in Internet Explorer 11 Key: SOLR-12857 URL: https://issues.apache.org/jira/browse/SOLR-12857 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Admin UI Affects Versions: 7.5 Reporter: Shawn Heisey Attachments: solr-admin-ui-firefox-dev-dataimport-status-works.png, solr-admin-ui-ie11-dataimport-status-no-work.png Got a report via IRC that the dataimport screen was not showing the import status. Accessing the API directly is working for the user. Fired up the dih example on 7.5.0, checked it. Everything looked good, so I thought I would check multiple browsers. All is good in current versions of Firefox Developer Edition, Chrome, and Edge. But in Internet Explorer (version 11 on Windows 10), it says "Last Update: unknown".
[jira] [Updated] (SOLR-12857) Dataimport status screen in UI doesn't work in Internet Explorer 11
[ https://issues.apache.org/jira/browse/SOLR-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-12857: Attachment: solr-admin-ui-ie11-dataimport-status-no-work.png solr-admin-ui-firefox-dev-dataimport-status-works.png
[JENKINS] Lucene-Solr-repro - Build # 1669 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1669/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/183/consoleText [repro] Revision: d921fe50e9bfcace5253a27e69d2e91f3eccc172 [repro] Repro line: ant test -Dtestcase=HdfsRecoveryZkTest -Dtests.seed=BE5155F3D629A536 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sq -Dtests.timezone=America/Aruba -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=TestCollectionStateWatchers -Dtests.method=testSimpleCollectionWatch -Dtests.seed=466EB2632B3B0B17 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=Africa/Lagos -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 42ac07d11b9735df6dace64bf751ce528c0d01c8 [repro] git fetch [repro] git checkout d921fe50e9bfcace5253a27e69d2e91f3eccc172 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] HdfsRecoveryZkTest [repro]solr/solrj [repro] TestCollectionStateWatchers [repro] ant compile-test [...truncated 3437 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.HdfsRecoveryZkTest" -Dtests.showOutput=onerror -Dtests.seed=BE5155F3D629A536 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sq -Dtests.timezone=America/Aruba -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 74 lines...] [repro] ant compile-test [...truncated 454 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror -Dtests.seed=466EB2632B3B0B17 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=Africa/Lagos -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 309 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest [repro] 5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers [repro] Re-testing 100% failures at the tip of branch_7x [repro] git fetch [...truncated 2 lines...] [repro] git checkout branch_7x [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 13 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] TestCollectionStateWatchers [repro] ant compile-test [...truncated 2573 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror -Dtests.seed=466EB2632B3B0B17 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=Africa/Lagos -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 302 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x: [repro] 5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers [repro] Re-testing 100% failures at the tip of branch_7x without a seed [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] TestCollectionStateWatchers [repro] ant compile-test [...truncated 2573 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=Africa/Lagos -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 291 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x without a seed: [repro] 4/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers [repro] git checkout 42ac07d11b9735df6dace64bf751ce528c0d01c8 [...truncated 8 lines...] [repro] Exiting with code 256 [...truncated 6 lines...]
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646915#comment-16646915 ] Markus Jelsma commented on SOLR-12852: -- Hello David, We have regular SolrCloud collections using various numbers of shards. I don't see how our collection's characteristics could mess this up. I know distributed search calls SearchComponent's process() multiple times per request, at different phases, and only at the final phase are fields available to do highlighting or clustering on. So I think this component never had support for distributed search in the first place. That doesn't explain DistributedClusteringComponentTest passing though. Testing for the correct phase, and perhaps even the null check, might solve it. Perhaps a committer with experience dealing with SearchComponent and the different distributed search phases could chime in on this? Regards, Markus > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level.
Issues are Public) > Components: contrib - Clustering > Affects Versions: 7.5 > Reporter: Markus Jelsma > Assignee: Dawid Weiss > Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet parameters)
> {code}
> <searchComponent name="clustering" enable="true" class="solr.clustering.ClusteringComponent">
>   <lst name="engine">
>     <str name="name">lingo</str>
>     <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
>   </lst>
>   <lst name="engine">
>     <str name="name">stc</str>
>     <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
>   </lst>
> </searchComponent>
> <requestHandler name="/clustering" enable="true" class="solr.SearchHandler">
>   <lst name="defaults">
>     <bool name="clustering">true</bool>
>     <bool name="clustering.results">true</bool>
>     <str name="carrot.url">id</str>
>     <str name="carrot.title">title_nl</str>
>     <str name="carrot.snippet">content_nl</str>
>     <str name="rows">100</str>
>     <str name="fl">*,score</str>
>   </lst>
>   <arr name="last-components">
>     <str>clustering</str>
>   </arr>
> </requestHandler>
> {code}
> using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no longer complains about missing classes, > instead I got this.
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2896 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2896/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseG1GC 37 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.checkCollectionParameters Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:36723/solr/multicollection2] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:36723/solr/multicollection2] at __randomizedtesting.SeedInfo.seed([2B59A6F6C9EA36DF:D04E354690BDE168]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.checkCollectionParameters(CloudSolrClientTest.java:593) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestR
[jira] [Resolved] (SOLR-12565) Solr Guide references CloudSolrClient.uploadConfig() method which no longer exists
[ https://issues.apache.org/jira/browse/SOLR-12565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski resolved SOLR-12565. Resolution: Fixed Fix Version/s: master (8.0) 7.6 > Solr Guide references CloudSolrClient.uploadConfig() method which no longer > exists > -- > > Key: SOLR-12565 > URL: https://issues.apache.org/jira/browse/SOLR-12565 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation > Affects Versions: 7.0, 7.1, 7.2, 7.3, 7.4 > Reporter: Andy Chillrud > Assignee: Jason Gerlowski > Priority: Minor > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12565.patch, SOLR-12565.patch > > > The uploadConfig() method seems to have been removed from the CloudSolrClient > class in the 7.0 release, but the Solr Ref Guide still references it. > See section on "Uploading Configuration Files using bin/solr or SolrJ" at > [https://lucene.apache.org/solr/guide/7_4/using-zookeeper-to-manage-configuration-files.html > > |https://lucene.apache.org/solr/guide/7_4/using-zookeeper-to-manage-configuration-files.html] > Poking around in the source code it seems that perhaps this should be changed > to reference ZkConfigManager.uploadConfigDir() instead, but not really sure > if this is best.
[jira] [Commented] (SOLR-12565) Solr Guide references CloudSolrClient.uploadConfig() method which no longer exists
[ https://issues.apache.org/jira/browse/SOLR-12565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646904#comment-16646904 ] Jason Gerlowski commented on SOLR-12565: Thanks for the report Andy!
[jira] [Commented] (SOLR-12565) Solr Guide references CloudSolrClient.uploadConfig() method which no longer exists
[ https://issues.apache.org/jira/browse/SOLR-12565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646901#comment-16646901 ] Jason Gerlowski commented on SOLR-12565: branch_7x commit: 720481e7c04dc6a69c681cccd543931cf262d78a master commit: 42ac07d11b9735df6dace64bf751ce528c0d01c8
[jira] [Created] (SOLR-12856) Improve javadocs for public SolrJ classes
Jason Gerlowski created SOLR-12856: -- Summary: Improve javadocs for public SolrJ classes Key: SOLR-12856 URL: https://issues.apache.org/jira/browse/SOLR-12856 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: documentation, SolrJ Affects Versions: 7.5 Reporter: Jason Gerlowski Assignee: Jason Gerlowski While poking around some SolrJ code, I noticed that the Javadoc documentation tends to be spotty. Some sections have pretty meticulous descriptions, others are missing javadocs entirely. I'm not aiming to entirely correct that situation here, but I did want to fix a few of the more serious concerns I ran into in some of my digging. This list includes: * SolrClient.commit should have some warning about the downside of invoking commits on the client side * ditto re: SolrClient.rollback * SolrClient's single-doc add method should have a warning about performance implications of not batching. Not sure if this should live in SolrClient itself and be worded as a "potential" perf impact, or live in each of the clients it applies to. * the SolrClient builders can use some clarification around when particular settings are useful. * ResponseParser and some other classes might benefit from some high level class javadocs. Figured this was worth a JIRA so others can catch potential mistakes I'm making here, or suggest other SolrJ things that'd really benefit from Javadocs.
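The batching concern above can be illustrated with a stdlib-only sketch. The field maps below are hypothetical stand-ins for SolrInputDocument (SolrJ itself is not used here); the point is that each batch corresponds to one SolrClient.add(Collection) call, i.e. one HTTP round trip, instead of one round trip per document.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchAddSketch {
    // Hypothetical stand-in for SolrInputDocument: just a field map.
    static Map<String, Object> doc(int id) {
        Map<String, Object> d = new LinkedHashMap<>();
        d.put("id", Integer.toString(id));
        return d;
    }

    // Partition docs into batches; each batch would be sent in a single
    // add(Collection) request rather than one request per document.
    static List<List<Map<String, Object>>> batches(List<Map<String, Object>> docs, int batchSize) {
        List<List<Map<String, Object>>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            out.add(docs.subList(i, Math.min(i + batchSize, docs.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> docs = new ArrayList<>();
        for (int i = 0; i < 2500; i++) docs.add(doc(i));
        // 2500 docs at batch size 1000: 3 requests instead of 2500.
        System.out.println(batches(docs, 1000).size());
    }
}
```

This is exactly the kind of "potential perf impact" note the ticket suggests for the single-doc add method: the work per document is identical, only the number of round trips changes.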
[JENKINS] Lucene-Solr-Tests-7.x - Build # 941 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/941/ 2 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Timeout adding replica to shard Stack Trace: java.util.concurrent.TimeoutException: Timeout adding replica to shard at __randomizedtesting.SeedInfo.seed([C4BA40BF8B39862:665DC5DB9041D2A8]:0) at org.apache.solr.util.TimeOut.waitFor(TimeOut.java:66) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:317) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:229) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica Error Message: Timeout waiting for collection to become active Live Nodes: [127.0.0.1:10017_solr, 127.0.0.1:10018_solr, 127.0.0.1
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646874#comment-16646874 ] Dawid Weiss commented on SOLR-12852: Hmm... What's your setup that uses "distrib=true", Markus? I really don't know much about distributed mode, but grepping for "distrib" yields all kinds of interesting stuff that's unrelated to the clustering plugin...: {code} BaseDistributedSearchTestCase: // TODO: look into why passing true causes fails params.set("distrib", "false"); {code} I'm not sure where to start, to be honest. There is a "DistributedClusteringComponentTest.java" and it seems to be passing. The offending line (NPE) happens in {code} DocListAndSet results = rb.getResults(); Map docIds = new HashMap<>(results.docList.size()); {code} so I'm guessing docList is empty (or results). Don't want to dodge the problem (if ... != null) without understanding the cause. > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: contrib - Clustering > Affects Versions: 7.5 > Reporter: Markus Jelsma > Assignee: Dawid Weiss > Priority: Major > Fix For: master (8.0)
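The phase-plus-null guard being discussed in this thread could look roughly like the following. All types here are hypothetical stand-ins for Solr's ResponseBuilder and DocListAndSet, and the stage constant value is made up; only the shape of the check is the point, not the real API.

```java
public class PhaseGuardSketch {
    // Made-up value; Solr's real ResponseBuilder defines its own stage constants.
    static final int STAGE_GET_FIELDS = 3000;

    // Minimal stand-ins for the Solr types involved in the NPE.
    static class Results { int size; }
    static class ResponseBuilder {
        int stage;
        Results results; // may be null in early distributed phases
    }

    // Skip clustering work unless we are in the final phase and the doc list
    // actually exists, instead of dereferencing a null results object.
    static boolean shouldCluster(ResponseBuilder rb) {
        if (rb.stage != STAGE_GET_FIELDS) return false; // wrong distributed phase
        if (rb.results == null) return false;           // nothing to cluster yet
        return true;
    }
}
```

A bare null check alone would hide the NPE; checking the phase as well addresses Markus's observation that process() runs multiple times per distributed request and fields are only available in the final phase.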
[jira] [Commented] (SOLR-12855) Add distanceUnits-and-WKT-strings hint to exception thrown by AbstractSpatialFieldType.parseShape()
[ https://issues.apache.org/jira/browse/SOLR-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646824#comment-16646824 ] Christine Poerschke commented on SOLR-12855: Attached proposed patch. Here's an illustration of what the exception wording would look like then. {code} ... o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ERROR: [doc=23_-90_180] Error adding field 'pos_srpt'='-90 180' msg=Unable to parse shape given formats "lat,lon", "x y" or as WKT because java.text.ParseException: Unknown Shape definition [-90 180] (note: distanceUnits=kilometers doesn’t affect distances embedded in WKT strings which are still in degrees) at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:215) at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:102) at org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:962) ... {code} > Add distanceUnits-and-WKT-strings hint to exception thrown by > AbstractSpatialFieldType.parseShape() > --- > > Key: SOLR-12855 > URL: https://issues.apache.org/jira/browse/SOLR-12855 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-12855.patch > > > The [Schema Configuration for > RPT|http://lucene.apache.org/solr/guide/7_5/spatial-search.html#schema-configuration-for-rpt] > section of the Solr Reference Guide mentions "However, it doesn’t affect > distances embedded in WKT strings ... which are still in degrees." for the > {{distanceUnits}} attribute. > This ticket proposes to include something along those lines as part of the > {{"Unable to parse shape given formats ..."}} exception that is thrown if the > user perhaps did not see or remember about the 'however' mention for the > attribute. 
[jira] [Updated] (SOLR-12855) Add distanceUnits-and-WKT-strings hint to exception thrown by AbstractSpatialFieldType.parseShape()
[ https://issues.apache.org/jira/browse/SOLR-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-12855: --- Attachment: SOLR-12855.patch > Add distanceUnits-and-WKT-strings hint to exception thrown by > AbstractSpatialFieldType.parseShape() > --- > > Key: SOLR-12855 > URL: https://issues.apache.org/jira/browse/SOLR-12855 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-12855.patch > > > The [Schema Configuration for > RPT|http://lucene.apache.org/solr/guide/7_5/spatial-search.html#schema-configuration-for-rpt] > section of the Solr Reference Guide mentions "However, it doesn’t affect > distances embedded in WKT strings ... which are still in degrees." for the > {{distanceUnits}} attribute. > This ticket proposes to include something along those lines as part of the > {{"Unable to parse shape given formats ..."}} exception that is thrown if the > user perhaps did not see or remember about the 'however' mention for the > attribute.
[jira] [Updated] (SOLR-12854) Document steps to improve delta import via DataImportHandler
[ https://issues.apache.org/jira/browse/SOLR-12854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12854: Issue Type: Improvement (was: Bug) > Document steps to improve delta import via DataImportHandler > - > > Key: SOLR-12854 > URL: https://issues.apache.org/jira/browse/SOLR-12854 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.5 >Reporter: Amrit Sarkar >Priority: Major > > Delta imports in DataImportHandler are sometimes slower than full imports > because the delta import makes multiple queries compared to a full import, > making it costlier in time. Listed in: > https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport > In the mailing list > http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html > one of the Solr users has noted a workaround which works perfectly and > improves delta import performance: specify > ${dataimporter.last_index_time} in the delta_import_query, not in the > delta_sql_query. > {code} > I found a hacky way to limit the number of > times deltaImportQuery was executed. > As designed, Solr executes deltaQuery to get a list of ids that need to be > indexed. For each of those, it executes deltaImportQuery, which is typically > very similar to the full query. > I constructed a deltaQuery to purposely only return 1 row. E.g. > deltaQuery = "SELECT id FROM table WHERE rownum=1" // written for > Oracle, likely requires a different syntax for other dbs. Also, it occurred > to me you could probably include the date >= '${dataimporter.last_index_time}' > filter here so this returns 0 rows if no data has changed. > Since deltaImportQuery now only gets called once, I needed to add the filter > logic to deltaImportQuery to only select the changed rows (that logic is > normally in deltaQuery). E.g.
> deltaImportQuery = [normal import query] WHERE date >= > '${dataimporter.last_index_time}' > {code} > A number of other users have adopted this strategy and DIH delta import > performance has improved; documenting this strategy as a TIP > will help other users too.
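Put together, the workaround described above amounts to a DIH entity configured roughly like the following sketch. This is a hypothetical illustration, not from the ticket: the table name `item`, the column names, and the Oracle `rownum` syntax are all assumptions.

{code}
<!-- Hypothetical data-config.xml entity illustrating the workaround:
     deltaQuery is reduced to a cheap single-row probe (0 or 1 rows),
     and the last_index_time filter moves into deltaImportQuery. -->
<dataConfig>
  <dataSource driver="oracle.jdbc.OracleDriver" url="jdbc:oracle:thin:@//dbhost:1521/XE"/>
  <document>
    <entity name="item"
            query="SELECT id, title FROM item"
            deltaQuery="SELECT id FROM item
                        WHERE rownum = 1
                          AND last_modified &gt;= '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT id, title FROM item
                              WHERE last_modified &gt;= '${dataimporter.last_index_time}'"/>
  </document>
</dataConfig>
{code}

With this shape, deltaImportQuery runs at most once per delta cycle rather than once per changed id, which is the speed-up the mailing-list post describes.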
[jira] [Created] (SOLR-12855) Add distanceUnits-and-WKT-strings hint to exception thrown by AbstractSpatialFieldType.parseShape()
Christine Poerschke created SOLR-12855: -- Summary: Add distanceUnits-and-WKT-strings hint to exception thrown by AbstractSpatialFieldType.parseShape() Key: SOLR-12855 URL: https://issues.apache.org/jira/browse/SOLR-12855 Project: Solr Issue Type: Task Reporter: Christine Poerschke The [Schema Configuration for RPT|http://lucene.apache.org/solr/guide/7_5/spatial-search.html#schema-configuration-for-rpt] section of the Solr Reference Guide mentions "However, it doesn’t affect distances embedded in WKT strings ... which are still in degrees." for the {{distanceUnits}} attribute. This ticket proposes to include something along those lines as part of the {{"Unable to parse shape given formats ..."}} exception that is thrown if the user perhaps did not see or remember about the 'however' mention for the attribute.
[jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646796#comment-16646796 ] Tim Allison commented on SOLR-12423: I tested PR#468 against the ~650 unit test docs within Tika's project, and found no surprises. > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721.
[jira] [Created] (SOLR-12854) Document steps to improve delta import via DataImportHandler
Amrit Sarkar created SOLR-12854: --- Summary: Document steps to improve delta import via DataImportHandler Key: SOLR-12854 URL: https://issues.apache.org/jira/browse/SOLR-12854 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: contrib - DataImportHandler Affects Versions: 7.5 Reporter: Amrit Sarkar Delta imports in DataImportHandler are sometimes slower than full imports because the delta import makes multiple queries compared to a full import, making it costlier in time. Listed in: https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport In the mailing list http://lucene.472066.n3.nabble.com/Number-of-requests-spike-up-when-i-do-the-delta-Import-td4338162.html one of the Solr users has noted a workaround which works perfectly and improves delta import performance: specify ${dataimporter.last_index_time} in the delta_import_query, not in the delta_sql_query. {code} I found a hacky way to limit the number of times deltaImportQuery was executed. As designed, Solr executes deltaQuery to get a list of ids that need to be indexed. For each of those, it executes deltaImportQuery, which is typically very similar to the full query. I constructed a deltaQuery to purposely only return 1 row. E.g. deltaQuery = "SELECT id FROM table WHERE rownum=1" // written for Oracle, likely requires a different syntax for other dbs. Also, it occurred to me you could probably include the date >= '${dataimporter.last_index_time}' filter here so this returns 0 rows if no data has changed. Since deltaImportQuery now only gets called once, I needed to add the filter logic to deltaImportQuery to only select the changed rows (that logic is normally in deltaQuery). E.g.
deltaImportQuery = [normal import query] WHERE date >= '${dataimporter.last_index_time}' {code} A number of other users have adopted this strategy and DIH delta import performance has improved; documenting this strategy as a TIP will help other users too.
[GitHub] lucene-solr pull request #468: jira/SOLR-12423
GitHub user tballison opened a pull request: https://github.com/apache/lucene-solr/pull/468 jira/SOLR-12423 Upgrade to Tika 1.19.1, first draft You can merge this pull request into a Git repository by running: $ git pull https://github.com/tballison/lucene-solr jira/SOLR-12423 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/468.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #468 commit e6c9a9f3f209b3b45bfc57963ce4df3a7b7946fb Author: tallison Date: 2018-10-11T15:01:05Z SOLR-12423 upgrade to Tika 1.19.1, first commit commit 4fcc28ee35f28d6e1806f3c23824d9f86cc9ec2b Author: TALLISON Date: 2018-10-11T17:28:08Z Merge branch 'master' of https://github.com/apache/lucene-solr into jira/SOLR-12423
[jira] [Commented] (LUCENE-8530) fix some 'rawtypes' javac warnings
[ https://issues.apache.org/jira/browse/LUCENE-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646792#comment-16646792 ] Christine Poerschke commented on LUCENE-8530: - Attached proposed patch to fix some (not all) 'rawtypes' javac warnings. > fix some 'rawtypes' javac warnings > -- > > Key: LUCENE-8530 > URL: https://issues.apache.org/jira/browse/LUCENE-8530 > Project: Lucene - Core > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: LUCENE-8530.patch > >
[jira] [Updated] (LUCENE-8530) fix some 'rawtypes' javac warnings
[ https://issues.apache.org/jira/browse/LUCENE-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated LUCENE-8530: Attachment: LUCENE-8530.patch > fix some 'rawtypes' javac warnings > -- > > Key: LUCENE-8530 > URL: https://issues.apache.org/jira/browse/LUCENE-8530 > Project: Lucene - Core > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: LUCENE-8530.patch > >
[jira] [Created] (LUCENE-8530) fix some 'rawtypes' javac warnings
Christine Poerschke created LUCENE-8530: --- Summary: fix some 'rawtypes' javac warnings Key: LUCENE-8530 URL: https://issues.apache.org/jira/browse/LUCENE-8530 Project: Lucene - Core Issue Type: Task Reporter: Christine Poerschke Assignee: Christine Poerschke
[jira] [Resolved] (LUCENE-8478) combine TermScorer constructors' implementation
[ https://issues.apache.org/jira/browse/LUCENE-8478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved LUCENE-8478. - Resolution: Won't Do Thanks [~jpountz] for your input! > combine TermScorer constructors' implementation > --- > > Key: LUCENE-8478 > URL: https://issues.apache.org/jira/browse/LUCENE-8478 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: master (8.0) >Reporter: Christine Poerschke >Priority: Minor > Attachments: LUCENE-8478.patch, LUCENE-8478.patch > > > We currently have two {{TermScorer}} constructor variants and it's not > immediately obvious how and why their implementations are the way they are as > far as initialisations and initialisation order is concerned. Combination of > the logic could make the commonalities and differences clearer.
[jira] [Commented] (SOLR-12780) Add support for Leaky ReLU and TanH activations in LTR contrib module
[ https://issues.apache.org/jira/browse/SOLR-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646782#comment-16646782 ] Christine Poerschke commented on SOLR-12780: Thanks [~kamulau] for opening this ticket with a patch to add two additional activation functions! Just attached slightly revised patch, the main difference is the (proposed) use of {{Math.tanh}} instead of the {code} (Math.exp(in) - Math.exp(-in))/(Math.exp(in) + Math.exp(-in)) {code} formula - what do you think? Initially I'd wondered about the benefits or otherwise of reducing the number of {{Math.exp}} calls and then your SOLR-12785 patch made me wonder if [Apache Commons Math|http://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/index.html] has activation functions and that then led to the discovery that {{Math.tanh}} exists in Java itself! > Add support for Leaky ReLU and TanH activations in LTR contrib module > - > > Key: SOLR-12780 > URL: https://issues.apache.org/jira/browse/SOLR-12780 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Reporter: Kamuela Lau >Priority: Minor > Labels: ltr > Attachments: SOLR-12780.patch, SOLR-12780.patch > > > Add support for Leaky ReLU and TanH activation functions in > NeuralNetworkModel.
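For reference, the two tanh formulations discussed here agree numerically, and Leaky ReLU is equally simple. The sketch below is illustrative only: the class and method names, and the 0.01 negative slope for Leaky ReLU, are assumptions rather than the patch's actual NeuralNetworkModel code.

```java
// Illustrative sketch, not the SOLR-12780 patch itself.
public class ActivationSketch {

    // tanh via the explicit exponential formula quoted above
    static double tanhByExp(double in) {
        return (Math.exp(in) - Math.exp(-in)) / (Math.exp(in) + Math.exp(-in));
    }

    // Leaky ReLU; the 0.01 negative slope is an assumed default
    static double leakyRelu(double in) {
        return in >= 0 ? in : 0.01 * in;
    }

    public static void main(String[] args) {
        // Math.tanh (a single JDK call) matches the two-Math.exp formula closely
        for (double x : new double[] {-2.0, -0.5, 0.0, 0.5, 2.0}) {
            double diff = Math.abs(tanhByExp(x) - Math.tanh(x));
            if (diff > 1e-12) {
                throw new AssertionError("mismatch at x=" + x + ": " + diff);
            }
        }
        System.out.println("tanh formulations agree");
    }
}
```

Besides replacing four `Math.exp` calls with one library call, `Math.tanh` also behaves well at extreme inputs, where the explicit formula can overflow to `NaN` (e.g. for very large positive `in`, both exponentials overflow to infinity).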
[jira] [Updated] (SOLR-12780) Add support for Leaky ReLU and TanH activations in LTR contrib module
[ https://issues.apache.org/jira/browse/SOLR-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-12780: --- Attachment: SOLR-12780.patch > Add support for Leaky ReLU and TanH activations in LTR contrib module > - > > Key: SOLR-12780 > URL: https://issues.apache.org/jira/browse/SOLR-12780 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Reporter: Kamuela Lau >Priority: Minor > Labels: ltr > Attachments: SOLR-12780.patch, SOLR-12780.patch > > > Add support for Leaky ReLU and TanH activation functions in > NeuralNetworkModel.
[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?
[ https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646777#comment-16646777 ] Shawn Heisey commented on SOLR-7642: Catching up on email and saw issue updates for this. bq. So how about we auto-create the chroot only if it == /solr. I really like this idea. A chroot of /solr is very unlikely to be a typo, and probably will be what a sizable majority of users will want. So if that exact text is the chroot and it doesn't exist, we go ahead and create it. We can discuss whether to make it case-insensitive, so /Solr or /SOLR is also auto-created. If any other string (like /solr7 or /solrdev) gets used, we can require manual creation, and have a meaningful error in the log. [~thelabdude] already indicated that the existing error is not ambiguous, but we can probably improve it by telling the user they will have to manually create the chroot. > Should launching Solr in cloud mode using a ZooKeeper chroot create the > chroot znode if it doesn't exist? > - > > Key: SOLR-7642 > URL: https://issues.apache.org/jira/browse/SOLR-7642 > Project: Solr > Issue Type: Improvement >Reporter: Timothy Potter >Priority: Minor > Attachments: SOLR-7642.patch, SOLR-7642.patch, > SOLR-7642_tag_7.5.0.patch > > > If you launch Solr for the first time in cloud mode using a ZooKeeper > connection string that includes a chroot leads to the following > initialization error: > {code} > ERROR - 2015-06-05 17:15:50.410; [ ] org.apache.solr.common.SolrException; > null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified > in ZkHost but the znode doesn't exist. 
localhost:2181/lan > at > org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113) > at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339) > at > org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140) > at > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110) > at > org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138) > at > org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852) > at > org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) > at > org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349) > at > org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342) > at > org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) > at > org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505) > {code} > The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script > to create the chroot znode (bootstrap action does this). > I'm wondering if we shouldn't just create the znode if it doesn't exist? Or > is that some violation of using a chroot?
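The proposal in the comment above boils down to a small decision rule. The sketch below is a hypothetical illustration of that rule only; the class, enum, and method names are made up and are not Solr's actual API.

```java
// Hypothetical sketch of the proposed chroot handling: auto-create only when
// the missing chroot is exactly "/solr" (optionally case-insensitively),
// otherwise fail with a hint telling the user to create the znode manually.
public class ChrootPolicy {

    enum Action { USE_EXISTING, AUTO_CREATE, FAIL_WITH_HINT }

    static Action decide(String chroot, boolean znodeExists, boolean caseInsensitive) {
        if (znodeExists) {
            return Action.USE_EXISTING; // nothing to do
        }
        String normalized = caseInsensitive ? chroot.toLowerCase() : chroot;
        return "/solr".equals(normalized)
                ? Action.AUTO_CREATE     // "/solr" is very unlikely to be a typo
                : Action.FAIL_WITH_HINT; // e.g. /solr7 or /solrdev: require manual creation
    }

    public static void main(String[] args) {
        System.out.println(decide("/solr", false, false));    // AUTO_CREATE
        System.out.println(decide("/SOLR", false, true));     // AUTO_CREATE
        System.out.println(decide("/solrdev", false, false)); // FAIL_WITH_HINT
    }
}
```

In the FAIL_WITH_HINT case, the improved error message would point the user at manual creation, e.g. via the zkcli.sh makepath command mentioned in the work-around above.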
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23011 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23011/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned Error Message: Error from server at https://127.0.0.1:34225/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:34225/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. 
Reason: Can not find: /solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([1E9232B9E49C0648:E654CB8C77859E80]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned(CloudSolrClientTest.java:725) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
[JENKINS] Lucene-Solr-Tests-master - Build # 2866 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2866/ 2 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at http://127.0.0.1:38797/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:38797/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([939F7A6612DD48C3:5128460E119DB8BB]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
[jira] [Updated] (SOLR-12254) TestInPlaceUpdatesDistrib reproducing failures
[ https://issues.apache.org/jira/browse/SOLR-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe updated SOLR-12254: -- Summary: TestInPlaceUpdatesDistrib reproducing failures (was: TestInPlaceUpdatesDistrib reproducing failure) > TestInPlaceUpdatesDistrib reproducing failures > -- > > Key: SOLR-12254 > URL: https://issues.apache.org/jira/browse/SOLR-12254 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests, update >Reporter: Steve Rowe >Priority: Major > > From [https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/205/], 100% > reproducing (see [https://builds.apache.org/job/Lucene-Solr-repro/535/]): > {noformat} > Checking out Revision 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b > (refs/remotes/origin/branch_7x) > [...] >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test > -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true > -Dtests.slow=true > -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt > -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 >[junit4] ERROR 23.6s J2 | TestInPlaceUpdatesDistrib.test <<< >[junit4]> Throwable #1: > org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error > from server at https://127.0.0.1:56916/collection1: ERROR adding document > SolrInputDocument(fields: [id=-216, title_s=title-216, id_i=-216, > _version_=1598231319283761152]) >[junit4]> at > __randomizedtesting.SeedInfo.seed([9BC71F2BDDB8F28A:139320F173449F72]:0) >[junit4]> at > org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) >[junit4]> at > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) >[junit4]> at > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) 
>[junit4]> at > org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) >[junit4]> at > org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.addDocAndGetVersion(TestInPlaceUpdatesDistrib.java:1105) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.buildRandomIndex(TestInPlaceUpdatesDistrib.java:1150) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:318) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:144) >[junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) >[junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) >[junit4]> at java.lang.Thread.run(Thread.java:748) > [...] >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): > {title_s=PostingsFormat(name=LuceneFixedGap), id=Lucene50(blocksize=128), > id_field_copy_that_does_not_support_in_place_update_s=PostingsFormat(name=Memory)}, > docValues:{inplace_updatable_float=DocValuesFormat(name=Lucene70), > id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), > id=DocValuesFormat(name=Memory), > inplace_updatable_int_with_default=DocValuesFormat(name=Lucene70), > inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, > maxPointsInLeafNode=922, maxMBSortInHeap=5.690194493492291, > sim=RandomSimilarity(queryNorm=true): {}, locale=ru-RU, timezone=Hongkong >[junit4] 2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation > 1.8.0_152 (64-bit)/cpus=4,threads=1,free=127774192,total=523763712 > {noformat}
[jira] [Commented] (SOLR-12254) TestInPlaceUpdatesDistrib reproducing failure
[ https://issues.apache.org/jira/browse/SOLR-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646729#comment-16646729 ] Steve Rowe commented on SOLR-12254: --- Another reproducing failure, apparently has a different cause though. From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23010/] (reproduces for me 5/5 iterations with Java8): {noformat} Checking out Revision 971a0e3f4afddab4687642834037c52fef0c6758 (refs/remotes/origin/master) [...] [java-info] java version "12-ea" [java-info] OpenJDK Runtime Environment (12-ea+12, Oracle Corporation) [java-info] OpenJDK 64-Bit Server VM (12-ea+12, Oracle Corporation) [java-info] Test args: [-XX:+UseCompressedOops -XX:+UseSerialGC] [...] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test -Dtests.seed=FC7D71FE8231BFF7 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=fr-BE -Dtests.timezone=Etc/GMT-9 -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 4043s J2 | TestInPlaceUpdatesDistrib.test <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:34187/_qot/o/collection1 [junit4]>at __randomizedtesting.SeedInfo.seed([FC7D71FE8231BFF7:74294E242CCDD20F]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4]>at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484) [junit4]>at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463) [junit4]>at org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1590) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:378) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:146) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit4]>at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit4]>at java.base/java.lang.reflect.Method.invoke(Method.java:566) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985) [junit4]>at java.base/java.lang.Thread.run(Thread.java:835) [junit4]> Caused by: java.net.SocketTimeoutException: Read timed out [junit4]>at java.base/java.net.SocketInputStream.socketRead0(Native Method) [junit4]>at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115) [junit4]>at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168) [junit4]>at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140) [junit4]>at 
java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:448) [junit4]>at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68) [junit4]>at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1104) [junit4]>at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:823) [junit4]>at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) [junit4]>at org.apache.http.impl.io.SessionInputBufferImpl.fi
[jira] [Comment Edited] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646719#comment-16646719 ] Kamuela Lau edited comment on SOLR-12367 at 10/11/18 4:26 PM: -- A similar ClassCastException may show up for other LTRScoringModels, such as NeuralNetworkModel (for example, when an int/long is entered for a layer's matrix or bias values). The ambiguous message for the CCE will only change for LinearModel as of right now. was (Author: kamulau): A similar ClassCastException may show up for other LTRScoringModels, such as NeuralNetworkModel (for example, when an int/long is entered for a layer's matrix or bias values). The current patch will not change the ambiguous message for NeuralNetworkModel... > When adding a model referencing a non-existent feature the error message is > very ambiguous > -- > > Key: SOLR-12367 > URL: https://issues.apache.org/jira/browse/SOLR-12367 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Affects Versions: 7.3.1 >Reporter: Georg Sorst >Priority: Minor > Attachments: SOLR-12367.patch, SOLR-12367.patch, SOLR-12367.patch > > > When adding a model that references a non-existent feature, a very ambiguous > error message is thrown, something like "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel}}".
> > To reproduce, do not add any features and just add a model, for example by > doing this: > > {{curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' -H 'Content-type:application/json' --data-binary '{"class": "org.apache.solr.ltr.model.LinearModel", "name": "myModel", "features": [{"name": "whatever"}], "params": {"weights": {"whatever": 1.0}}}'}} > > The resulting error message "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel}}" is extremely misleading, and it took me > a while to figure out the actual cause. > > A more suitable error message should probably indicate the name of the > missing feature that the model is trying to reference. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
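The fix the reporter asks for amounts to validating referenced features up front, so the error names the missing feature instead of a misleading "Model type does not exist" message. A minimal, hypothetical sketch of that validation — the class and method names below are illustrative stand-ins, not the actual Solr LTR API:

```java
import java.util.List;
import java.util.Set;

public class ModelFeatureValidation {
  // Hypothetical check: before constructing a model, verify that every
  // feature it references exists in the feature store, and report the
  // first missing one by name.
  static void validateFeatures(String modelName,
                               List<String> referencedFeatures,
                               Set<String> storedFeatures) {
    for (String f : referencedFeatures) {
      if (!storedFeatures.contains(f)) {
        throw new IllegalArgumentException(
            "Model '" + modelName + "' references feature '" + f
                + "', which does not exist in the feature store");
      }
    }
  }

  public static void main(String[] args) {
    try {
      // Mirrors the reproduction above: a model referencing "whatever"
      // against an empty feature store.
      validateFeatures("myModel", List.of("whatever"), Set.of());
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

An error of this shape would have pointed directly at the missing feature instead of at the model class.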
[jira] [Commented] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646719#comment-16646719 ] Kamuela Lau commented on SOLR-12367: A similar ClassCastException may show up for other LTRScoringModels, such as NeuralNetworkModel (for example, when an int/long is entered for a layer's matrix or bias values). The current patch will not change the ambiguous message for NeuralNetworkModel... > When adding a model referencing a non-existent feature the error message is > very ambiguous > -- > > Key: SOLR-12367 > URL: https://issues.apache.org/jira/browse/SOLR-12367 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Affects Versions: 7.3.1 >Reporter: Georg Sorst >Priority: Minor > Attachments: SOLR-12367.patch, SOLR-12367.patch, SOLR-12367.patch > > > When adding a model that references a non-existent feature, a very ambiguous > error message is thrown, something like "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel}}". > > To reproduce, do not add any features and just add a model, for example by > doing this: > > {{curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' -H 'Content-type:application/json' --data-binary '{"class": "org.apache.solr.ltr.model.LinearModel", "name": "myModel", "features": [{"name": "whatever"}], "params": {"weights": {"whatever": 1.0}}}'}} > > The resulting error message "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel}}" is extremely misleading, and it took me > a while to figure out the actual cause. > > A more suitable error message should probably indicate the name of the > missing feature that the model is trying to reference.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7913) Add stream.body support to MLT QParser
[ https://issues.apache.org/jira/browse/SOLR-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646715#comment-16646715 ] Isabelle Giguere edited comment on SOLR-7913 at 10/11/18 4:22 PM: -- Solr 7.5.0: Since Solr 7.1, the requestParsers param enableStreamBody allows control over stream.body support, but stream.body still cannot be used with the MLT QParser. Current behavior insists on looking for a doc id. SOLR-7913_tag_7.5.0.patch : Clean patch to allow stream.body in the MLT QParser (no trace of SOLR-8604 as previously) Patch based on revision 61870, tag 7.5.0, the latest release Notes: New class org.apache.solr.client.solrj.request.ContentStreamQueryRequest should override SolrRequest.getContentWriter(String) instead of SolrRequest.getContentStreams() Changes in org.apache.solr.request.json.RequestUtil allow stream.body on an MLT QParser request, but test TestRemoteStreaming.testNoUrlAccess fails (meaning the test query doesn't fail), so it is ignored for now. There should be a better fix that would consider the MLT QParser and JSON requests, and still pass test TestRemoteStreaming.testNoUrlAccess - Set a contentType on MLT QParser requests with stream.body, and check for that contentType along with "/json" in RequestUtil? - Require param 'json' on all JSON requests? Meaning the query at line 178 in TestJsonRequest.doJsonRequest(Client, boolean) would not be allowed There could be a more streamlined solution, closer to how the requestParsers param enableStreamBody is supported elsewhere in the code? was (Author: igiguere): Solr 7.5.0: Since Solr 7.1, the requestParsers param enableStreamBody allows control over stream.body support, but stream.body still cannot be used with the MLT QParser. Current behavior insists on looking for a doc id.
SOLR-7913_tag_7.5.0.patch : Clean patch to allow stream.body in the MLT QParser (no trace of SOLR-8604 as previously) Patch based on revision 61870, tag 7.5.0, the latest release Notes: New class org.apache.solr.client.solrj.request.ContentStreamQueryRequest should override SolrRequest.getContentWriter(String) instead of SolrRequest.getContentStreams() Changes in org.apache.solr.request.json.RequestUtil allow stream.body on an MLT QParser request, but test TestRemoteStreaming.testNoUrlAccess fails (meaning the test query doesn't fail), so it is ignored for now. There should be a better fix that would consider the MLT QParser and JSON requests, and still pass test TestRemoteStreaming.testNoUrlAccess - Set a contentType on MLT QParser requests with stream.body, and check for that contentType along with "/json" in RequestUtil? - Require param 'json' on all JSON requests? Meaning the query at line 178 in TestJsonRequest.doJsonRequest(Client, boolean) would not be allowed There could be a more streamlined solution, closer to how the requestParsers param enableStreamBody is supported elsewhere in the code. > Add stream.body support to MLT QParser > -- > > Key: SOLR-7913 > URL: https://issues.apache.org/jira/browse/SOLR-7913 > Project: Solr > Issue Type: Improvement >Reporter: Anshum Gupta >Priority: Major > Attachments: SOLR-7913.patch, SOLR-7913.patch, SOLR-7913.patch, > SOLR-7913_fixTests.patch, SOLR-7913_tag_7.5.0.patch > > > Continuing from > https://issues.apache.org/jira/browse/SOLR-7639?focusedCommentId=14601011&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14601011. > It'd be good to have stream.body be supported by the mlt qparser. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7913) Add stream.body support to MLT QParser
[ https://issues.apache.org/jira/browse/SOLR-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646715#comment-16646715 ] Isabelle Giguere commented on SOLR-7913: Solr 7.5.0: Since Solr 7.1, the requestParsers param enableStreamBody allows control over stream.body support, but stream.body still cannot be used with the MLT QParser. Current behavior insists on looking for a doc id. SOLR-7913_tag_7.5.0.patch : Clean patch to allow stream.body in the MLT QParser (no trace of SOLR-8604 as previously) Patch based on revision 61870, tag 7.5.0, the latest release Notes: New class org.apache.solr.client.solrj.request.ContentStreamQueryRequest should override SolrRequest.getContentWriter(String) instead of SolrRequest.getContentStreams() Changes in org.apache.solr.request.json.RequestUtil allow stream.body on an MLT QParser request, but test TestRemoteStreaming.testNoUrlAccess fails (meaning the test query doesn't fail), so it is ignored for now. There should be a better fix that would consider the MLT QParser and JSON requests, and still pass test TestRemoteStreaming.testNoUrlAccess - Set a contentType on MLT QParser requests with stream.body, and check for that contentType along with "/json" in RequestUtil? - Require param 'json' on all JSON requests? Meaning the query at line 178 in TestJsonRequest.doJsonRequest(Client, boolean) would not be allowed There could be a more streamlined solution, closer to how the requestParsers param enableStreamBody is supported elsewhere in the code.
> Add stream.body support to MLT QParser > -- > > Key: SOLR-7913 > URL: https://issues.apache.org/jira/browse/SOLR-7913 > Project: Solr > Issue Type: Improvement >Reporter: Anshum Gupta >Priority: Major > Attachments: SOLR-7913.patch, SOLR-7913.patch, SOLR-7913.patch, > SOLR-7913_fixTests.patch, SOLR-7913_tag_7.5.0.patch > > > Continuing from > https://issues.apache.org/jira/browse/SOLR-7639?focusedCommentId=14601011&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14601011. > It'd be good to have stream.body be supported by the mlt qparser. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7913) Add stream.body support to MLT QParser
[ https://issues.apache.org/jira/browse/SOLR-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-7913: --- Attachment: SOLR-7913_tag_7.5.0.patch > Add stream.body support to MLT QParser > -- > > Key: SOLR-7913 > URL: https://issues.apache.org/jira/browse/SOLR-7913 > Project: Solr > Issue Type: Improvement >Reporter: Anshum Gupta >Priority: Major > Attachments: SOLR-7913.patch, SOLR-7913.patch, SOLR-7913.patch, > SOLR-7913_fixTests.patch, SOLR-7913_tag_7.5.0.patch > > > Continuing from > https://issues.apache.org/jira/browse/SOLR-7639?focusedCommentId=14601011&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14601011. > It'd be good to have stream.body be supported by the mlt qparser. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12785) Add test for activation functions in NeuralNetworkModel
[ https://issues.apache.org/jira/browse/SOLR-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646707#comment-16646707 ] Kamuela Lau commented on SOLR-12785: If the additional activation functions proposed in SOLR-12780 are accepted, the test in this patch should be changed accordingly; conversely, if the tests (and the edited implementation of the default activation functions) are accepted here, the implementation of the activation functions in SOLR-12780 will have to change to match. > Add test for activation functions in NeuralNetworkModel > --- > > Key: SOLR-12785 > URL: https://issues.apache.org/jira/browse/SOLR-12785 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Reporter: Kamuela Lau >Priority: Minor > Attachments: SOLR-12785.patch, test-no-activation-change.txt > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
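The kind of value checks such an activation-function test would make can be sketched as follows; the sigmoid/ReLU/identity definitions below are the standard mathematical ones, written as a standalone sketch rather than the actual NeuralNetworkModel code:

```java
public class ActivationSketch {
  // Standard definitions, for illustration only.
  static double sigmoid(double x)  { return 1.0 / (1.0 + Math.exp(-x)); }
  static double relu(double x)     { return Math.max(0.0, x); }
  static double identity(double x) { return x; }

  public static void main(String[] args) {
    double eps = 1e-9;
    // A test would pin down well-known values like these, so a silent
    // change in the default activation implementation is caught.
    if (Math.abs(sigmoid(0.0) - 0.5) > eps) throw new AssertionError("sigmoid(0) should be 0.5");
    if (relu(-3.0) != 0.0 || relu(2.5) != 2.5) throw new AssertionError("relu");
    if (identity(1.25) != 1.25) throw new AssertionError("identity");
    System.out.println("activation checks passed");
  }
}
```

Pinning values this way is exactly why the two issues are coupled: either patch changing the functions breaks the other's expectations.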
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7563 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7563/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName Error Message: Could not find collection:second_collection Stack Trace: java.lang.AssertionError: Could not find collection:second_collection at __randomizedtesting.SeedInfo.seed([677B527A74F45491:30CA17C1B408AB80]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263) at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249) at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTe
[jira] [Created] (LUCENE-8529) Use the completion key to tiebreak completion suggestion
Jim Ferenczi created LUCENE-8529: Summary: Use the completion key to tiebreak completion suggestion Key: LUCENE-8529 URL: https://issues.apache.org/jira/browse/LUCENE-8529 Project: Lucene - Core Issue Type: Improvement Reporter: Jim Ferenczi Today the completion suggester uses the document id to tiebreak completion suggestions with the same score. Using the surface form of suggestions as the first tiebreaker would improve the stability of the sort. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
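The proposed ordering can be illustrated with a plain comparator: score first, then surface form, then doc id as the last resort. This is an illustrative sketch of the sort order being discussed, not the Lucene implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SuggestionSort {
  // Stand-in for a completion suggestion: surface form, score, doc id.
  static final class Suggestion {
    final String surface; final float score; final int docId;
    Suggestion(String surface, float score, int docId) {
      this.surface = surface; this.score = score; this.docId = docId;
    }
  }

  public static void main(String[] args) {
    List<Suggestion> hits = new ArrayList<>(List.of(
        new Suggestion("banana", 1.0f, 7),
        new Suggestion("apple", 1.0f, 3),
        new Suggestion("cherry", 2.0f, 9)));
    hits.sort(Comparator
        .comparingDouble((Suggestion s) -> -s.score)        // higher score first
        .thenComparing((Suggestion s) -> s.surface)          // surface form tiebreak
        .thenComparingInt((Suggestion s) -> s.docId));       // doc id as final tiebreak
    for (Suggestion s : hits) System.out.println(s.surface); // cherry, apple, banana
  }
}
```

With the surface form as the first tiebreaker, equal-score suggestions keep the same relative order regardless of how doc ids are assigned across segments.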
[GitHub] lucene-solr pull request #466: SOLR-12853 Add ability to set CreateNodeList....
Github user benedictb closed the pull request at: https://github.com/apache/lucene-solr/pull/466 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8921) Potential NPE in pivot facet
[ https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646667#comment-16646667 ] Isabelle Giguere commented on SOLR-8921: Solr 7.5.0 : Reproduced with a query on an alias and a text field, even if each collection in the alias responds without error individually. 'name' and 'author' are text fields, 'fileType' is a string field: - collection=de_alias&facet.field=author&facet.pivot=name = NPE - collection=lang_de&facet.field=author&facet.pivot=name = response OK - collection=emptyText&facet.field=author&facet.pivot=name = response OK - collection=de&facet.field=author&facet.pivot=fileType = response OK I'll try to find time to devise a unit test to illustrate. As an alternative to this patch on PivotFacetProcessor, org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could return DocSet.EMPTY if the input Query is null, but that would have repercussions everywhere. > Potential NPE in pivot facet > > > Key: SOLR-8921 > URL: https://issues.apache.org/jira/browse/SOLR-8921 > Project: Solr > Issue Type: Bug >Affects Versions: 5.4.1 >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8921.patch, SOLR-8921.patch, > SOLR-8921_tag_7.5.0.patch > > > For some queries distributed over multiple collections, I've hit a NPE when > SolrIndexSearcher tries to fetch results from cache. Basically, query > generated to compute pivot on document sub set is null, causing the NPE on > lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 > r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92) > at org.apache.solr.search.LFUCache.get(LFUCache.java:153) > at > org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940) > at > org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098) > at > org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclip
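The alternative the commenter mentions, returning an empty doc set when the query is null instead of letting the cache lookup throw, can be sketched as follows. The types here are simplified stand-ins, not the actual SolrIndexSearcher/DocSet API:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NullSafeDocSetLookup {
  static final Set<Integer> EMPTY_DOC_SET = Set.of();
  static final Map<String, Set<Integer>> cache = new ConcurrentHashMap<>();

  static Set<Integer> getPositiveDocSet(String query) {
    // Guard: a null key passed to ConcurrentHashMap.get would throw the
    // NPE seen in the stack trace above; return an empty set instead.
    if (query == null) {
      return EMPTY_DOC_SET;
    }
    // Placeholder "search": populate the cache on first lookup.
    return cache.computeIfAbsent(query, q -> Set.of(1, 2, 3));
  }

  public static void main(String[] args) {
    System.out.println(getPositiveDocSet(null).size());       // 0, no NPE
    System.out.println(getPositiveDocSet("title:foo").size()); // 3
  }
}
```

As the comment notes, a guard this low in the stack would affect every caller, which is why the patch targets PivotFacetProcessor instead.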
[jira] [Updated] (SOLR-8921) Potential NPE in pivot facet
[ https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-8921: --- Attachment: SOLR-8921_tag_7.5.0.patch > Potential NPE in pivot facet > > > Key: SOLR-8921 > URL: https://issues.apache.org/jira/browse/SOLR-8921 > Project: Solr > Issue Type: Bug >Affects Versions: 5.4.1 >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8921.patch, SOLR-8921.patch, > SOLR-8921_tag_7.5.0.patch > > > For some queries distributed over multiple collections, I've hit a NPE when > SolrIndexSearcher tries to fetch results from cache. Basically, query > generated to compute pivot on document sub set is null, causing the NPE on > lookup. > 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 > r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92) > at org.apache.solr.search.LFUCache.get(LFUCache.java:153) > at > org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940) > at > org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098) > at > org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073) > at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) -- This message 
was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8921) Potential NPE in pivot facet
[ https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646668#comment-16646668 ] Isabelle Giguere commented on SOLR-8921: SOLR-8921_tag_7.5.0.patch : same patch as before, on PivotFacetProcessor. Based on revision 61870, tag 7.5.0, latest release. > Potential NPE in pivot facet > > > Key: SOLR-8921 > URL: https://issues.apache.org/jira/browse/SOLR-8921 > Project: Solr > Issue Type: Bug >Affects Versions: 5.4.1 >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8921.patch, SOLR-8921.patch, > SOLR-8921_tag_7.5.0.patch > > > For some queries distributed over multiple collections, I've hit a NPE when > SolrIndexSearcher tries to fetch results from cache. Basically, query > generated to compute pivot on document sub set is null, causing the NPE on > lookup. > 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 > r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92) > at org.apache.solr.search.LFUCache.get(LFUCache.java:153) > at > org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940) > at > org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098) > at > org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273) > at > 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745)
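The NPE above originates in `ConcurrentHashMap.get`, which rejects null keys outright; a null pivot sub-set query reaching the cache lookup is enough to trigger it. A minimal self-contained sketch of the failure mode and the null guard a fix would need (stand-in class and method names, not Solr's actual cache code):

```java
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyGuard {
    // ConcurrentHashMap throws NullPointerException for null keys, so any
    // caller that can produce a null lookup key (here: the generated pivot
    // sub-set query) has to guard before hitting the cache.
    static Integer safeGet(ConcurrentHashMap<String, Integer> cache, String key) {
        if (key == null) {
            return null; // treat "no query" as a cache miss instead of an NPE
        }
        return cache.get(key);
    }

    // Demonstrates the unguarded lookup failing exactly as in the stack trace.
    static boolean unguardedThrows(ConcurrentHashMap<String, Integer> cache) {
        try {
            cache.get(null);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }
}
```

The patch attached to the issue presumably guards higher up (where the sub-set query is built), but the contract is the same: null must never reach the cache lookup.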
[GitHub] lucene-solr pull request #467: SOLR-12853 Add ability to set CreateNodeList....
GitHub user benedictb opened a pull request: https://github.com/apache/lucene-solr/pull/467 SOLR-12853 Add ability to set CreateNodeList.shuffle parameter in Create admin requests Addition of a simple getter and setter for a missing parameter in CollectionAdminRequest.Create You can merge this pull request into a Git repository by running: $ git pull https://github.com/benedictb/lucene-solr master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/467.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #467 commit 5fc4f2a22d21cc3ef478a9d94b3d0fecfbad95a5 Author: Benedict Becker Date: 2018-10-11T15:55:14Z SOLR-12853 Add ability to set CreateNodeList.shuffle parameter in Create admin requests ---
[jira] [Commented] (SOLR-8394) Luke handler doesn't support FilterLeafReader
[ https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646656#comment-16646656 ] Isabelle Giguere commented on SOLR-8394: SOLR-8394_tag_7.5.0.patch: Same patch, on revision 61870, tag 7.5.0, latest release. Simple test: http://localhost:8983/solr/all/admin/luke?wt=xml - without the patch: -1 (-1 is the default return value!) - fixed by the patch: 299034 > Luke handler doesn't support FilterLeafReader > - > > Key: SOLR-8394 > URL: https://issues.apache.org/jira/browse/SOLR-8394 > Project: Solr > Issue Type: Improvement >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8394.patch, SOLR-8394.patch, > SOLR-8394_tag_7.5.0.patch > > > When fetching index information, luke handler only looks at ramBytesUsed for > SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no > RAM usage is returned.
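The shape of the fix is to peel off filter wrappers before asking for RAM usage, rather than returning the -1 default for any leaf that is not literally a SegmentReader. A self-contained illustration with stand-in types (in Lucene these would be LeafReader, SegmentReader, and FilterLeafReader, which exposes the wrapped reader):

```java
public class UnwrapSketch {
    // Stand-in types for illustration only, not the Lucene API.
    interface Reader { default long ramBytesUsed() { return -1L; } }

    static class SegmentReader implements Reader {
        @Override public long ramBytesUsed() { return 299034L; }
    }

    static class FilterReader implements Reader {
        final Reader in; // the wrapped reader, as FilterLeafReader exposes it
        FilterReader(Reader in) { this.in = in; }
    }

    // Walk through any chain of filter wrappers to reach the underlying
    // reader that actually reports RAM usage.
    static Reader unwrap(Reader r) {
        while (r instanceof FilterReader) {
            r = ((FilterReader) r).in;
        }
        return r;
    }
}
```

Without the unwrap loop, the doubly wrapped reader below reports -1 (the "without the patch" value in the comment above); after unwrapping, the real number comes through.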
[jira] [Updated] (SOLR-8394) Luke handler doesn't support FilterLeafReader
[ https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-8394: --- Attachment: SOLR-8394_tag_7.5.0.patch > Luke handler doesn't support FilterLeafReader > - > > Key: SOLR-8394 > URL: https://issues.apache.org/jira/browse/SOLR-8394 > Project: Solr > Issue Type: Improvement >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8394.patch, SOLR-8394.patch, > SOLR-8394_tag_7.5.0.patch > > > When fetching index information, luke handler only looks at ramBytesUsed for > SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no > RAM usage is returned.
[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning
[ https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-8393: --- Attachment: SOLR-8393_tag_7.5.0.patch > Component for Solr resource usage planning > -- > > Key: SOLR-8393 > URL: https://issues.apache.org/jira/browse/SOLR-8393 > Project: Solr > Issue Type: Improvement >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, > SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, > SOLR-8393_tag_7.5.0.patch > > > One question that keeps coming back is how much disk and RAM do I need to run > Solr. The most common response is that it highly depends on your data. While > true, it makes for frustrated users trying to plan their deployments. > The idea I'm bringing is to create a new component that will attempt to > extrapolate resources needed in the future by looking at resources currently > used. By adding a parameter for the target number of documents, current > resources are adapted by a ratio relative to current number of documents.
[jira] [Commented] (SOLR-8393) Component for Solr resource usage planning
[ https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646648#comment-16646648 ] Isabelle Giguere commented on SOLR-8393: SOLR-8393_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest release > Component for Solr resource usage planning > -- > > Key: SOLR-8393 > URL: https://issues.apache.org/jira/browse/SOLR-8393 > Project: Solr > Issue Type: Improvement >Reporter: Steve Molloy >Priority: Major > Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, > SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, > SOLR-8393_tag_7.5.0.patch > > > One question that keeps coming back is how much disk and RAM do I need to run > Solr. The most common response is that it highly depends on your data. While > true, it makes for frustrated users trying to plan their deployments. > The idea I'm bringing is to create a new component that will attempt to > extrapolate resources needed in the future by looking at resources currently > used. By adding a parameter for the target number of documents, current > resources are adapted by a ratio relative to current number of documents.
[GitHub] lucene-solr pull request #466: SOLR-12853 Add ability to set CreateNodeList....
GitHub user benedictb opened a pull request: https://github.com/apache/lucene-solr/pull/466 SOLR-12853 Add ability to set CreateNodeList.shuffle parameter in Create collection requests Addition of a simple getter and setter for a missing parameter in CollectionAdminRequest.Create You can merge this pull request into a Git repository by running: $ git pull https://github.com/benedictb/lucene-solr branch_7x Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/466.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #466 commit b91dacee5f7fdea74b0aced43dae677ce6b1a4e2 Author: Benedict Becker Date: 2018-10-11T15:43:13Z SOLR-12853 Add ability to set CreateNodeList.shuffle parameter in Create collection requests ---
[jira] [Commented] (SOLR-7864) timeAllowed causing ClassCastException
[ https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646644#comment-16646644 ] Isabelle Giguere commented on SOLR-7864: The issue still occurs on 7.5.0 ! SOLR-7864_tag_7.5.0.patch : Patch on revision 61870, tag 7.5.0, latest release > timeAllowed causing ClassCastException > -- > > Key: SOLR-7864 > URL: https://issues.apache.org/jira/browse/SOLR-7864 > Project: Solr > Issue Type: Bug >Affects Versions: 5.2 >Reporter: Markus Jelsma >Priority: Major > Attachments: SOLR-7864.patch, SOLR-7864.patch, SOLR-7864_extra.patch, > SOLR-7864_tag_7.5.0.patch > > > If timeAllowed kicks in, following exception is thrown and user gets HTTP 500. > {code} > 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter [ > search] – null:java.lang.ClassCastException: > org.apache.solr.response.ResultContext cannot be cast to > org.apache.solr.common.SolrDocumentList > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) > {code}
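The ClassCastException occurs because, once timeAllowed truncates the request, the response section carries a ResultContext where the merging code expects a SolrDocumentList, and the hard cast blows up into an HTTP 500. An illustrative guard with stand-in classes (not Solr's actual types or handler code):

```java
public class CastGuard {
    // Stand-ins for the two response payload types named in the stack trace.
    static class ResultContext {}
    static class SolrDocumentList extends java.util.ArrayList<Object> {}

    // An instanceof check turns the unconditional cast into a recoverable
    // condition, so a partially processed (timeAllowed-truncated) response
    // can be skipped or reported instead of surfacing as an HTTP 500.
    static SolrDocumentList docsOrNull(Object responseValue) {
        return (responseValue instanceof SolrDocumentList)
                ? (SolrDocumentList) responseValue
                : null;
    }
}
```

How the real patch handles the non-list case (skip, partial results flag, error message) is a design choice; the sketch only shows the guard that prevents the exception.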
[jira] [Updated] (SOLR-7864) timeAllowed causing ClassCastException
[ https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-7864: --- Attachment: SOLR-7864_tag_7.5.0.patch > timeAllowed causing ClassCastException > -- > > Key: SOLR-7864 > URL: https://issues.apache.org/jira/browse/SOLR-7864 > Project: Solr > Issue Type: Bug >Affects Versions: 5.2 >Reporter: Markus Jelsma >Priority: Major > Attachments: SOLR-7864.patch, SOLR-7864.patch, SOLR-7864_extra.patch, > SOLR-7864_tag_7.5.0.patch > > > If timeAllowed kicks in, following exception is thrown and user gets HTTP 500. > {code} > 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter [ > search] – null:java.lang.ClassCastException: > org.apache.solr.response.ResultContext cannot be cast to > org.apache.solr.common.SolrDocumentList > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at > 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?
[ https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646641#comment-16646641 ] Isabelle Giguere commented on SOLR-7642: SOLR-7642_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest release > Should launching Solr in cloud mode using a ZooKeeper chroot create the > chroot znode if it doesn't exist? > - > > Key: SOLR-7642 > URL: https://issues.apache.org/jira/browse/SOLR-7642 > Project: Solr > Issue Type: Improvement >Reporter: Timothy Potter >Priority: Minor > Attachments: SOLR-7642.patch, SOLR-7642.patch, > SOLR-7642_tag_7.5.0.patch > > > If you launch Solr for the first time in cloud mode using a ZooKeeper > connection string that includes a chroot leads to the following > initialization error: > {code} > ERROR - 2015-06-05 17:15:50.410; [ ] org.apache.solr.common.SolrException; > null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified > in ZkHost but the znode doesn't exist. 
localhost:2181/lan > at > org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113) > at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339) > at > org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140) > at > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110) > at > org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138) > at > org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852) > at > org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) > at > org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349) > at > org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342) > at > org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) > at > org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505) > {code} > The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script > to create the chroot znode (bootstrap action does this). > I'm wondering if we shouldn't just create the znode if it doesn't exist? Or > is that some violation of using a chroot?
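The zkcli.sh work-around mentioned above can be run as a one-liner before starting Solr. A hedged sketch: `makepath` is the zkcli command that creates a znode path, the `/lan` chroot is taken from the error in this report, and the script location varies by Solr version and layout, so adjust host and path for your cluster:

```shell
# Create the chroot znode before starting Solr in cloud mode with -z localhost:2181/lan.
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd makepath /lan
```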
[jira] [Updated] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?
[ https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Isabelle Giguere updated SOLR-7642: --- Attachment: SOLR-7642_tag_7.5.0.patch > Should launching Solr in cloud mode using a ZooKeeper chroot create the > chroot znode if it doesn't exist? > - > > Key: SOLR-7642 > URL: https://issues.apache.org/jira/browse/SOLR-7642 > Project: Solr > Issue Type: Improvement >Reporter: Timothy Potter >Priority: Minor > Attachments: SOLR-7642.patch, SOLR-7642.patch, > SOLR-7642_tag_7.5.0.patch > > > If you launch Solr for the first time in cloud mode using a ZooKeeper > connection string that includes a chroot leads to the following > initialization error: > {code} > ERROR - 2015-06-05 17:15:50.410; [ ] org.apache.solr.common.SolrException; > null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified > in ZkHost but the znode doesn't exist. localhost:2181/lan > at > org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113) > at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339) > at > org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140) > at > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110) > at > org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138) > at > org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852) > at > org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298) > at > org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349) > at > org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342) > at > org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741) > at > org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505) > {code} > The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script > to create the chroot znode (bootstrap action 
does this). > I'm wondering if we shouldn't just create the znode if it doesn't exist? Or > is that some violation of using a chroot?
[jira] [Updated] (SOLR-12853) Add ability to set CreateNodeList.shuffle parameter in Create collection requests
[ https://issues.apache.org/jira/browse/SOLR-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated SOLR-12853: Summary: Add ability to set CreateNodeList.shuffle parameter in Create collection requests (was: SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create collection requests) > Add ability to set CreateNodeList.shuffle parameter in Create collection > requests > - > > Key: SOLR-12853 > URL: https://issues.apache.org/jira/browse/SOLR-12853 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Reporter: Benedict >Priority: Trivial > > SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create > collection requests, even though Solr's API supports this functionality. This > parameter is already supported in the Restore collection request, so the fix > is simple.
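The getter/setter pair the pull requests add can be sketched roughly as follows. This is a hypothetical stand-in class, not the actual SolrJ source: the parameter-map layout and the exact parameter name string ("createNodeSet.shuffle") are assumptions here, modeled on how other optional SolrJ request parameters are carried:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateRequestSketch {
    // Optional request parameters held until the request is serialized.
    private final Map<String, String> params = new HashMap<>();

    // The missing pair: expose the shuffle flag the same way the Restore
    // collection request already does.
    public CreateRequestSketch setCreateNodeSetShuffle(boolean shuffle) {
        params.put("createNodeSet.shuffle", Boolean.toString(shuffle));
        return this; // SolrJ request setters are conventionally fluent
    }

    public Boolean getCreateNodeSetShuffle() {
        String v = params.get("createNodeSet.shuffle");
        return v == null ? null : Boolean.valueOf(v); // null = not set
    }
}
```

Returning a boxed Boolean from the getter lets callers distinguish "never set" from an explicit false, which matters for a flag whose server-side default may differ.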
[jira] [Created] (SOLR-12853) SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create collection requests
Benedict created SOLR-12853: --- Summary: SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create collection requests Key: SOLR-12853 URL: https://issues.apache.org/jira/browse/SOLR-12853 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Benedict SolrJ lacks the ability to set the CreateNodeList.shuffle parameter in Create collection requests, even though Solr's API supports this functionality. This parameter is already supported in the Restore collection request, so the fix is simple.
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646604#comment-16646604 ] Dawid Weiss commented on SOLR-12852: Could be; I am not familiar with distributed mode, I think Koji wrote it (a long time ago). > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Assignee: Dawid Weiss >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no 
longer complains about missing classes, > instead i got this.
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646597#comment-16646597 ] Markus Jelsma commented on SOLR-12852: -- Thanks [~dweiss]. I continued fiddling around and set distrib=false, and it worked! So it seems, in my case, there is a problem with distributed mode? > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Assignee: Dawid Weiss >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > 
http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no longer complains about missing classes, > instead i got this.
[jira] [Commented] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646596#comment-16646596 ] Dawid Weiss commented on SOLR-12852: I'll take a look, thanks for reporting, Markus. > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Assignee: Dawid Weiss >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no longer complains about missing classes, > 
instead i got this.
[jira] [Assigned] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss reassigned SOLR-12852: -- Assignee: Dawid Weiss > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Assignee: Dawid Weiss >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no longer complains about missing classes, > instead i got this. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Allison updated SOLR-12423: --- Description: In Tika 1.19, there will be the ability to call the ForkParser and specify a directory of jars from which to load the classes for the Parser in the child processes. This will allow us to remove all of the parser dependencies from Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar in the child process’ bin directory and be done with the upgrade... no more fiddly dependency upgrades and threat of jar hell. The ForkParser also protects against ooms, infinite loops and jvm crashes. W00t! This issue covers the basic upgrading to 1.19.1. For the migration to the ForkParser, see SOLR-11721. was: In Tika 1.19, there will be the ability to call the ForkParser and specify a directory of jars from which to load the classes for the Parser in the child processes. This will allow us to remove all of the parser dependencies from Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar in the child process’ bin directory and be done with the upgrade... no more fiddly dependency upgrades and threat of jar hell. The ForkParser also protects against ooms, infinite loops and jvm crashes. W00t! > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... 
no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721.
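For readers unfamiliar with the isolation mechanism the message describes: Tika's classic ForkParser runs parsing in a separate child JVM, so a parser OOM or crash cannot take down the host process. A minimal sketch, assuming tika-core and a parser implementation (e.g. tika-parsers) are on the classpath; the input file path is hypothetical:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.fork.ForkParser;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

public class ForkParserSketch {
    public static void main(String[] args) throws Exception {
        // Fork a child JVM that loads parser classes from this classloader.
        ForkParser parser = new ForkParser(ForkParserSketch.class.getClassLoader(),
                                           new AutoDetectParser());
        try (InputStream in = Files.newInputStream(Paths.get("/tmp/sample.pdf"))) {
            BodyContentHandler handler = new BodyContentHandler();
            // Parsing happens in the child process; a crash there only fails this call.
            parser.parse(in, handler, new Metadata(), new ParseContext());
            System.out.println(handler.toString());
        } finally {
            parser.close(); // shut down the pooled child JVMs
        }
    }
}
```

The SOLR-12423 variant goes further: instead of sharing the host classloader, the child process loads parser classes from its own directory of jars (e.g. tika-app.jar in the child's bin directory), keeping those dependencies out of Solr entirely.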
[jira] [Updated] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Allison updated SOLR-12423: --- Summary: Upgrade to Tika 1.19.1 when available (was: Upgrade to Tika 1.19.1 when available and refactor to use the ForkParser) > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t!
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2895 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2895/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:35353_solr, 127.0.0.1:39651_solr, 127.0.0.1:44159_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:44857/solr";, "node_name":"127.0.0.1:44857_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:44857/solr";, "node_name":"127.0.0.1:44857_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:44159/solr";, "node_name":"127.0.0.1:44159_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:35353_solr, 127.0.0.1:39651_solr, 127.0.0.1:44159_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"https://127.0.0.1:44857/solr";, "node_name":"127.0.0.1:44857_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"https://127.0.0.1:44857/solr";, "node_name":"127.0.0.1:44857_solr", "state":"down", "type":"NRT"}, 
"core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"https://127.0.0.1:44159/solr";, "node_name":"127.0.0.1:44159_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([BC5C230FFF568C58:D64A42DF97A4C692]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:333) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:228) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomiz
[jira] [Updated] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Jelsma updated SOLR-12852: - Description: Got this exception: {code} o.a.s.s.HttpSolrCall null:java.lang.NullPointerException at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) {code} with this config (copied from reference guide) {code} lingo org.carrot2.clustering.lingo.LingoClusteringAlgorithm stc org.carrot2.clustering.stc.STCClusteringAlgorithm true true id title_nl content_nl 100 *,score clustering {code} using this query: http://localhost:8983/solr/collection/clustering?q=*:* All libraries are present, Solr no longer complains about missing classes, instead i got this. 
was: Got this exception: {code} o.a.s.s.HttpSolrCall null:java.lang.NullPointerException at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) {code} with this config (copied from reference guide) {code} lingo org.carrot2.clustering.lingo.LingoClusteringAlgorithm stc org.carrot2.clustering.stc.STCClusteringAlgorithm true true id doctitle content 100 *,score clustering {code} using this query: http://localhost:8983/solr/collection/clustering?q=*:* All libraries are present, Solr no longer complains about missing classes, instead i got this. > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > http://localhost:8983/solr/collection/clustering?q=*:* > All libraries are present, Solr no longer complai
[jira] [Updated] (SOLR-12852) NPE in ClusteringComponent
[ https://issues.apache.org/jira/browse/SOLR-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Jelsma updated SOLR-12852: - Description: Got this exception: {code} o.a.s.s.HttpSolrCall null:java.lang.NullPointerException at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) {code} with this config (copied from reference guide, changed title and snippet parameters) {code} lingo org.carrot2.clustering.lingo.LingoClusteringAlgorithm stc org.carrot2.clustering.stc.STCClusteringAlgorithm true true id title_nl content_nl 100 *,score clustering {code} using this query: http://localhost:8983/solr/collection/clustering?q=*:* All libraries are present, Solr no longer complains about missing classes, instead i got this. 
was: Got this exception: {code} o.a.s.s.HttpSolrCall null:java.lang.NullPointerException at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) {code} with this config (copied from reference guide) {code} lingo org.carrot2.clustering.lingo.LingoClusteringAlgorithm stc org.carrot2.clustering.stc.STCClusteringAlgorithm true true id title_nl content_nl 100 *,score clustering {code} using this query: http://localhost:8983/solr/collection/clustering?q=*:* All libraries are present, Solr no longer complains about missing classes, instead i got this. > NPE in ClusteringComponent > -- > > Key: SOLR-12852 > URL: https://issues.apache.org/jira/browse/SOLR-12852 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: contrib - Clustering >Affects Versions: 7.5 >Reporter: Markus Jelsma >Priority: Major > Fix For: master (8.0) > > > Got this exception: > {code} > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) > {code} > with this config (copied from reference guide, changed title and snippet > parameters) > {code} >class="solr.clustering.ClusteringComponent"> > > > lingo >name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm > > > > stc >name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm > > > > class="solr.SearchHandler"> > > true > true > > id > title_nl > content_nl > > 100 > *,score > > > > clustering > > > {code} > using this query: > http://localhost:8983/s
[JENKINS] Lucene-Solr-repro - Build # 1667 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1667/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1664/consoleText [repro] Revision: 971a0e3f4afddab4687642834037c52fef0c6758 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=LIROnShardRestartTest -Dtests.method=testAllReplicasInLIR -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=LIROnShardRestartTest -Dtests.method=testSeveralReplicasInLIR -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: c87778c50472ab81c6bfae7a5371f36a105544b3 [repro] git fetch [repro] git checkout 971a0e3f4afddab4687642834037c52fef0c6758 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] LIROnShardRestartTest [repro] ant compile-test [...truncated 3424 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.LIROnShardRestartTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 10446 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 5/5 failed: org.apache.solr.cloud.LIROnShardRestartTest [repro] Re-testing 100% failures at the tip of master [repro] git fetch [repro] git checkout master [...truncated 3 lines...] [repro] git merge --ff-only [...truncated 4 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] LIROnShardRestartTest [repro] ant compile-test [...truncated 3424 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.LIROnShardRestartTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 10625 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master: [repro] 5/5 failed: org.apache.solr.cloud.LIROnShardRestartTest [repro] Re-testing 100% failures at the tip of master without a seed [repro] ant clean [...truncated 8 lines...] 
[repro] Test suites by module: [repro]solr/core [repro] LIROnShardRestartTest [repro] ant compile-test [...truncated 3424 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.LIROnShardRestartTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 10373 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master without a seed: [repro] 5/5 failed: org.apache.solr.cloud.LIROnShardRestartTest [repro] git checkout c87778c50472ab81c6bfae7a5371f36a105544b3 [...truncated 8 lines...] [repro] Exiting with code 256 [...truncated 5 lines...]
[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 183 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/183/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.update.HdfsTransactionLog at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:177) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:161) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:116) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:119) at sun.reflect.GeneratedConstructorAccessor200.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:799) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:861) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1114) at org.apache.solr.core.SolrCore.(SolrCore.java:984) at org.apache.solr.core.SolrCore.(SolrCore.java:869) at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1138) at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:684) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.update.HdfsTransactionLog at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:177) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:161) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:116) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:119) at sun.reflect.GeneratedConstructorAccessor200.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:799) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:861) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1114) at org.apache.solr.core.SolrCore.(SolrCore.java:984) at org.apache.solr.core.SolrCore.(SolrCore.java:869) at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1138) at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:684) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([BE5155F3D629A536]:0) at 
org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:304) at sun.reflect.GeneratedMethodAccessor59.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(
[jira] [Created] (SOLR-12852) NPE in ClusteringComponent
Markus Jelsma created SOLR-12852: Summary: NPE in ClusteringComponent Key: SOLR-12852 URL: https://issues.apache.org/jira/browse/SOLR-12852 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: contrib - Clustering Affects Versions: 7.5 Reporter: Markus Jelsma Fix For: master (8.0) Got this exception: {code} o.a.s.s.HttpSolrCall null:java.lang.NullPointerException at org.apache.solr.handler.clustering.ClusteringComponent.process(ClusteringComponent.java:234) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) {code} with this config (copied from reference guide) {code} lingo org.carrot2.clustering.lingo.LingoClusteringAlgorithm stc org.carrot2.clustering.stc.STCClusteringAlgorithm true true id doctitle content 100 *,score clustering {code} using this query: http://localhost:8983/solr/collection/clustering?q=*:* All libraries are present, Solr no longer complains about missing classes, instead I got this.
Re: [DISCUSS] Moving from Ant build to Gradle
Hi Ryan, Do you have a wip patch? That would be helpful for others who want to continue from your work. The current ant build has tons of tasks, but we may only want to port the most important ones. On Thu, Oct 11, 2018 at 8:37 PM Ryan Ernst wrote: > There was an issue before ( > https://issues.apache.org/jira/browse/LUCENE-5755) that looked at > switching to some other build system. A few were discussed, but at the time > nobody had the time to do the work. I've investigated migrating to Gradle a > couple times in the past, but there is so much stuff in the ant build (and > the shadow maven build) that migration becomes a lot of work to switch > everything at once. The last time I looked, though, was about 1.5 years ago. > It is something I would like to pick back up, but still do not have the > time to invest personally. > > On Wed, Oct 10, 2018 at 3:53 PM Đạt Cao Mạnh > wrote: > >> Hi all, >> >> Recently I wanted to create another module in Solr to group all common >> dependencies of the Server and Solrj modules. It turns out that doing this kind of >> thing is very painful, including hacks and adding support for different IDEs >> and Maven. Should we consider moving to Gradle, which seems better and >> more standard nowadays? >> >> Thanks! >> Dat >>
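For context on what is being proposed: the shared-dependencies module Đạt describes is terse to express in Gradle. A minimal sketch, with entirely hypothetical module names and illustrative versions (these are not actual Solr build files):

```groovy
// settings.gradle — declare the multi-project layout
rootProject.name = 'solr'
include 'solr-common-deps', 'solr-server', 'solr-solrj'

// solr-common-deps/build.gradle — one module that groups shared dependencies
plugins { id 'java-library' }
dependencies {
    // 'api' re-exports these to every module that depends on this one
    api 'org.apache.httpcomponents:httpclient:4.5.6'
    api 'org.slf4j:slf4j-api:1.7.24'
}

// solr-server/build.gradle — pulls the shared dependencies transitively
plugins { id 'java' }
dependencies {
    implementation project(':solr-common-deps')
}
```

Gradle generates IDE metadata and Maven POMs from the same declarations, which is the pain point with hand-maintained ant/Ivy files that the thread is raising.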
[JENKINS] Lucene-Solr-repro - Build # 1666 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1666/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/343/consoleText [repro] Revision: ac11c9e5b17dc7f9abd151dfe0ee880374a38542 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=CdcrReplicationHandlerTest -Dtests.method=testReplicationWithBufferedUpdates -Dtests.seed=746BA5538DE908D4 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=cs-CZ -Dtests.timezone=Australia/Darwin -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=preferReplicaTypesTest -Dtests.seed=22FAD5837FEB937F -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=hi-IN -Dtests.timezone=Asia/Ust-Nera -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testParallelUpdateQTime -Dtests.seed=22FAD5837FEB937F -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=hi-IN -Dtests.timezone=Asia/Ust-Nera -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: c87778c50472ab81c6bfae7a5371f36a105544b3 [repro] git fetch [repro] git checkout ac11c9e5b17dc7f9abd151dfe0ee880374a38542 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] 
[repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] CloudSolrClientTest [repro]solr/core [repro] CdcrReplicationHandlerTest [repro] ant compile-test [...truncated 2573 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=22FAD5837FEB937F -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=hi-IN -Dtests.timezone=Asia/Ust-Nera -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 1028 lines...] [repro] Setting last failure code to 256 [repro] ant compile-test [...truncated 1352 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CdcrReplicationHandlerTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=746BA5538DE908D4 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=cs-CZ -Dtests.timezone=Australia/Darwin -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 88 lines...] [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest [repro] 1/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest [repro] git checkout c87778c50472ab81c6bfae7a5371f36a105544b3 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...]
[jira] [Updated] (SOLR-12851) Improvements and fixes to let and select Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-12851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12851: -- Attachment: SOLR-12851.patch > Improvements and fixes to let and select Streaming Expressions > -- > > Key: SOLR-12851 > URL: https://issues.apache.org/jira/browse/SOLR-12851 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: SOLR-12851.patch > > > This ticket fixes a few issues: > 1) It allows the *let* expression to properly parse numeric literal variables. > 2) It allows evaluators in the *select* expression to operate on fields that > were created before it in the same select. > These problems become apparent when working on regression modeling using Math > Expressions. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12851) Improvements and fixes to let and select Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-12851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-12851: -- Description: This ticket fixes a few issues: 1) It allows the *let* expression to properly parse numeric literal variables. 2) It allows evaluators in the *select* expression to operate on fields that were created before it in the same select. These problems become apparent when working on regression modeling using Math Expressions. was: This ticket fixes a few issues: 1) It allows the *let* expression to properly parse numeric literal variables. 2) It allows evaluators in the *select* expression to operate on fields that were created before it in the same select. > Improvements and fixes to let and select Streaming Expressions > -- > > Key: SOLR-12851 > URL: https://issues.apache.org/jira/browse/SOLR-12851 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > This ticket fixes a few issues: > 1) It allows the *let* expression to properly parse numeric literal variables. > 2) It allows evaluators in the *select* expression to operate on fields that > were created before it in the same select. > These problems become apparent when working on regression modeling using Math > Expressions. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12851) Improvements and fixes to let and select Streaming Expressions
Joel Bernstein created SOLR-12851: - Summary: Improvements and fixes to let and select Streaming Expressions Key: SOLR-12851 URL: https://issues.apache.org/jira/browse/SOLR-12851 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket fixes a few issues: 1) It allows the *let* expression to properly parse numeric literal variables. 2) It allows evaluators in the *select* expression to operate on fields that were created before it in the same select.
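For illustration, here are two hypothetical streaming expressions exercising the fixes described above: a let binding a numeric literal variable, and a select whose second evaluator reads a field created earlier in the same select. The syntax is sketched for illustration (collection and field names invented), not taken from the attached patch.

```
let(a=1.5,
    b=add(a, 2))

select(search(collection1, q="*:*", fl="id,price_d", sort="id asc"),
       mult(price_d, 2) as doubled,
       add(doubled, 1) as doubledPlusOne)
```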
[jira] [Assigned] (SOLR-12851) Improvements and fixes to let and select Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-12851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-12851: - Assignee: Joel Bernstein > Improvements and fixes to let and select Streaming Expressions > -- > > Key: SOLR-12851 > URL: https://issues.apache.org/jira/browse/SOLR-12851 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > This ticket fixes a few issues: > 1) It allows the *let* expression to properly parse numeric literal variables. > 2) It allows evaluators in the *select* expression to operate on fields that > were created before it in the same select.
[jira] [Commented] (LUCENE-7394) Make MemoryIndex immutable
[ https://issues.apache.org/jira/browse/LUCENE-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646482#comment-16646482 ] Tim Owen commented on LUCENE-7394: -- Related to this (although I am happy to raise a separate Jira as a bug report) is that if you mutate a MemoryIndex by calling addField after you've already done a search on the index, you can end up with a corrupt internal state (and an ArrayIndexOutOfBoundsException), e.g. call addField, then search, then addField again, then search. This appears to be because the sortedTerms internal state gets built when the first search happens, and isn't invalidated/null'd when the next addField happens. So the second search sees a state where sortedTerms and terms are out of sync, and fails. The documentation doesn't say this is a bad sequence of usage (or prevent it), so making it immutable with a Builder would fix that situation. Alternatively, calling search could implicitly call freeze, or addField could null out sortedTerms. > Make MemoryIndex immutable > -- > > Key: LUCENE-7394 > URL: https://issues.apache.org/jira/browse/LUCENE-7394 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Martijn van Groningen >Priority: Major > > The MemoryIndex itself should just be a builder that constructs an > IndexReader instance. The whole notion of freezing a memory index should be > removed. > While we change this we should also clean this class up. There are many > methods to add a field, we should just have a single method that accepts an > `IndexableField`. > The `keywordTokenStream(...)` method is unused and untested and should be > removed and it doesn't belong with the memory index. > The `setSimilarity(...)`, `createSearcher(...)` and `search(...)` methods > should be removed, because the MemoryIndex should just be responsible for > creating an IndexReader instance. 
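The failure sequence described above (addField, search, addField, search) can be sketched with a self-contained toy class. This is an illustration of the stale-cache pattern only, not Lucene's actual MemoryIndex internals; the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the reported failure mode: a sorted view built
// lazily on the first search is never invalidated, so a later addField
// leaves it out of sync with the backing terms list.
class StaleSortedView {
    private final List<String> terms = new ArrayList<>();
    private List<String> sortedTerms; // built lazily, never cleared (the bug)

    void addField(String term) {
        terms.add(term);
        // The suggested fix would be: sortedTerms = null; // invalidate cache
    }

    int search() {
        if (sortedTerms == null) {
            sortedTerms = new ArrayList<>(terms);
            sortedTerms.sort(null); // natural ordering
        }
        return sortedTerms.size(); // stale after addField-search-addField
    }

    int termCount() {
        return terms.size();
    }

    public static void main(String[] args) {
        StaleSortedView idx = new StaleSortedView();
        idx.addField("b");
        System.out.println(idx.search());   // first search builds the view
        idx.addField("a");
        // The cached view still has 1 entry while terms has 2:
        System.out.println(idx.search() + " vs " + idx.termCount());
    }
}
```

In the real class the mismatch surfaces as an ArrayIndexOutOfBoundsException rather than a size difference, but the root cause is the same un-invalidated cache.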
[JENKINS] Lucene-Solr-Tests-7.x - Build # 940 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/940/ 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testAboveSearchRate Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([FAA1F42FB1713797:CAA49E95B150D4B]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testAboveSearchRate(SearchRateTriggerIntegrationTest.java:270) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12425 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest [junit4] 2> Creating dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.SearchRateTriggerInt
[jira] [Commented] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)
[ https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646475#comment-16646475 ] Christine Poerschke commented on SOLR-12699: Hello [~slivotov] and [~eribeiro] - thank you for progressing this ticket here! bq. *... if `features` is a large collection this could impact the performance ...* Good point. The [ManagedModelStore|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/store/rest/ManagedModelStore.java#L236] instantiates the {{LTRScoringModel}} objects and since this would be basically once per the lifetime of the {{SolrCore}} that hopefully would be affordable performance wise. Or looking at it another way, unlike the {{hashCode()}} method calls the object construction does not happen on a per-query basis. Does that kind of make sense? bq. ... this.params = params != null ? Collections.unmodifiableMap(new HashMap<>(params)) ... Would suggest to use a {{LinkedHashMap}} so that {{this.params}} and the passed in {{params}} maps have the same ordering. The {{to...Map}} methods in [ManagedModelStore|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/store/rest/ManagedModelStore.java#L272] return a linked hash map and I vaguely recall that there was a reason for that, somehow. bq. ... Wouldn't be better to make it be `Lists.emptyList()` if `features` is null? Excuse me if I am missing something, but it's usually an anti-pattern to return null, but I am very well aware that the codebases in the wild usually don't follow this advice. ... Great question, I hadn't really noticed about this before. In practice `features` shouldn't ever actually be null and most (all?) use of `this.features` presumes non-null i.e. would throw a NullPointerException otherwise. 
In that context {{this.features}} being null if the passed in {{features}} was null is probably clearest i.e. a very apparent NPE pointing towards a code bug of some sort, whereas an empty list could be indicative either of a code bug (the 'things' configured by the user were not correctly recognised by the code) or of a misconfiguration issue (the user or tools used to generate the model did not add 'things' as they intended). Likewise for allFeatures, params and norms. > make LTRScoringModel immutable (to allow hashCode caching) > -- > > Key: SOLR-12699 > URL: https://issues.apache.org/jira/browse/SOLR-12699 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Reporter: Stanislav Livotov >Priority: Major > Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch > > > [~slivotov] wrote in SOLR-12688: > bq. ... LTRScoringModel was a mutable object. It was leading to the > calculation of hashcode on each query, which in turn can consume a lot of > time ... So I decided to make LTRScoringModel immutable and cache hashCode > calculation. ... > (Please see SOLR-12688 description for overall context and analysis results.)
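The pattern under discussion can be sketched in a few lines: immutable fields, a defensive {{LinkedHashMap}} copy that preserves the caller's ordering, null left as null so an NPE surfaces quickly, and the hashCode computed once at construction. The class and field names below are invented for illustration; this is not the actual LTRScoringModel code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Illustrative sketch: immutable model object with a cached hashCode.
class ImmutableModel {
    private final String name;
    private final List<String> features;
    private final Map<String, Object> params;
    private final int cachedHashCode; // computed once, at construction

    ImmutableModel(String name, List<String> features, Map<String, Object> params) {
        this.name = name;
        // Deliberately left null if the caller passed null (see discussion):
        // a fast NPE points at a code bug rather than masking it as "empty".
        this.features = features != null
                ? Collections.unmodifiableList(new ArrayList<>(features))
                : null;
        // LinkedHashMap keeps the caller's param ordering, as suggested above.
        this.params = params != null
                ? Collections.unmodifiableMap(new LinkedHashMap<>(params))
                : null;
        this.cachedHashCode = Objects.hash(name, this.features, this.params);
    }

    @Override
    public int hashCode() {
        return cachedHashCode; // no per-query recomputation
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ImmutableModel)) return false;
        ImmutableModel m = (ImmutableModel) o;
        return Objects.equals(name, m.name)
                && Objects.equals(features, m.features)
                && Objects.equals(params, m.params);
    }
}
```

Since construction happens roughly once per SolrCore lifetime while hashCode() runs per query, the one-time cost of the defensive copies is the cheap side of the trade.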
[jira] [Commented] (SOLR-11812) Remove backward compatibility of old LIR implementation in 8.0
[ https://issues.apache.org/jira/browse/SOLR-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16646445#comment-16646445 ] Steve Rowe commented on SOLR-11812: --- LIROnShardRestartTest is failing without a seed on master, and the first failing commit is {{a37a21397}} on this issue. E.g. from [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1664/]: {noformat} Checking out Revision 971a0e3f4afddab4687642834037c52fef0c6758 (refs/remotes/origin/master) [...] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=LIROnShardRestartTest -Dtests.method=testAllReplicasInLIR -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR119s J1 | LIROnShardRestartTest.testAllReplicasInLIR <<< [junit4]> Throwable #1: java.lang.IllegalArgumentException: Path must not end with / character [junit4]>at __randomizedtesting.SeedInfo.seed([8C817C4A81BE13BB:D619468CFF3E745C]:0) [junit4]>at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:58) [junit4]>at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1517) [junit4]>at org.apache.solr.common.cloud.SolrZkClient.lambda$getChildren$4(SolrZkClient.java:329) [junit4]>at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) [junit4]>at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:329) [junit4]>at org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:168) [junit4]>at java.lang.Thread.run(Thread.java:748) [...] 
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=LIROnShardRestartTest -Dtests.method=testSeveralReplicasInLIR -Dtests.seed=8C817C4A81BE13BB -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=zh-SG -Dtests.timezone=America/Nassau -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 0.27s J1 | LIROnShardRestartTest.testSeveralReplicasInLIR <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:36816/solr: Cannot create collection severalReplicasInLIR. Value of maxShardsPerNode is 1, and the number of nodes currently live or live and part of your createNodeSet is 2. This allows a maximum of 2 to be created. Value of numShards is 1, value of nrtReplicas is 3, value of tlogReplicas is 0 and value of pullReplicas is 0. This requires 3 shards to be created (higher than the allowed number) [junit4]>at __randomizedtesting.SeedInfo.seed([8C817C4A81BE13BB:ADBEB7EA1E45EE2]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) [junit4]>at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) [junit4]>at org.apache.solr.cloud.LIROnShardRestartTest.testSeveralReplicasInLIR(LIROnShardRestartTest.java:190) [junit4]>at java.lang.Thread.run(Thread.java:748) {noformat} > Remove backward compatibility of old LIR implementation in 8.0 > -- > > Key: SOLR-11812 > URL: https://issues.apache.org/jira/browse/SOLR-11812 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Blocker > Fix For:
Re: [DISCUSS] Moving from Ant build to Gradle
There was an issue before (https://issues.apache.org/jira/browse/LUCENE-5755) that looked at switching to some other build system. A few were discussed, but at the time nobody had the time to do the work. I've investigated migrating to Gradle a couple of times in the past, but there is so much stuff in the ant build (and the shadow maven build) that migration becomes a lot of work to switch everything at once. The last time I looked, though, was about 1.5 years ago. It is something I would like to pick back up, but still do not have the time to invest personally. On Wed, Oct 10, 2018 at 3:53 PM Đạt Cao Mạnh wrote: > Hi all, > > Recently I wanted to create another module in Solr to group all common > dependencies of the Server and Solrj modules. It seems that doing that kind of > thing is very painful, involving hacks and adding support for different IDEs > and Maven. Should we consider moving to Gradle, which seems better and more > standard nowadays? > > Thanks! > Dat >
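The dependency-grouping Đạt describes maps naturally onto Gradle's java-library plugin. A hypothetical sketch, with module names and dependency versions invented purely for illustration (not a migration plan):

```groovy
// settings.gradle (hypothetical module layout)
include 'solr:common-deps', 'solr:solrj', 'solr:server'

// solr/common-deps/build.gradle: declare shared third-party deps once
plugins { id 'java-library' }
dependencies {
    // 'api' exposes these transitively to every module depending on common-deps
    api 'org.apache.httpcomponents:httpclient:4.5.6'
    api 'org.apache.zookeeper:zookeeper:3.4.13'
}

// solr/solrj/build.gradle and solr/server/build.gradle then just need:
dependencies {
    implementation project(':solr:common-deps')
}
```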
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23010 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23010/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC 49 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest Error Message: 20 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) Thread[id=233, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) Thread[id=2493, name=TEST-StreamDecoratorTest.testParallelHavingStream-seed#[D7E050EE579045A7]-EventThread, state=WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)3) Thread[id=1400, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@12-ea/java.lang.Thread.run(Thread.java:835)4) Thread[id=227, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) Thread[id=2492, 
name=TEST-StreamDecoratorTest.testParallelHavingStream-seed#[D7E050EE579045A7]-SendThread(127.0.0.1:34801), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1054)6) Thread[id=2497, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@12-ea/java.lang.Thread.run(Thread.java:835)7) Thread[id=229, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D7E050EE579045A7]-EventThread, state=WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081) at java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)8) Thread[id=1415, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@12-ea/java.lang.Thread.run(Thread.java:835)9) Thread[id=1401, name=TEST-StreamDecoratorTest.testExecutorStream-seed#[D7E050EE579045A7]-SendThread(127.0.0.1:34801), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063) 
10) Thread[id=228, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D7E050EE579045A7]-SendThread(127.0.0.1:34801), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063) 11) Thread[id=234, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at java.base@12-ea/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.Id
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 855 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/855/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.solr.util.UtilsToolTest.testRelativePath Error Message: expected:<14> but was:<15> Stack Trace: java.lang.AssertionError: expected:<14> but was:<15> at __randomizedtesting.SeedInfo.seed([A1B2AD66D1B99514:552ECA59CE456425]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.util.UtilsToolTest.testRelativePath(UtilsToolTest.java:124) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at http://127.0.0.1:64429/solr/collecti
[jira] [Resolved] (LUCENE-8526) StandardTokenizer doesn't separate hangul characters from other non-CJK chars
[ https://issues.apache.org/jira/browse/LUCENE-8526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Ferenczi resolved LUCENE-8526. -- Resolution: Not A Bug I pushed the javadocs addition in master and 7x, thanks [~steve_rowe] > StandardTokenizer doesn't separate hangul characters from other non-CJK chars > - > > Key: LUCENE-8526 > URL: https://issues.apache.org/jira/browse/LUCENE-8526 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > > It was first reported here > https://github.com/elastic/elasticsearch/issues/34285. > I don't know if it's the expected behavior but the StandardTokenizer does not > split words > which are composed of a mix of non-CJK characters and Hangul syllables. For > instance "한국2018" or "한국abc" is kept as-is by this tokenizer and marked as an > alpha-numeric group. This breaks the CJKBigram token filter, which will not > build bigrams on such groups. The other CJK characters are correctly split > when they are mixed with other alphabets, so I'd expect the same for Hangul.
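The mixed-script grouping the report describes can be observed with a stdlib check. This only inspects Unicode script properties to show that strings like "한국2018" and "한국abc" mix Hangul with digit/Latin characters; it does not reproduce StandardTokenizer's segmentation logic, and the class name is invented.

```java
// Detects whether a string mixes Hangul with non-Hangul letters or digits,
// using only java.lang.Character's Unicode script data.
class ScriptMix {
    static boolean mixesHangulWithNonCjk(String s) {
        boolean hangul = false, other = false;
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            Character.UnicodeScript sc = Character.UnicodeScript.of(cp);
            if (sc == Character.UnicodeScript.HANGUL) hangul = true;
            else if (sc != Character.UnicodeScript.COMMON) other = true; // e.g. Latin
            else if (Character.isDigit(cp)) other = true; // digits are script COMMON
            i += Character.charCount(cp);
        }
        return hangul && other;
    }

    public static void main(String[] args) {
        System.out.println(mixesHangulWithNonCjk("한국2018")); // true
        System.out.println(mixesHangulWithNonCjk("한국"));     // false
    }
}
```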