[jira] [Commented] (LUCENE-6119) Add auto-io-throttle to ConcurrentMergeScheduler
[ https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267490#comment-14267490 ]

ASF subversion and git services commented on LUCENE-6119:
Commit 1650025 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1650025 ]
LUCENE-6119: fix just arrived merge to throttle correctly

Add auto-io-throttle to ConcurrentMergeScheduler

Key: LUCENE-6119
URL: https://issues.apache.org/jira/browse/LUCENE-6119
Project: Lucene - Core
Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
Fix For: 5.0, Trunk
Attachments: LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch

This method returns the number of incoming bytes IW has written since it was opened, excluding merging. It tracks flushed segments, new commits (segments_N), files/segments incoming via addIndexes, and newly written live-docs / doc-values update files. It's an easy statistic for IW to track and should help applications set smarter defaults for IO throttling (RateLimiter). For example, an application that does hardly any indexing but finally triggers a large merge can afford to throttle that merge heavily so it won't interfere with ongoing searches. But an application causing IW to write new bytes at 50 MB/sec must set a correspondingly higher IO throttle, otherwise merges will clearly fall behind.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
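The sizing logic the description suggests can be sketched in plain Java. This is a minimal illustration of the idea only, not Lucene's actual auto-throttle: the class and method names (IncomingRateTracker, suggestedMergeMBPerSec), the 1.5x headroom multiplier, and the 5 MB/sec floor are all hypothetical example values.

```java
// Minimal sketch of the idea above: track incoming (non-merge) bytes and
// derive a merge IO cap from the observed write rate. Illustrative only;
// not Lucene's API. The multiplier and floor are arbitrary example values.
public class IncomingRateTracker {
    private long bytesWritten;

    /** Called whenever IW writes non-merge bytes (flush, commit, addIndexes). */
    public void onBytesWritten(long bytes) {
        bytesWritten += bytes;
    }

    /** Incoming write rate in MB/sec over the given window. */
    public double mbPerSec(double elapsedSec) {
        return bytesWritten / (1024.0 * 1024.0) / Math.max(elapsedSec, 1e-9);
    }

    /**
     * Suggested merge throttle: somewhat above the incoming rate so merges
     * keep up, with a low floor so an idle writer throttles merges heavily.
     */
    public double suggestedMergeMBPerSec(double elapsedSec) {
        return Math.max(5.0, 1.5 * mbPerSec(elapsedSec));
    }
}
```

Under these example constants, an application indexing at 50 MB/sec would get a 75 MB/sec merge cap, while a near-idle application falls back to the 5 MB/sec floor, so a rare large merge barely touches the disk.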
[jira] [Commented] (LUCENE-6119) Add auto-io-throttle to ConcurrentMergeScheduler
[ https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267493#comment-14267493 ]

ASF subversion and git services commented on LUCENE-6119:
Commit 1650027 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650027 ]
LUCENE-6119: fix just arrived merge to throttle correctly
[jira] [Commented] (LUCENE-6119) Add auto-io-throttle to ConcurrentMergeScheduler
[ https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267491#comment-14267491 ]

ASF subversion and git services commented on LUCENE-6119:
Commit 1650026 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650026 ]
LUCENE-6119: fix just arrived merge to throttle correctly
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2440 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2440/ 4 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.testDistribSearch Error Message: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:54821/c8n_1x2_shard1_replica1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:54821/c8n_1x2_shard1_replica1 at __randomizedtesting.SeedInfo.seed([CFB651FA1308A4E4:4E50DFE26457C4D8]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736) at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480) at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201) at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader
[ https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267447#comment-14267447 ]

Adrien Grand commented on LUCENE-6165:
+1, I like the patch.

Change merging APIs to work on CodecReader instead of LeafReader

Key: LUCENE-6165
URL: https://issues.apache.org/jira/browse/LUCENE-6165
Project: Lucene - Core
Issue Type: Bug
Reporter: Robert Muir
Attachments: LUCENE-6165.patch

The patch factors out a reader based on codec APIs and changes all merge policy / addIndexes APIs to use it. If you want slow wrapping, you can still do it; just call SlowCodecReaderWrapper.wrap(LeafReader) yourself (versus SegmentMerger always doing it when the reader is not a SegmentReader). It also adds FilterCodecReader, to make it easier to start filtering efficiently on merge. I cut over all the index splitters to this, so they should be much faster with this patch: they just change the deletes as you'd expect, and the merge is as optimal as a normal one. In other places, for now I think we should just do a rote conversion with SlowCodecReaderWrapper.wrap. It's no slower than today, just explicit, and we can incrementally fix them to do the right thing rather than all at once.
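The FilterCodecReader approach described in the issue reduces to a plain decorator pattern: wrap a reader, delegate everything, and override only what the merge needs to change (here, which docs count as live). The Reader and FilteredReader types below are illustrative stand-ins, not Lucene's actual CodecReader API.

```java
// Decorator sketch of the FilterCodecReader idea. Illustrative stand-ins
// only; Lucene's real CodecReader exposes codec-level producers instead.
interface Reader {
    int maxDoc();
    boolean isLive(int docId);
}

class FilteredReader implements Reader {
    private final Reader in;
    private final java.util.Set<Integer> extraDeletes;

    FilteredReader(Reader in, java.util.Set<Integer> extraDeletes) {
        this.in = in;
        this.extraDeletes = extraDeletes;
    }

    @Override public int maxDoc() { return in.maxDoc(); }

    // Only liveness changes; postings, stored fields, etc. pass through
    // untouched, which is why a filtered merge stays as fast as a normal one.
    @Override public boolean isLive(int docId) {
        return in.isLive(docId) && !extraDeletes.contains(docId);
    }
}
```

This is why an index splitter built on such a wrapper only flips deletes: every other part of the merge reads straight from the wrapped segment.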
[jira] [Commented] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader
[ https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267449#comment-14267449 ]

Michael McCandless commented on LUCENE-6165:
+1
[jira] [Commented] (LUCENE-6149) Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup
[ https://issues.apache.org/jira/browse/LUCENE-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267451#comment-14267451 ]

Boon Low commented on LUCENE-6149:
Thanks Tomás, good to see the patch making it into trunk and branch_5x. I shall find some time soon to update and post the v4.10.3 patch to include your changes.

Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup

Key: LUCENE-6149
URL: https://issues.apache.org/jira/browse/LUCENE-6149
Project: Lucene - Core
Issue Type: Improvement
Components: modules/other
Affects Versions: 4.9, 4.10.1, 4.10.2, 4.10.3
Reporter: Boon Low
Assignee: Tomás Fernández Löbbe
Priority: Minor
Labels: suggester
Fix For: 5.0, Trunk
Attachments: LUCENE-6149.patch, LUCENE-6149.patch, LUCENE-6149.patch, LUCENE-6149.patch

Highlighting and allTermsRequired are hardwired in _AnalyzingInfixSuggester_ for non-contextual lookup (via _Lookup_); see the two *true* literals below:

{code:title=AnalyzingInfixSuggester.java (extends Lookup.java)}
public List<LookupResult> lookup(CharSequence key, Set<BytesRef> contexts, boolean onlyMorePopular, int num) throws IOException {
  return lookup(key, contexts, num, true, true);
}

/** Lookup, without any context. */
public List<LookupResult> lookup(CharSequence key, int num, boolean allTermsRequired, boolean doHighlight) throws IOException {
  return lookup(key, null, num, allTermsRequired, doHighlight);
}
{code}

{code:title=Lookup.java}
public List<LookupResult> lookup(CharSequence key, boolean onlyMorePopular, int num) throws IOException {
  return lookup(key, null, onlyMorePopular, num);
}
{code}

The above means the majority of current infix suggester lookups always return highlighted results with allTermsRequired in effect. There is no way to change this despite the options and improvements of LUCENE-6050, made to incorporate Boolean lookup clauses (MUST/SHOULD). This shortcoming has also been reported in SOLR-6648. The suggesters (AnalyzingInfixSuggester, BlendedInfixSuggester) should provide a proper mechanism to set defaults for highlighting and allTermsRequired, e.g. in constructors (and in Solr factories, thus configurable via solrconfig.xml).
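The fix the issue asks for amounts to moving the two hardwired literals into instance state set at construction time. A minimal sketch with hypothetical names (ConfigurableSuggester is not the actual Lucene class, and the lookup body is a stub that just echoes the effective flags so the delegation is visible):

```java
// Illustrative only: defaults injected via constructor instead of hardwired
// true/true literals. Not the real AnalyzingInfixSuggester API.
class ConfigurableSuggester {
    private final boolean allTermsRequiredDefault;
    private final boolean doHighlightDefault;

    ConfigurableSuggester(boolean allTermsRequired, boolean doHighlight) {
        this.allTermsRequiredDefault = allTermsRequired;
        this.doHighlightDefault = doHighlight;
    }

    // The convenience overload delegates with the configured defaults
    // rather than the literal (true, true) shown above.
    java.util.List<String> lookup(String key, int num) {
        return lookup(key, num, allTermsRequiredDefault, doHighlightDefault);
    }

    java.util.List<String> lookup(String key, int num,
                                  boolean allTermsRequired, boolean doHighlight) {
        // A real implementation would build the Boolean query here; this
        // stub just reports which flags took effect.
        return java.util.List.of(key,
                "allTermsRequired=" + allTermsRequired,
                "doHighlight=" + doHighlight);
    }
}
```

A Solr factory could then read the two flags from solrconfig.xml and pass them to this constructor, which is exactly the configurability the issue requests.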
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11862 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11862/ Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: true) 3 tests failed. FAILED: org.apache.solr.client.solrj.SolrSchemalessExampleTest.testCommitWithinOnAdd Error Message: Error from server at http://127.0.0.1:56934/solr/collection1: undefined field: price Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:56934/solr/collection1: undefined field: price at __randomizedtesting.SeedInfo.seed([920662D75354D2DB:8E8948BB22F632D0]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) at org.apache.solr.client.solrj.SolrExampleTestsBase.testCommitWithinOnAdd(SolrExampleTestsBase.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
[jira] [Commented] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException
[ https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267403#comment-14267403 ]

Varun Thacker commented on SOLR-6640:
Shouldn't the logic be a try-finally block instead of try-with-resources?

{code}
try {
  IndexWriter indexWriter = writer.get();
  int c = 0;
  indexWriter.deleteUnusedFiles();
  while (hasUnusedFiles(indexDir, commit)) {
    indexWriter.deleteUnusedFiles();
    LOG.info("Sleeping for 1000ms to wait for unused lucene index files to be delete-able");
    Thread.sleep(1000);
    c++;
    if (c >= 30) {
      LOG.warn("SnapPuller unable to cleanup unused lucene index files so we must do a full copy instead");
      isFullCopyNeeded = true;
      break;
    }
  }
  if (c > 0) {
    LOG.info("SnapPuller slept for " + (c * 1000) + "ms for unused lucene index files to be delete-able");
  }
} finally {
  if (writer != null) {
    writer.decref();
  }
}
{code}

ChaosMonkeySafeLeaderTest failure with CorruptIndexException

Key: SOLR-6640
URL: https://issues.apache.org/jira/browse/SOLR-6640
Project: Solr
Issue Type: Bug
Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Fix For: 5.0
Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640_new_index_dir.patch

Test failure found on jenkins: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/

{code}
1 tests failed.
REGRESSION: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
Error Message: shard2 is not consistent. Got 62 from http://127.0.0.1:57436/collection1lastClient and got 24 from http://127.0.0.1:53065/collection1
Stack Trace: java.lang.AssertionError: shard2 is not consistent. Got 62 from http://127.0.0.1:57436/collection1lastClient and got 24 from http://127.0.0.1:53065/collection1
  at __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
  at org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
  at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
{code}

Cause of inconsistency is:

{code}
Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
  at org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
  at org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
  at org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
  at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:102)
{code}
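The distinction Varun is drawing can be shown with a toy ref-counted handle: try-with-resources would call close() on the shared writer, tearing it down for every user, while try-finally lets this caller release only its own reference via decref(). RefCounted below is a simplified stand-in, not Solr's actual implementation.

```java
// Toy ref-counted handle illustrating why the cleanup above uses
// try-finally + decref() rather than try-with-resources: close() would shut
// the shared IndexWriter down for everyone; decref() drops one reference.
// Simplified stand-in; not Solr's actual RefCounted class.
final class RefCounted<T> {
    private final T resource;
    private int refs = 1;        // the owner holds the initial reference
    private boolean closed;

    RefCounted(T resource) { this.resource = resource; }

    synchronized T get() {       // borrow: bump the count, hand out resource
        refs++;
        return resource;
    }

    synchronized void decref() { // release one reference; close on the last
        if (--refs == 0) {
            closed = true;       // real code would close the resource here
        }
    }

    synchronized boolean isClosed() { return closed; }
}
```

With this shape, the SnapPuller-style block above can sleep and retry as long as it likes; when its finally clause runs, only its borrowed reference is released and the writer stays open for other threads.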
[jira] [Commented] (SOLR-6920) During replication use checksums to verify if files are the same
[ https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267406#comment-14267406 ] Varun Thacker commented on SOLR-6920: - When using a try-finally block code in SOLR-6440 the threads don't hang anymore. The test still fails because of errors like this - {code} 391305 T11 C196 P59014 oasu.SolrIndexWriter.close ERROR Error closing IndexWriter java.lang.AssertionError: file _2_2.liv does not exist; files=[segments_2, _1.cfs, _3.si, _2.cfe, _1.si, _1.cfe, _3.cfe, _0.cfs, _0.cfe, _2.si, _0.si, _3.cfs, _2.cfs] at org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4232) at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4303) at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2785) at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2888) at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:965) at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1008) at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:129) at org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:804) at org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:68) at org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:359) at org.apache.solr.update.SolrCoreState.decrefSolrCoreState(SolrCoreState.java:72) at org.apache.solr.core.SolrCore.close(SolrCore.java:1110) at org.apache.solr.core.SolrCores.close(SolrCores.java:117) at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:366) at org.apache.solr.servlet.SolrDispatchFilter.destroy(SolrDispatchFilter.java:194) at org.apache.solr.cloud.ChaosMonkey.stopJettySolrRunner(ChaosMonkey.java:197) at org.apache.solr.cloud.ChaosMonkey.stop(ChaosMonkey.java:550) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.destroyServers(AbstractFullDistribZkTestBase.java:1568) at 
org.apache.solr.BaseDistributedSearchTestCase.tearDown(BaseDistributedSearchTestCase.java:283) at org.apache.solr.cloud.AbstractDistribZkTestBase.tearDown(AbstractDistribZkTestBase.java:231) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.tearDown(AbstractFullDistribZkTestBase.java:1546) at org.apache.solr.cloud.BasicDistributedZkTest.tearDown(BasicDistributedZkTest.java:1166) at org.apache.solr.cloud.ShardSplitTest.tearDown(ShardSplitTest.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:885) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader
[ https://issues.apache.org/jira/browse/LUCENE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir updated LUCENE-6165:
Attachment: LUCENE-6165.patch
patch. all tests pass.
[jira] [Created] (LUCENE-6165) Change merging APIs to work on CodecReader instead of LeafReader
Robert Muir created LUCENE-6165:

Summary: Change merging APIs to work on CodecReader instead of LeafReader
Key: LUCENE-6165
URL: https://issues.apache.org/jira/browse/LUCENE-6165
Project: Lucene - Core
Issue Type: Bug
Reporter: Robert Muir
[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #812: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/812/ No tests ran. Build Log: [...truncated 40201 lines...]
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_20) - Build # 11694 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11694/ Java: 32bit/jdk1.8.0_20 -server -XX:+UseSerialGC (asserts: false) 1 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch Error Message: There were too many update fails - we expect it can happen, but shouldn't easily Stack Trace: java.lang.AssertionError: There were too many update fails - we expect it can happen, but shouldn't easily at __randomizedtesting.SeedInfo.seed([1CC77FA7A05FE0DB:9D21F1BFD70080E7]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:222) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-1782) stats.facet assumes FieldCache.StringIndex - fails horribly on multivalued fields
[ https://issues.apache.org/jira/browse/SOLR-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267497#comment-14267497 ] Elran Dvir commented on SOLR-1782: -- Hi Patanachai, I am using your patch for stats.facet for multivalued fields above Solr 4.8. It works perfectly in most cases, but I found a case in which it doesn't work: when the field we facet on is a numeric field that is not multivalued. The code fails on: if (topLevelSortedValues == null) { topLevelSortedValues = FieldCache.DEFAULT.getTermsIndex(topLevelReader, name); and this is the exception I get: (SolrException.java:120) - null:java.lang.IllegalStateException: Type mismatch: time was indexed as NUMERIC at org.apache.lucene.search.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:1161) at org.apache.lucene.search.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:1145) at org.apache.solr.handler.component.FieldFacetStats.facetTermNum(FieldFacetStats.java:152) at org.apache.solr.request.UnInvertedField.getStats(UnInvertedField.java:587) at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:514) at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:64) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1953) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:804) So which field cache should I be using for a numeric field? Thanks.
stats.facet assumes FieldCache.StringIndex - fails horribly on multivalued fields - Key: SOLR-1782 URL: https://issues.apache.org/jira/browse/SOLR-1782 Project: Solr Issue Type: Bug Components: search Affects Versions: 1.4 Environment: reproduced on Win2k3 using 1.5.0-dev solr ($Id: CHANGES.txt 906924 2010-02-05 12:43:11Z noble $) Reporter: Gerald DeConto Assignee: Hoss Man Attachments: SOLR-1782.2.patch, SOLR-1782.2013-01-07.patch, SOLR-1782.2013-04-10.patch,
[jira] [Commented] (SOLR-6787) API to manage blobs in Solr
[ https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267504#comment-14267504 ] ASF subversion and git services commented on SOLR-6787: --- Commit 1650030 from [~noble.paul] in branch 'dev/trunk' [ https://svn.apache.org/r1650030 ] SOLR-6787 hardening tests API to manage blobs in Solr Key: SOLR-6787 URL: https://issues.apache.org/jira/browse/SOLR-6787 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, Trunk Attachments: SOLR-6787.patch, SOLR-6787.patch A special collection called .system needs to be created by the user to store/manage blobs. The schema/solrconfig of that collection need to be automatically supplied by the system so that there are no errors. APIs need to be created to manage the content of that collection:
{code}
# create your .system collection first
http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
# The config for this collection is automatically created. numShards for this collection is hardcoded to 1

# create a new jar or add a new version of a jar
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point would give a list of jars and other details
curl http://localhost:8983/solr/.system/blob

# GET on the end point with jar name would give details of various versions of the available jars
curl http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point with jar name and version with a wt=filestream to get the actual file
curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar

# GET on the end point with jar name and wt=filestream to get the latest version of the file
curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
{code}
Please note that the jars are never deleted.
A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
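For illustration only: the curl calls above map onto plain HTTP requests, so they can be scripted from any client. This is a minimal, hypothetical Python sketch that builds (but does not send) the same requests; the host, port, and jar name are taken from the example above, everything else is an assumption, not part of the patch:

```python
import urllib.request

SOLR = "http://localhost:8983/solr"  # assumed local Solr from the example above


def blob_upload_request(name: str, jar_bytes: bytes) -> urllib.request.Request:
    """Build (but do not send) the POST that stores a jar in the .system blob store."""
    return urllib.request.Request(
        url=f"{SOLR}/.system/blob/{name}",
        data=jar_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )


def blob_download_url(name: str, version=None) -> str:
    """URL that streams a stored jar back; omit version to get the latest one."""
    path = f"{SOLR}/.system/blob/{name}"
    if version is not None:
        path = f"{path}/{version}"
    return f"{path}?wt=filestream"


req = blob_upload_request("mycomponent", b"fake-jar-bytes")
print(req.get_method(), req.full_url)
print(blob_download_url("mycomponent", version=1))
```

Passing `req` to `urllib.request.urlopen` would perform the actual upload, assuming a running .system collection created as described above.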
[jira] [Commented] (SOLR-6787) API to manage blobs in Solr
[ https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267507#comment-14267507 ] ASF subversion and git services commented on SOLR-6787: --- Commit 1650032 from [~noble.paul] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650032 ] SOLR-6787 hardening tests API to manage blobs in Solr Key: SOLR-6787 URL: https://issues.apache.org/jira/browse/SOLR-6787 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, Trunk Attachments: SOLR-6787.patch, SOLR-6787.patch A special collection called .system needs to be created by the user to store/manage blobs. The schema/solrconfig of that collection need to be automatically supplied by the system so that there are no errors. APIs need to be created to manage the content of that collection:
{code}
# create your .system collection first
http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
# The config for this collection is automatically created. numShards for this collection is hardcoded to 1

# create a new jar or add a new version of a jar
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point would give a list of jars and other details
curl http://localhost:8983/solr/.system/blob

# GET on the end point with jar name would give details of various versions of the available jars
curl http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point with jar name and version with a wt=filestream to get the actual file
curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar

# GET on the end point with jar name and wt=filestream to get the latest version of the file
curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
{code}
Please note that the jars are never deleted.
A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.
[jira] [Created] (SOLR-6921) Stats.field fails on multivalue fields which are doc valued
Elran Dvir created SOLR-6921: Summary: Stats.field fails on multivalue fields which are doc valued Key: SOLR-6921 URL: https://issues.apache.org/jira/browse/SOLR-6921 Project: Solr Issue Type: Bug Affects Versions: 4.8 Reporter: Elran Dvir I am using stats.field on a field with the following definition in schema: <field name="myField" type="string" indexed="true" stored="false" multiValued="true" docValues="true"/> I get the following exception: org.apache.solr.common.SolrException: Type mismatch: myField was indexed as SORTED_SET at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193) at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:703) at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:513) at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:64) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1953) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:804)
[jira] [Created] (SOLR-6922) Spatial example in Solr
Ishan Chattopadhyaya created SOLR-6922: -- Summary: Spatial example in Solr Key: SOLR-6922 URL: https://issues.apache.org/jira/browse/SOLR-6922 Project: Solr Issue Type: Bug Reporter: Ishan Chattopadhyaya Fix For: 5.0 I was going through examples, and realized that spatial capabilities aren't exposed via examples very well. Currently, the examples (techproducts or films) don't even have an RPT field in the schema. There's a nice geonames dataset (geonames.org) that could be ingested into a collection and powered off a spatial field as an example (similar to films). There could be other datasets as well. The benefit would be ease of use to users who could try out the spatial queries right off the back of an install without having to ingest their own data in order to try out all that lucene/solr spatial provides.
[jira] [Created] (SOLR-6923) kill -9 doesn't change the replica state in clusterstate.json
Varun Thacker created SOLR-6923: --- Summary: kill -9 doesn't change the replica state in clusterstate.json Key: SOLR-6923 URL: https://issues.apache.org/jira/browse/SOLR-6923 Project: Solr Issue Type: Bug Reporter: Varun Thacker
- I did the following
{code}
./solr start -e cloud -noprompt
kill -9 pid-of-node2  // Not the node which is running ZK
{code}
- /live_nodes reflects that the node is gone.
- This is the only message which gets logged on the node1 server after killing node2
{code}
45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN org.apache.zookeeper.server.NIOServerCnxn – caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x14ac40f26660001, likely client has closed socket
 at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
 at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
 at java.lang.Thread.run(Thread.java:745)
{code}
- The graph shows the node2 as 'Gone' state
- clusterstate.json keeps showing the replica as 'active'
{code}
{"collection1":{
  "shards":{"shard1":{
    "range":"8000-7fff",
    "state":"active",
    "replicas":{
      "core_node1":{
        "state":"active",
        "core":"collection1",
        "node_name":"169.254.113.194:8983_solr",
        "base_url":"http://169.254.113.194:8983/solr",
        "leader":"true"},
      "core_node2":{
        "state":"active",
        "core":"collection1",
        "node_name":"169.254.113.194:8984_solr",
        "base_url":"http://169.254.113.194:8984/solr"}}}},
  "maxShardsPerNode":"1",
  "router":{"name":"compositeId"},
  "replicationFactor":"1",
  "autoAddReplicas":"false",
  "autoCreated":"true"}}
{code}
One immediate problem I can see is that AutoAddReplicas doesn't work since the clusterstate.json never changes. There might be more features which are affected by this. On first thought I think we can handle this - the shard leader could listen to changes on /live_nodes and if it has replicas that were on that node, mark it as 'down' in the clusterstate.json?
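The suggestion at the end of SOLR-6923 above — watch /live_nodes and flag replicas of a vanished node — can be modeled as a pure function over the clusterstate structure. The following is a hypothetical sketch of just that bookkeeping, not actual SolrCloud code; the dict shape mirrors the clusterstate.json shown above:

```python
import copy


def mark_dead_replicas_down(clusterstate: dict, live_nodes: set) -> dict:
    """Return a copy of the clusterstate in which every replica whose
    node_name is no longer present in /live_nodes is marked 'down'."""
    state = copy.deepcopy(clusterstate)
    for collection in state.values():
        for shard in collection.get("shards", {}).values():
            for replica in shard.get("replicas", {}).values():
                if replica.get("node_name") not in live_nodes:
                    replica["state"] = "down"
    return state


cs = {"collection1": {"shards": {"shard1": {"replicas": {
    "core_node1": {"state": "active", "node_name": "169.254.113.194:8983_solr"},
    "core_node2": {"state": "active", "node_name": "169.254.113.194:8984_solr"},
}}}}}
# node2 was killed with kill -9, so only node1 remains in /live_nodes
after = mark_dead_replicas_down(cs, live_nodes={"169.254.113.194:8983_solr"})
```

In a real implementation the leader would trigger this on a ZooKeeper children-changed event for /live_nodes and then publish the updated state, which is the part the issue leaves open.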
[jira] [Commented] (SOLR-6921) Stats.field fails on multivalue fields which are doc valued
[ https://issues.apache.org/jira/browse/SOLR-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267589#comment-14267589 ] Ahmet Arslan commented on SOLR-6921: I think this is a duplicate of SOLR-6024 and already fixed. Stats.field fails on multivalue fields which are doc valued --- Key: SOLR-6921 URL: https://issues.apache.org/jira/browse/SOLR-6921 Project: Solr Issue Type: Bug Affects Versions: 4.8 Reporter: Elran Dvir I am using stats.field on a field with the following definition in schema: <field name="myField" type="string" indexed="true" stored="false" multiValued="true" docValues="true"/> I get the following exception: org.apache.solr.common.SolrException: Type mismatch: myField was indexed as SORTED_SET at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193) at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:703) at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:513) at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:64) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1953) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:804)
[jira] [Updated] (SOLR-6922) Spatial example in Solr
[ https://issues.apache.org/jira/browse/SOLR-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-6922: --- Component/s: spatial Cool; but note geonames.org is a lot of data, and is also just point data so won't exercise the more advanced spatial stuff. Spatial example in Solr --- Key: SOLR-6922 URL: https://issues.apache.org/jira/browse/SOLR-6922 Project: Solr Issue Type: Bug Components: spatial Reporter: Ishan Chattopadhyaya Fix For: 5.0 I was going through examples, and realized that spatial capabilities aren't exposed via examples very well. Currently, the examples (techproducts or films) don't even have an RPT field in the schema. There's a nice geonames dataset (geonames.org) that could be ingested into a collection and powered off a spatial field as an example (similar to films). There could be other datasets as well. The benefit would be ease of use to users who could try out the spatial queries right off the back of an install without having to ingest their own data in order to try out all that lucene/solr spatial provides.
[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267673#comment-14267673 ] ASF subversion and git services commented on SOLR-6365: --- Commit 1650065 from [~noble.paul] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650065 ] SOLR-6365 last commit caused regression specify appends, defaults, invariants outside of the component --- Key: SOLR-6365 URL: https://issues.apache.org/jira/browse/SOLR-6365 Project: Solr Issue Type: Improvement Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, Trunk Attachments: SOLR-6365-crappy-test.patch, SOLR-6365.patch, SOLR-6365.patch, SOLR-6365.patch The components are configured in solrconfig.xml mostly for specifying these extra parameters. If we separate these out, we can avoid specifying the components altogether and make solrconfig much simpler. Eventually we want users to see all functions as paths instead of components and control these params from outside, through an API, and persisted in ZK. Objectives:
* define standard components implicitly and let users override some params only
* reuse standard params across components
* define multiple param sets and mix and match these params at request time
example
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<initParams name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</initParams>
{code}
other examples
{code:xml}
<initParams name="a" path="/dump3,/root/*,/root1/**">
  <lst name="defaults">
    <str name="a">A</str>
  </lst>
  <lst name="invariants">
    <str name="b">B</str>
  </lst>
  <lst name="appends">
    <str name="c">C</str>
  </lst>
</initParams>
<requestHandler name="/dump3" class="DumpRequestHandler"/>
<requestHandler name="/dump4" class="DumpRequestHandler"/>
<requestHandler name="/root/dump5" class="DumpRequestHandler"/>
<requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
<requestHandler name="/dump1" class="DumpRequestHandler" initParams="a"/>
<requestHandler name="/dump2" class="DumpRequestHandler" initParams="a">
  <lst name="defaults">
    <str name="a">A1</str>
  </lst>
  <lst name="invariants">
    <str name="b">B1</str>
  </lst>
  <lst name="appends">
    <str name="c">C1</str>
  </lst>
</requestHandler>
{code}
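The path attribute in the example mixes exact paths with /* (exactly one extra level) and /** (any depth). A hypothetical sketch of that matching rule — a model for reading the example, not the actual Solr implementation — using the handler names above:

```python
def path_matches(pattern: str, path: str) -> bool:
    """Match one initParams-style path spec: an exact string, '/*' for
    exactly one extra path segment, or '/**' for any depth below the prefix."""
    if pattern.endswith("/**"):
        return path.startswith(pattern[:-2])        # '/root1/**' keeps prefix '/root1/'
    if pattern.endswith("/*"):
        prefix = pattern[:-1]                       # '/root/*' keeps prefix '/root/'
        rest = path[len(prefix):]
        # exactly one non-empty segment may follow the prefix
        return path.startswith(prefix) and rest != "" and "/" not in rest
    return pattern == path


def spec_matches(spec: str, path: str) -> bool:
    """A comma-separated spec matches if any one of its patterns does."""
    return any(path_matches(p, path) for p in spec.split(","))


spec = "/dump3,/root/*,/root1/**"
print(spec_matches(spec, "/root1/anotherlevel/dump6"))  # matched by /root1/**
print(spec_matches(spec, "/dump4"))                     # matched by nothing
```

Under this model, /dump1 and /dump2 pick up the "a" params via initParams="a" explicitly, while /dump3, /root/dump5, and /root1/anotherlevel/dump6 are picked up by the path spec, and /dump4 gets nothing — which is the distinction the example is set up to demonstrate.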
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 11695 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11695/ Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseG1GC (asserts: false) 3 tests failed. FAILED: org.apache.solr.client.solrj.SolrSchemalessExampleTest.testCommitWithinOnAdd Error Message: Error from server at http://127.0.0.1:40644/solr/collection1: undefined field: price Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:40644/solr/collection1: undefined field: price at __randomizedtesting.SeedInfo.seed([936CFDDE7DE6601C:8FE3D7B20C448017]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) at org.apache.solr.client.solrj.SolrExampleTestsBase.testCommitWithinOnAdd(SolrExampleTestsBase.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Closed] (SOLR-6921) Stats.field fails on multivalue fields which are doc valued
[ https://issues.apache.org/jira/browse/SOLR-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmet Arslan closed SOLR-6921. -- Resolution: Duplicate Closing issue as duplicate of SOLR-6024. Elran, we can re-open if this is something different. Stats.field fails on multivalue fields which are doc valued --- Key: SOLR-6921 URL: https://issues.apache.org/jira/browse/SOLR-6921 Project: Solr Issue Type: Bug Affects Versions: 4.8 Reporter: Elran Dvir I am using stats.field on a field with the following definition in schema: <field name="myField" type="string" indexed="true" stored="false" multiValued="true" docValues="true"/> I get the following exception: org.apache.solr.common.SolrException: Type mismatch: myField was indexed as SORTED_SET at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193) at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:703) at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:513) at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:64) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1953) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) 
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:804) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
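For context on the failure mode above: the exception says the field was indexed as SORTED_SET, i.e. it already has multi-valued doc values, while the stats code path tries to un-invert it instead of reading those values directly. A rough, hypothetical Python model of computing field stats straight from per-document value lists (none of these names are Solr APIs; this only illustrates the kind of aggregation involved):

```python
# Toy model: stats over a multi-valued, docValues-style string field.
# Each document contributes a (possibly empty) list of values, much like
# iterating SORTED_SET doc values per document.

def field_stats(docs):
    """docs: list of lists of strings, one inner list per document.
    Returns simple stats resembling what stats.field reports."""
    values = [v for doc in docs for v in doc]
    return {
        "count": len(values),                      # total values across docs
        "missing": sum(1 for d in docs if not d),  # docs with no value
        "distinct": len(set(values)),
        "min": min(values) if values else None,
        "max": max(values) if values else None,
    }
```

With `field_stats([["a", "b"], ["b"], []])`, three values are counted, one document is missing, and two values are distinct.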
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11863 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11863/ Java: 64bit/jdk1.9.0-ea-b34 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false) 1 tests failed. FAILED: org.apache.solr.cloud.ShardSplitTest.testDistribSearch Error Message: Wrong doc count on shard1_0. See SOLR-5309 expected:273 but was:272 Stack Trace: java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 expected:273 but was:272 at __randomizedtesting.SeedInfo.seed([A9641232E57EBC8E:28829C2A9221DCB2]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:471) at org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:241) at org.apache.solr.cloud.ShardSplitTest.doTest(ShardSplitTest.java:103) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Comment Edited] (SOLR-6917) TestDynamicLoading fails frequently.
[ https://issues.apache.org/jira/browse/SOLR-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267617#comment-14267617 ] Noble Paul edited comment on SOLR-6917 at 1/7/15 1:31 PM: -- Yes, SOLR-6801 is reopened, and this is failing because the jar upload failed. was (Author: noble.paul): Yes, SOLR-6801 is reopened and that is the reason why this is failing because the jar upload failed. TestDynamicLoading fails frequently. Key: SOLR-6917 URL: https://issues.apache.org/jira/browse/SOLR-6917 Project: Solr Issue Type: Test Reporter: Mark Miller Priority: Minor most recent failure:
{noformat}
[junit4] FAILURE 39.7s J5 | TestDynamicLoading.testDistribSearch
[junit4] Throwable #1: java.lang.AssertionError: New version of class is not loaded {
[junit4]   "responseHeader":{
[junit4]     "status":404,
[junit4]     "QTime":2},
[junit4]   "error":{
[junit4]     "msg":"no such blob or version available: test/2",
[junit4]     "code":404}}
[junit4] at __randomizedtesting.SeedInfo.seed([B49634A982DC7AFE:3570BAB1F5831AC2]:0)
[junit4] at org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:154)
[junit4] at org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64)
[junit4] at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
[junit4] at java.lang.Thread.run(Thread.java:745)
{noformat}
[jira] [Commented] (LUCENE-6149) Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup
[ https://issues.apache.org/jira/browse/LUCENE-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267627#comment-14267627 ] Boon Low commented on LUCENE-6149: -- There's a typo in the test class: testConstructorDefatuls Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup - Key: LUCENE-6149 URL: https://issues.apache.org/jira/browse/LUCENE-6149 Project: Lucene - Core Issue Type: Improvement Components: modules/other Affects Versions: 4.9, 4.10.1, 4.10.2, 4.10.3 Reporter: Boon Low Assignee: Tomás Fernández Löbbe Priority: Minor Labels: suggester Fix For: 5.0, Trunk Attachments: LUCENE-6149.patch, LUCENE-6149.patch, LUCENE-6149.patch, LUCENE-6149.patch Highlighting and allTermsRequired are hardwired in _AnalyzingInfixSuggester_ for non-contextual lookup (via _Lookup_) see *true*, *true* below:
{code:title=AnalyzingInfixSuggester.java (extends Lookup.java)}
public List<LookupResult> lookup(CharSequence key, Set<BytesRef> contexts, boolean onlyMorePopular, int num) throws IOException {
  return lookup(key, contexts, num, true, true);
}

/** Lookup, without any context. */
public List<LookupResult> lookup(CharSequence key, int num, boolean allTermsRequired, boolean doHighlight) throws IOException {
  return lookup(key, null, num, allTermsRequired, doHighlight);
}
{code}
{code:title=Lookup.java}
public List<LookupResult> lookup(CharSequence key, boolean onlyMorePopular, int num) throws IOException {
  return lookup(key, null, onlyMorePopular, num);
}
{code}
The above means the majority of the current infix suggester lookups always return highlighted results with allTermsRequired in effect. There is no way to change this despite the options and improvement of LUCENE-6050, made to incorporate Boolean lookup clauses (MUST/SHOULD). This shortcoming has also been reported in SOLR-6648. 
The suggesters (AnalyzingInfixSuggester, BlendedInfixSuggester) should provide a proper mechanism to set defaults for highlighting and allTermsRequired, e.g. in constructors (and in Solr factories, thus configurable via solrconfig.xml).
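The proposed fix, constructor-level defaults that per-call arguments can still override, can be sketched in a few lines. This is an illustrative Python model of the API shape, not Lucene code; the class and parameter names are made up for the sketch:

```python
# Model of the proposed change: allTermsRequired / doHighlight become
# per-instance defaults (settable from a factory / solrconfig.xml)
# instead of hardwired `true, true` literals in the short lookup().

class InfixSuggesterModel:
    def __init__(self, all_terms_required=True, do_highlight=True):
        # defaults chosen at construction time
        self.all_terms_required = all_terms_required
        self.do_highlight = do_highlight

    def lookup(self, key, num, all_terms_required=None, do_highlight=None):
        # explicit per-call arguments win; None falls back to the defaults
        atr = self.all_terms_required if all_terms_required is None else all_terms_required
        hl = self.do_highlight if do_highlight is None else do_highlight
        return (key, num, atr, hl)  # stand-in for the real lookup
```

A suggester built with `all_terms_required=False` then answers every plain `lookup(key, num)` with that default, which is exactly the configurability the issue asks for.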
[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_40-ea-b09) - Build # 4436 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4436/ Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true) 1 tests failed. FAILED: org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch Error Message: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:60792/wc_avm/repfacttest_c8n_1x3_shard1_replica1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:60792/wc_avm/repfacttest_c8n_1x3_shard1_replica1 at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736) at org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277) at org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor88.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at
Re: Anybody having troubles building trunk?
I had the same issue earlier today, and identified the problem here, along with a workaround: https://issues.apache.org/jira/browse/SOLR-4839?focusedCommentId=14268311&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14268311 On Wed, Jan 7, 2015 at 10:36 PM, Alexandre Rafalovitch arafa...@gmail.com wrote: I am having dependency issues even if I blow away everything, check it out again and do 'ant resolve':
resolve:
[ivy:retrieve]
[ivy:retrieve] :: problems summary ::
[ivy:retrieve] WARNINGS
[ivy:retrieve] ::
[ivy:retrieve] :: UNRESOLVED DEPENDENCIES ::
[ivy:retrieve] ::
[ivy:retrieve] :: org.restlet.jee#org.restlet.ext.servlet;2.3.0: configuration not found in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 'master'. It was required from org.apache.solr#core;working@Alexs-MacBook-Pro.local compile
[ivy:retrieve] ::
[ivy:retrieve]
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
BUILD FAILED
Regards, Alex. Sign up for my Solr resources newsletter at http://www.solr-start.com/
[jira] [Commented] (SOLR-5287) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen
[ https://issues.apache.org/jira/browse/SOLR-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268902#comment-14268902 ] ASF subversion and git services commented on SOLR-5287: --- Commit 1650213 from [~erickoerickson] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650213 ] SOLR-6925: Back out changes having to do with SOLR-5287 (editing configs from admin UI) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen -- Key: SOLR-5287 URL: https://issues.apache.org/jira/browse/SOLR-5287 Project: Solr Issue Type: Improvement Components: Schema and Analysis, web gui Affects Versions: 4.5, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Fix For: 5.0, Trunk Attachments: SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch A user asking a question on the Solr list got me to thinking about editing the main config files from the Solr admin screen. I chatted briefly with [~steffkes] about the mechanics of this on the browser side, he doesn't see a problem on that end. His comment is that there's no end point that'll write the file back. Am I missing something here or is this actually not a hard problem? I see a couple of issues off the bat, neither of which seem troublesome. (1) file permissions. I'd imagine lots of installations will get file permission exceptions if Solr tries to write the file out. Well, do a chmod/chown. (2) screwing up the system, maliciously or not. I don't think this is an issue, this would be part of the admin handler after all. Does anyone have objections to the idea? And how does this fit into the work that [~sar...@syr.edu] has been doing? I can imagine this extending to SolrCloud with a "push this to ZK" option or something like that, perhaps not in V1 unless it's easy. Of course any pointers gratefully received. Especially ones that start with "Don't waste your effort, it'll never work (or be accepted)"... 
Because what scares me is this seems like such an easy thing to do that would be a significant ease-of-use improvement, so there _has_ to be something I'm missing. So if we go forward with this we'll make this the umbrella JIRA; the two immediate sub-JIRAs that spring to mind will be the UI work and the endpoints for the UI work to use. I think there are only two end-points here: (1) list all the files in the conf (or arbitrary from solr_home/collection) directory, and (2) write this text to this file. Possibly later we could add "clone the configs from coreX to coreY". BTW, I've assigned this to myself so I don't lose it, but if anyone wants to take it over it won't hurt my feelings a bit
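The two end-points described above are small enough to sketch. This is a hypothetical Python illustration of the server-side logic only, not Solr code; the path check hints at why shipping such an endpoint enabled by default turned out to be the security problem that SOLR-6925 later backed out:

```python
import os

# Hypothetical sketch of the two endpoints: (1) list files in a conf/
# directory, (2) write text to a named file inside it.

def list_conf_files(conf_dir):
    """Endpoint (1): names of all files under conf_dir."""
    return sorted(os.listdir(conf_dir))

def write_conf_file(conf_dir, name, text):
    """Endpoint (2): write `text` to conf_dir/name, refusing anything
    that escapes conf_dir (e.g. name = "../solr.xml")."""
    path = os.path.abspath(os.path.join(conf_dir, name))
    if not path.startswith(os.path.abspath(conf_dir) + os.sep):
        raise ValueError("path escapes conf dir: %s" % name)
    with open(path, "w") as f:
        f.write(text)
    return path
```

Even with the traversal guard, any authenticated write of solrconfig.xml is effectively remote code execution (it can load arbitrary plugins), which is the core of the objection that led to the back-out.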
[jira] [Commented] (SOLR-6925) Back out changes having to do with SOLR-5287 (editing configs from admin UI)
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268901#comment-14268901 ] ASF subversion and git services commented on SOLR-6925: --- Commit 1650213 from [~erickoerickson] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650213 ] SOLR-6925: Back out changes having to do with SOLR-5287 (editing configs from admin UI) Back out changes having to do with SOLR-5287 (editing configs from admin UI) Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole, and I'm really glad that was pointed out before we released it in 4x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously, and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5x. Marking it as a blocker so we don't release 5x with this in it or we'll have back-compat issues. Should have a fix in very quickly.
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11870 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11870/
Java: 64bit/jdk1.9.0-ea-b34 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false)
All tests passed
Build Log:
[...truncated 19506 lines...]
check-licenses:
[echo] License check under: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene
[licenses] MISSING sha1 checksum file for: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/replicator/lib/javax.servlet-api-3.1.0.jar
[licenses] EXPECTED sha1 checksum file : /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/licenses/javax.servlet-api-3.1.0.jar.sha1
[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:84: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62: License check failed. Check the logs. If you recently modified ivy-versions.properties or any module's ivy.xml, make sure you run "ant clean-jars jar-checksums" before running precommit.
Total time: 68 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b34 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false)
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
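The failure above is the license checker complaining that a jar under a module's lib/ has no matching .sha1 file under lucene/licenses/. The mechanics are simple: "ant jar-checksums" writes, for each bundled jar, a sidecar file named `<jar>.sha1` containing the jar's SHA-1 digest. A hedged Python sketch of that convention (illustrative; the real logic lives in Lucene's custom ant tasks):

```python
import hashlib
import os

def sha1_of(path):
    """SHA-1 hex digest of a file, read in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def checksum_file_name(jar_path):
    # e.g. javax.servlet-api-3.1.0.jar -> javax.servlet-api-3.1.0.jar.sha1
    return os.path.basename(jar_path) + ".sha1"
```

The check fails when `checksum_file_name(jar)` is absent from the licenses directory or its stored digest no longer matches `sha1_of(jar)`, which is why the log suggests rerunning "ant clean-jars jar-checksums" after any ivy change.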
[jira] [Commented] (SOLR-6926) ant example makes no sense anymore - should be ant server (or refactored into some other compilation related target)
[ https://issues.apache.org/jira/browse/SOLR-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268897#comment-14268897 ] Ramkumar Aiyengar commented on SOLR-6926: - I had a patch put up on the original "stop shipping war" issue which never got in, could be useful: https://github.com/apache/lucene-solr/pull/112 ant example makes no sense anymore - should be ant server (or refactored into some other compilation related target) Key: SOLR-6926 URL: https://issues.apache.org/jira/browse/SOLR-6926 Project: Solr Issue Type: Improvement Reporter: Hoss Man Assignee: Timothy Potter (filing as followup to a chat i had with tim offline the other day) the ant target "ant example" doesn't really make any sense anymore ... that name was created way, way back when "ant compile" built up the dist/solr.war file that people were expected to install and "ant example" took care of copying that war file into the example/jetty directory these days, it should probably be named something like "ant server" or refactored inside an existing task like "ant compile"
[jira] [Commented] (SOLR-6787) API to manage blobs in Solr
[ https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268904#comment-14268904 ] ASF subversion and git services commented on SOLR-6787: --- Commit 1650214 from [~noble.paul] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650214 ] SOLR-6787: commit right away instead of waiting API to manage blobs in Solr Key: SOLR-6787 URL: https://issues.apache.org/jira/browse/SOLR-6787 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, Trunk Attachments: SOLR-6787.patch, SOLR-6787.patch A special collection called .system needs to be created by the user to store/manage blobs. The schema/solrconfig of that collection need to be automatically supplied by the system so that there are no errors. APIs need to be created to manage the content of that collection.
{code}
# create your .system collection first
http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
# The config for this collection is automatically created. numShards for this collection is hardcoded to 1

# create a new jar or add a new version of a jar
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point would give a list of jars and other details
curl http://localhost:8983/solr/.system/blob

# GET on the end point with jar name would give details of various versions of the available jars
curl http://localhost:8983/solr/.system/blob/mycomponent

# GET on the end point with jar name and version with a wt=filestream to get the actual file
curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar

# GET on the end point with jar name and wt=filestream to get the latest version of the file
curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
{code}
Please note that the jars are never deleted. A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.
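The append-only semantics described above (every POST under a name creates a new version; nothing is overwritten; GET without a version serves the latest) can be modeled in a few lines. This is a toy Python illustration of the behavior, not the .system implementation:

```python
# Toy model of the .system blob store semantics: versions only ever
# grow, and the latest one is served by default.

class BlobStore:
    def __init__(self):
        self._blobs = {}  # name -> list of payloads (index 0 = version 1)

    def post(self, name, payload):
        """Append a new version; returns the version number (1-based)."""
        versions = self._blobs.setdefault(name, [])
        versions.append(payload)
        return len(versions)

    def get(self, name, version=None):
        """Fetch a specific 1-based version, or the latest if None."""
        versions = self._blobs[name]
        if version is None:
            return versions[-1]
        return versions[version - 1]
```

Posting mycomponent.jar twice yields versions 1 and 2, and a plain GET returns version 2, mirroring the curl examples above.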
[jira] [Updated] (LUCENE-6167) Speed up SortingMergePolicy by string
[ https://issues.apache.org/jira/browse/LUCENE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-6167: Attachment: LUCENE-6167.patch Here is one simple solution that is ~2x faster building docmaps for geonames by string name. There are other ways to skin the cat though. Speed up SortingMergePolicy by string - Key: LUCENE-6167 URL: https://issues.apache.org/jira/browse/LUCENE-6167 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Attachments: LUCENE-6167.patch Building the sorted docmaps can take a nontrivial amount of time; for String we currently don't do a very good job.
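For readers unfamiliar with the term: a sort-time docmap is just the oldDocID-to-newDocID mapping implied by sorting documents on a key. The patch on this issue speeds up how that mapping is built for string keys; the sketch below (plain Python, not Lucene code) only shows the data structure, not the optimization:

```python
# Build a docmap: given each document's string sort key, return a list
# where docmap[old_id] == new_id after sorting by key.

def build_docmap(keys):
    # sort old doc IDs by their key; `order` is the new -> old mapping
    order = sorted(range(len(keys)), key=lambda d: keys[d])
    # invert it to get old -> new
    docmap = [0] * len(keys)
    for new_id, old_id in enumerate(order):
        docmap[old_id] = new_id
    return docmap
```

For three docs keyed ["b", "a", "c"], doc 1 ("a") sorts first, so the docmap is [1, 0, 2]. The cost concern in the issue is the comparison-heavy sort over string keys; approaches like sorting ordinals instead of raw strings reduce that work.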
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11869 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11869/
Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)
All tests passed
Build Log:
[...truncated 19492 lines...]
check-licenses:
[echo] License check under: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene
[licenses] MISSING sha1 checksum file for: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/replicator/lib/javax.servlet-api-3.1.0.jar
[licenses] EXPECTED sha1 checksum file : /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/licenses/javax.servlet-api-3.1.0.jar.sha1
[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:84: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62: License check failed. Check the logs. If you recently modified ivy-versions.properties or any module's ivy.xml, make sure you run "ant clean-jars jar-checksums" before running precommit.
Total time: 66 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
[jira] [Created] (LUCENE-6167) Speed up SortingMergePolicy by string
Robert Muir created LUCENE-6167: --- Summary: Speed up SortingMergePolicy by string Key: LUCENE-6167 URL: https://issues.apache.org/jira/browse/LUCENE-6167 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Building the sorted docmaps can take a nontrivial amount of time; for String we currently don't do a very good job.
[jira] [Updated] (SOLR-6925) Back out changes having to do with SOLR-5287 (editing configs from admin UI)
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-6925: - Summary: Back out changes having to do with SOLR-5287 (editing configs from admin UI) (was: Back out all changes having to do with SOLR-5287) Back out changes having to do with SOLR-5287 (editing configs from admin UI) Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole, and I'm really glad that was pointed out before we released it in 4x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously, and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5x. Marking it as a blocker so we don't release 5x with this in it or we'll have back-compat issues. Should have a fix in very quickly.
Re: Anybody having troubles building trunk?
Similar but different? I got rid of the cloudera references altogether, did ant clean, and it is still the same error. The build line that failed is: <ivy:retrieve conf="compile,compile.hadoop" type="jar,bundle" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/> in trunk/solr/core/build.xml:65 Regards, Alex. Sign up for my Solr resources newsletter at http://www.solr-start.com/ On 8 January 2015 at 00:12, Steve Rowe sar...@gmail.com wrote: I had the same issue earlier today, and identified the problem here, along with a workaround: https://issues.apache.org/jira/browse/SOLR-4839?focusedCommentId=14268311&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14268311 On Wed, Jan 7, 2015 at 10:36 PM, Alexandre Rafalovitch arafa...@gmail.com wrote: I am having dependency issues even if I blow away everything, check it out again and do 'ant resolve':
resolve:
[ivy:retrieve]
[ivy:retrieve] :: problems summary ::
[ivy:retrieve] WARNINGS
[ivy:retrieve] ::
[ivy:retrieve] :: UNRESOLVED DEPENDENCIES ::
[ivy:retrieve] ::
[ivy:retrieve] :: org.restlet.jee#org.restlet.ext.servlet;2.3.0: configuration not found in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 'master'. It was required from org.apache.solr#core;working@Alexs-MacBook-Pro.local compile
[ivy:retrieve] ::
[ivy:retrieve]
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
BUILD FAILED
Regards, Alex. Sign up for my Solr resources newsletter at http://www.solr-start.com/
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 11701 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11701/ Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseSerialGC (asserts: true) 1 tests failed. FAILED: org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability Error Message: IOException occured when talking to server at: https://127.0.0.1:59697/solr Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:59697/solr at __randomizedtesting.SeedInfo.seed([83CB6BBD129B7CCB:4203B6FBB3FDAD62]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:573) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:169) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:147) at org.apache.solr.client.solrj.TestLBHttpSolrClient.addDocs(TestLBHttpSolrClient.java:112) at org.apache.solr.client.solrj.TestLBHttpSolrClient.setUp(TestLBHttpSolrClient.java:95) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Closed] (SOLR-6925) Back out changes having to do with SOLR-5287 (editing configs from admin UI)
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson closed SOLR-6925. Resolution: Fixed Fix Version/s: Trunk 5.0 Back out changes having to do with SOLR-5287 (editing configs from admin UI) Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Fix For: 5.0, Trunk Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole, and I'm really glad that was pointed out before we released it in 4.x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously, and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5.x. Marking it as a blocker so we don't release 5.x with this in it or we'll have back-compat issues. Should have a fix in very quickly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6581) Prepare CollapsingQParserPlugin and ExpandComponent for 5.0
[ https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-6581: - Attachment: SOLR-6581.patch Patch with performance improvements for the ExpandComponent. Tests still need to be re-worked to support the performance enhancements. Prepare CollapsingQParserPlugin and ExpandComponent for 5.0 --- Key: SOLR-6581 URL: https://issues.apache.org/jira/browse/SOLR-6581 Project: Solr Issue Type: Bug Reporter: Joel Bernstein Assignee: Joel Bernstein Priority: Minor Fix For: 5.0 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, renames.diff *Background* The 4.x implementation of the CollapsingQParserPlugin and the ExpandComponent is optimized to work with a top-level FieldCache. Top-level FieldCaches have a very fast docID-to-top-level-ordinal lookup. Fast access to the top-level ordinals allows for very high performance field collapsing on high cardinality fields. LUCENE-5666 unified the DocValues and FieldCache APIs so that the top-level FieldCache is no longer in regular use. Instead, all top-level caches are accessed through MultiDocValues. There are some major advantages to using MultiDocValues rather than a top-level FieldCache. But there is one disadvantage: the lookup from docId to top-level ordinals is slower using MultiDocValues. My testing has shown that *after optimizing* the CollapsingQParserPlugin code to use MultiDocValues, the performance drop is around 100%. For some use cases this performance drop is a blocker. *What About Faceting?* String faceting also relies on the top-level ordinals. Is faceting performance affected also? My testing has shown that faceting performance is affected much less than collapsing. One possible reason for this may be that field collapsing is memory bound and faceting is not. 
So the additional memory accesses needed for MultiDocValues affect field collapsing much more than faceting. *Proposed Solution* The proposed solution is to have the default Collapse and Expand algorithm use MultiDocValues, but to provide an option to use a top-level FieldCache if the performance of MultiDocValues is a blocker. The proposed mechanism for switching to the FieldCache would be a new hint parameter. If the hint parameter is set to FAST_QUERY then the top-level FieldCache would be used for both Collapse and Expand. Example syntax: {code} fq={!collapse field=x hint=FAST_QUERY} {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
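[Editor's note] As a usage aside on the proposed syntax: a collapse filter query is a local-params expression, so its braces, bang, and spaces must be URL-encoded when sent over HTTP with curl. A minimal sketch (the collection name collection1 and field group_s are hypothetical, not from the patch):

```shell
# Build a URL-encoded collapse filter query using the proposed FAST_QUERY hint.
# collection1 and group_s are hypothetical names for illustration only.
FQ='{!collapse field=group_s hint=FAST_QUERY}'
ENC=$(printf '%s' "$FQ" | sed -e 's/{/%7B/g' -e 's/}/%7D/g' -e 's/!/%21/g' -e 's/ /%20/g')
echo "http://localhost:8983/solr/collection1/select?q=*:*&fq=$ENC"
```

Dropping the hint parameter from FQ would fall back to the default MultiDocValues-based algorithm described above.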
Anybody having troubles building trunk?
I am having dependencies issues even if I blow away everything, check it out again and do 'ant resolve': resolve: [ivy:retrieve] [ivy:retrieve] :: problems summary :: [ivy:retrieve] WARNINGS [ivy:retrieve] :: [ivy:retrieve] :: UNRESOLVED DEPENDENCIES :: [ivy:retrieve] :: [ivy:retrieve] :: org.restlet.jee#org.restlet.ext.servlet;2.3.0: configuration not found in org.restlet.jee#org.restlet.ext.servlet;2.3.0: 'master'. It was required from org.apache.solr#core;working@Alexs-MacBook-Pro.local compile [ivy:retrieve] :: [ivy:retrieve] [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS BUILD FAILED Regards, Alex. Sign up for my Solr resources newsletter at http://www.solr-start.com/ - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5287) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen
[ https://issues.apache.org/jira/browse/SOLR-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268832#comment-14268832 ] ASF subversion and git services commented on SOLR-5287: --- Commit 1650208 from [~erickoerickson] in branch 'dev/trunk' [ https://svn.apache.org/r1650208 ] SOLR-6925: Back out changes having to do with SOLR-5287 (editing configs from admin UI) Allow at least solrconfig.xml and schema.xml to be edited via the admin screen -- Key: SOLR-5287 URL: https://issues.apache.org/jira/browse/SOLR-5287 Project: Solr Issue Type: Improvement Components: Schema and Analysis, web gui Affects Versions: 4.5, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Fix For: 5.0, Trunk Attachments: SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch, SOLR-5287.patch A user asking a question on the Solr list got me to thinking about editing the main config files from the Solr admin screen. I chatted briefly with [~steffkes] about the mechanics of this on the browser side, he doesn't see a problem on that end. His comment is there's no end point that'll write the file back. Am I missing something here or is this actually not a hard problem? I see a couple of issues off the bat, neither of which seem troublesome. 1 file permissions. I'd imagine lots of installations will get file permission exceptions if Solr tries to write the file out. Well, do a chmod/chown. 2 screwing up the system maliciously or not. I don't think this is an issue, this would be part of the admin handler after all. Does anyone have objections to the idea? And how does this fit into the work that [~sar...@syr.edu] has been doing? I can imagine this extending to SolrCloud with a push this to ZK option or something like that, perhaps not in V1 unless it's easy. Of course any pointers gratefully received. Especially ones that start with Don't waste your effort, it'll never work (or be accepted)... 
Because what scares me is this seems like such an easy thing to do that would be a significant ease-of-use improvement, so there _has_ to be something I'm missing. So if we go forward with this we'll make this the umbrella JIRA, the two immediate sub-JIRAs that spring to mind will be the UI work and the endpoints for the UI work to use. I think there are only two end-points here 1 list all the files in the conf (or arbitrary from solr_home/collection) directory. 2 write this text to this file Possibly later we could add clone the configs from coreX to coreY. BTW, I've assigned this to myself so I don't lose it, but if anyone wants to take it over it won't hurt my feelings a bit -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6925) Back out changes having to do with SOLR-5287 (editing configs from admin UI)
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268831#comment-14268831 ] ASF subversion and git services commented on SOLR-6925: --- Commit 1650208 from [~erickoerickson] in branch 'dev/trunk' [ https://svn.apache.org/r1650208 ] SOLR-6925: Back out changes having to do with SOLR-5287 (editing configs from admin UI) Back out changes having to do with SOLR-5287 (editing configs from admin UI) Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole, and I'm really glad that was pointed out before we released it in 4.x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously, and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5.x. Marking it as a blocker so we don't release 5.x with this in it or we'll have back-compat issues. Should have a fix in very quickly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2043 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2043/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC (asserts: true) 1 tests failed. FAILED: org.apache.solr.handler.TestBlobHandler.testDistribSearch Error Message: {responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}} Stack Trace: java.lang.AssertionError: {responseHeader={status=0, QTime=1}, response={numFound=0, start=0, docs=[]}} at __randomizedtesting.SeedInfo.seed([EE5A15A56B1B9BD0:6FBC9BBD1C44FBEC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:95) at org.apache.solr.handler.TestBlobHandler.doTest(TestBlobHandler.java:195) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_67) - Build # 11700 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11700/ Java: 32bit/jdk1.7.0_67 -server -XX:+UseConcMarkSweepGC (asserts: true) 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch Error Message: commitWithin did not work on node: http://127.0.0.1:57237/kwy/s/collection1 expected:68 but was:67 Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:57237/kwy/s/collection1 expected:68 but was:67 at __randomizedtesting.SeedInfo.seed([2F70A50135C0EAEF:AE962B19429F8AD3]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2445 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2445/ 4 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.testDistribSearch Error Message: org.apache.http.NoHttpResponseException: The target server failed to respond Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.http.NoHttpResponseException: The target server failed to respond at __randomizedtesting.SeedInfo.seed([AA19638AA1391E27:2BFFED92D6667E1B]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736) at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480) at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201) at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Commented] (SOLR-6787) API to manage blobs in Solr
[ https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268891#comment-14268891 ] ASF subversion and git services commented on SOLR-6787: --- Commit 1650212 from [~noble.paul] in branch 'dev/trunk' [ https://svn.apache.org/r1650212 ] SOLR-6787 commit right away instead of waiting API to manage blobs in Solr Key: SOLR-6787 URL: https://issues.apache.org/jira/browse/SOLR-6787 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, Trunk Attachments: SOLR-6787.patch, SOLR-6787.patch A special collection called .system needs to be created by the user to store/manage blobs. The schema/solrconfig of that collection need to be automatically supplied by the system so that there are no errors. APIs need to be created to manage the content of that collection. {code} # create your .system collection first http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2 # The config for this collection is automatically created. numShards for this collection is hardcoded to 1 # create a new jar or add a new version of a jar curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent # GET on the end point would give a list of jars and other details curl http://localhost:8983/solr/.system/blob # GET on the end point with jar name would give details of various versions of the available jars curl http://localhost:8983/solr/.system/blob/mycomponent # GET on the end point with jar name and version with wt=filestream to get the actual file curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar # GET on the end point with jar name and wt=filestream to get the latest version of the file curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar {code} Please note that the jars are never deleted. 
A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
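[Editor's note] Since old blob versions are kept until explicitly removed, cleanup would be an ordinary delete request against the .system collection. The sketch below only prints the command so it can be reviewed before running against a live instance; the blobName/version field names are assumptions for illustration, not verified against the .system schema:

```shell
# Sketch only: print a delete-by-query command for one old blob version.
# blobName/version field names are assumed, not taken from the issue.
SOLR="http://localhost:8983/solr/.system"
DELETE_JSON='{"delete":{"query":"blobName:mycomponent AND version:1"}}'
echo "curl -X POST -H 'Content-Type: application/json' --data-binary '$DELETE_JSON' '$SOLR/update?commit=true'"
```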
IBM Java 7 JVM Support
Hi folks, Sorry for the spam, but I need to start somewhere :) I am currently evaluating technologies like Solr and ElasticSearch (both built on Lucene). I came across a page, http://wiki.apache.org/lucene-java/JavaBugs, which mentions a possible JIT compiler issue specific to the IBM JVM that could lead to index corruption. I am assuming this is an outstanding problem, since logging PMRs with IBM is difficult unless you are a paying customer. I recently browsed the fix list for IBM Java 7 and there are a few JIT compiler issues that have been resolved. Has anyone officially tested Lucene against IBM Java 7 Refresh 8? If not, can someone please verify whether or not the issue still exists? If it does, I may be able to help get the details of the issue into the hands of someone who can fix it. At the very least, I can have a PMR logged and some pressure applied. Thanks in advance! Joseph Fourny Software Developer, IBM Watson Analytics IBM Canada 3755 Riverside Drive ITN: 315-6295 Office: 1-613-356-6295 joseph.fou...@ca.ibm.com
[jira] [Created] (SOLR-6926) ant example makes no sense anymore - should be ant server (or refactored into some other compilation related target)
Hoss Man created SOLR-6926: -- Summary: ant example makes no sense anymore - should be ant server (or refactored into some other compilation related target) Key: SOLR-6926 URL: https://issues.apache.org/jira/browse/SOLR-6926 Project: Solr Issue Type: Improvement Reporter: Hoss Man Assignee: Timothy Potter (filing as a follow-up to a chat i had with tim offline the other day) the ant target ant example doesn't really make any sense anymore ... that name was created way, way, back when ant compile built up the dist/solr.war file that people were expected to install and ant example took care of copying that war file into the example/jetty directory these days, it should probably be named something like ant server or refactored inside an existing task like ant compile
RE: IBM Java 7 JVM Support
Hi Joseph, Thanks for your interest in our testing procedures! We are testing various JVMs with random settings on the so-called “Policeman Jenkins server” (http://jenkins.thetaphi.de/), which runs Linux, Windows, and MacOS in various VirtualBox VMs; IBM J9 v7 is checked on Linux only, for Lucene/Solr 5.x builds. You might also be interested in this talk: http://2013.berlinbuzzwords.de/sessions/testing-lucene-and-solr-various-jvms-bugs-bugs-bugs, video https://www.youtube.com/watch?v=PVRdLyQGUxE We are testing Lucene only with J9 in the stable branch (the coming Lucene 5.0, Java 7 minimum), because Lucene trunk (aka Lucene 6) will require Java 8. Because I have not had much time to install recent releases (I / my company host this server, and of course I am busy), the J9 version tested there is a bit older (the version listed here is used for both 32-bit and 64-bit tests): -print-java-info: [java-info] java version 1.7.0 [java-info] Java(TM) SE Runtime Environment (pxi3270_27sr1-20140411_01 (SR1), IBM Corporation) [java-info] IBM J9 VM (2.7, IBM Corporation) [java-info] Test args: [-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}] As you see, it uses special Test args to make the tests pass at all (it excludes the given method from optimizations, otherwise the index corruption happens for sure - reproducible!). This bug is also listed on the given JavaBugs page. Give me a few more days so I have time to update the JVM list and install newer releases! I am glad to install them… but: The problem with IBM J9 VMs is the hard install procedure, because it is not just a tgz file to unzip. You have to: - Download the installer through an endless login-then-agree-to-license-then-fill-out-multiple-pages-of-personal-details web page. - Run an InstallShield-like installer and say yes to various questions. 
It is also not easily possible to install multiple versions in parallel, which makes testing multiple versions hard; the InstallShield installer is also hard to use from a console window. It can also only be installed in /opt; you cannot just unzip it to your home directory without root access. - Install the “Unrestricted policy files” support patch by unzipping various files and placing them in the install folder; otherwise the Lucene/Solr build does not succeed and various Solr tests don’t pass at all, because it is missing some encryption algorithms which are under export regulations and are needed to set up the Jetty SSL sockets for testing Solr’s SSL support. A default IBM J9 is also not able to connect to the Apache Software Foundation’s web servers during the build process for downloading various files (like the issue list, changes, … from https://issues.apache.org), because the HTTPS encryption used there is too strong. So the build does not even succeed by default, without the “Unrestricted policy files” patch. Unfortunately, on every reinstall of IBM J9 – although to the same installation folder as before – you have to do the patch procedure again. On the IBM web page I can currently only find downloads for “IBM SDK, Java Technology Edition, Version 7 Release 1, Service Refresh 2 Fix pack 0” - is this the one you mean? Do you have Java 8 downloads, too? It would really be good if IBM would provide suitable tar.gz downloads with full encryption support, maybe from EU download servers. This would make updating much easier. Oracle is able to provide those simple TGZ files with full encryption to connect to standard HTTPS web servers, so why not IBM? Is it maybe possible to get them more easily through some other channel? Thanks for your support! I hope we can make the current situation better and work together with IBM, like the closer communication we started with Oracle quality engineers. We would be interested in having better support for J9. 
Maybe IBM could run Lucene/Solr tests, which are “famous” for finding all kinds of JVM bugs because of their special and very intensive testing procedure, during their own QA. Thanks, Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de/ eMail: u...@thetaphi.de From: Joseph Fourny [mailto:joseph.fou...@ca.ibm.com] Sent: Wednesday, January 07, 2015 11:38 PM To: dev@lucene.apache.org Subject: IBM Java 7 JVM Support Hi folks, Sorry for the spam, but I need to start somewhere :) I am currently evaluating technologies like Solr and ElasticSearch (both built on Lucene). I came across a page, http://wiki.apache.org/lucene-java/JavaBugs, which mentions a possible JIT compiler issue specific to the IBM JVM that could lead to index corruption. I am assuming this is an outstanding problem, since logging PMRs with IBM is difficult unless you are a paying customer. I recently browsed the fix list for IBM Java 7 and there are a few JIT compiler issues that have
[jira] [Created] (LUCENE-6166) Deletions alone never trigger merges
Michael McCandless created LUCENE-6166: -- Summary: Deletions alone never trigger merges Key: LUCENE-6166 URL: https://issues.apache.org/jira/browse/LUCENE-6166 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.x, Trunk If an app has an old index and only does deletions against it, we seem to never trigger a merge, so deletions are never reclaimed in this case.
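The bug above only bites deletions-only workloads. As a hypothetical sketch (Lucene 5.x-style API; the index path and field/term values are invented, and this needs the Lucene jars on the classpath), an application hitting this can force deletes to be reclaimed explicitly until natural merging handles it:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DeleteOnlyWorkload {
  public static void main(String[] args) throws Exception {
    // Open an existing ("old") index and do nothing but delete against it.
    Directory dir = FSDirectory.open(Paths.get("/path/to/old/index"));
    try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      // Deletions buffer delete terms but (per this issue) never trigger
      // a natural merge, so the deleted docs are never reclaimed.
      writer.deleteDocuments(new Term("id", "doc-42"));
      writer.commit();
      // Explicit workaround: merge away segments whose deletion ratio
      // exceeds the merge policy's threshold, reclaiming the deletes.
      writer.forceMergeDeletes();
    }
  }
}
```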
[jira] [Updated] (LUCENE-6166) Deletions alone never trigger merges
[ https://issues.apache.org/jira/browse/LUCENE-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6166: --- Attachment: LUCENE-6166.patch Patch + tests, applies to 5.x. Deletions alone never trigger merges Key: LUCENE-6166 URL: https://issues.apache.org/jira/browse/LUCENE-6166 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.x Attachments: LUCENE-6166.patch If an app has an old index and only does deletions against it, we seem to never trigger a merge, so deletions are never reclaimed in this case.
[jira] [Commented] (SOLR-6925) Back out all changes having to do with SOLR-5287
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268607#comment-14268607 ] Anshum Gupta commented on SOLR-6925: That sounds fine to me Erick. And yes, 15th Jan, 2015 is what I meant when I said next Thursday. Back out all changes having to do with SOLR-5287 Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole and I'm really glad that was pointed out before we released it in 4.x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5x. Marking it as a blocker so we don't release 5x with this in it or we'll have back-compat issues. Should have a fix in very quickly.
[jira] [Updated] (LUCENE-6161) Applying deletes is sometimes dog slow
[ https://issues.apache.org/jira/browse/LUCENE-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6161: --- Attachment: LUCENE-6161.patch Another patch, this one using DaciukMihovAutomatonBuilder to create an automaton from the terms to delete, and then using Terms.intersect. This one spends even less time applying deletes (46 sec vs 129 sec on trunk) yet overall indexing time is still a bit slower (272 sec vs 263 on trunk). I also fixed Automaton to implement Accountable ... Applying deletes is sometimes dog slow -- Key: LUCENE-6161 URL: https://issues.apache.org/jira/browse/LUCENE-6161 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Fix For: 5.0, Trunk Attachments: LUCENE-6161.patch, LUCENE-6161.patch I hit this while testing various use cases for LUCENE-6119 (adding auto-throttle to ConcurrentMergeScheduler). When I tested “always call updateDocument” (each add buffers a delete term), with many indexing threads, opening an NRT reader once per second (forcing all deleted terms to be applied), I see that BufferedUpdatesStream.applyDeletes sometimes seems to take a long time, e.g.: {noformat} BD 0 [2015-01-04 09:31:12.597; Lucene Merge Thread #69]: applyDeletes took 339 msec for 10 segments, 117 deleted docs, 607333 visited terms BD 0 [2015-01-04 09:31:18.148; Thread-4]: applyDeletes took 5533 msec for 62 segments, 10989 deleted docs, 8517225 visited terms BD 0 [2015-01-04 09:31:21.463; Lucene Merge Thread #71]: applyDeletes took 1065 msec for 10 segments, 470 deleted docs, 1825649 visited terms BD 0 [2015-01-04 09:31:26.301; Thread-5]: applyDeletes took 4835 msec for 61 segments, 14676 deleted docs, 9649860 visited terms BD 0 [2015-01-04 09:31:35.572; Thread-11]: applyDeletes took 6073 msec for 72 segments, 13835 deleted docs, 11865319 visited terms BD 0 [2015-01-04 09:31:37.604; Lucene Merge Thread #75]: applyDeletes took 251 msec for 10 segments, 58 deleted docs, 240721 visited terms BD 0 
[2015-01-04 09:31:44.641; Thread-11]: applyDeletes took 5956 msec for 64 segments, 15109 deleted docs, 10599034 visited terms BD 0 [2015-01-04 09:31:47.814; Lucene Merge Thread #77]: applyDeletes took 396 msec for 10 segments, 137 deleted docs, 719914 visit {noformat} What this means is even though I want an NRT reader every second, often I don't get one for up to ~7 or more seconds. This is on an SSD, machine has 48 GB RAM, heap size is only 2 GB. 12 indexing threads. As hideously complex as this code is, I think there are some inefficiencies, but fixing them could be hard / make code even hairier ... Also, this code is mega-locked: holds IW's lock, holds BD's lock. It blocks things like merges kicking off or finishing... E.g., we pull the MergedIterator many times on the same set of sub-iterators. Maybe we can create the sorted terms up front and reuse that? Maybe we should go term stride (one term visits all N segments) not segment stride (visit each segment, iterating all deleted terms for it). Just iterating the terms to be deleted takes a sizable part of the time, and we now do that once for every segment in the index. Also, the isUnique bit in LUCENE-6005 should help here, since if we know the field is unique, we can stop seekExact once we found a segment that has the deleted term, we can maybe pass false for removeDuplicates to MergedIterator...
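The "create the sorted terms up front and reuse that" idea amounts to merging several already-sorted streams of delete terms into one deduplicated sorted stream, once, instead of re-merging per segment. This toy, self-contained sketch (not Lucene's actual MergedIterator, and class/method names are invented) shows the priority-queue merge with the removeDuplicates behavior mentioned at the end of the comment:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MergedTerms {
  /** Merge pre-sorted term lists into one sorted list, dropping duplicates. */
  public static List<String> merge(List<List<String>> sortedLists) {
    // Each queue entry is {listIndex, positionInList}, ordered by its current term.
    PriorityQueue<int[]> pq = new PriorityQueue<>(
        Comparator.comparing((int[] e) -> sortedLists.get(e[0]).get(e[1])));
    for (int i = 0; i < sortedLists.size(); i++) {
      if (!sortedLists.get(i).isEmpty()) pq.add(new int[] {i, 0});
    }
    List<String> out = new ArrayList<>();
    while (!pq.isEmpty()) {
      int[] top = pq.poll();
      String term = sortedLists.get(top[0]).get(top[1]);
      // removeDuplicates behavior: emit each term once even if several
      // source lists (e.g. per-thread delete buffers) contain it.
      if (out.isEmpty() || !out.get(out.size() - 1).equals(term)) {
        out.add(term);
      }
      if (top[1] + 1 < sortedLists.get(top[0]).size()) {
        pq.add(new int[] {top[0], top[1] + 1});  // advance within that list
      }
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(merge(Arrays.asList(
        Arrays.asList("apple", "pear"),
        Arrays.asList("apple", "zebra"))));
    // prints [apple, pear, zebra]
  }
}
```

Once the merged stream exists, a "term stride" pass could walk it once, visiting each term across all N segments, rather than re-pulling the merge per segment.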
[jira] [Created] (SOLR-6927) optimize various usages of DocTermOrdsRangeFilter and DocValuesRangeFilter to use FieldValueFilter if appropriate
Hoss Man created SOLR-6927: -- Summary: optimize various usages of DocTermOrdsRangeFilter and DocValuesRangeFilter to use FieldValueFilter if appropriate Key: SOLR-6927 URL: https://issues.apache.org/jira/browse/SOLR-6927 Project: Solr Issue Type: Improvement Reporter: Hoss Man there are a handful of code paths in Solr that use DocTermOrdsRangeFilter and DocValuesRangeFilter to do range queries over fields with DocValues -- and in many cases this is to account for the field:[* TO *] type use case (in some cases it's even done internally as an optimization of the field:* use case). in these DocValues situations, if we know that the upper and lower bounds are both null, we should make this code more optimized to use FieldValueFilter instead
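The proposed optimization boils down to a null-bounds check before building the range filter. A hedged sketch of the shape (Lucene 4.x/5.x-era FieldValueFilter; this is not Solr's actual code path, and the factory method is invented):

```java
import org.apache.lucene.search.FieldValueFilter;
import org.apache.lucene.search.Filter;

public class RangeFilterFactory {
  /** Build a filter for field:[lower TO upper] over a docValues field. */
  static Filter make(String field, String lower, String upper) {
    if (lower == null && upper == null) {
      // field:[* TO *] only asks "does this doc have a value in the field?"
      // FieldValueFilter answers that directly, without walking term ords
      // or comparing values the way the range filters do.
      return new FieldValueFilter(field);
    }
    // Bounded ranges still need DocTermOrdsRangeFilter / DocValuesRangeFilter
    // (elided here).
    throw new UnsupportedOperationException("bounded range elided");
  }
}
```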
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2444 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2444/ 5 tests failed. REGRESSION: org.apache.solr.cloud.RecoveryZkTest.testDistribSearch Error Message: shard1 is not consistent. Got 126 from http://127.0.0.1:53862/collection1lastClient and got 54 from http://127.0.0.1:53871/collection1 Stack Trace: java.lang.AssertionError: shard1 is not consistent. Got 126 from http://127.0.0.1:53862/collection1lastClient and got 54 from http://127.0.0.1:53871/collection1 at __randomizedtesting.SeedInfo.seed([10B9CEA1CB19F593:915F40B9BC4695AF]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.RecoveryZkTest.doTest(RecoveryZkTest.java:122) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268651#comment-14268651 ] Steve Rowe commented on SOLR-4839: -- [~ehatcher] told me offline, and I confirmed, that the directory {{solr/server/solr-webapp/}} was being deleted when Solr is run from a trunk checkout. In {{solr/server/contexts/solr-jetty-context.xml}}, we tell Jetty to use this directory as its {{tempDirectory}}. [~hossman_luc...@fucit.org] told me offline that he suspected that [Jetty 9's new option to persist its temp directory ({{persistTempDirectory}})|http://www.eclipse.org/jetty/documentation/9.2.6.v20141205/ref-temporary-directories.html#d0e3831], which defaults to {{false}}, was causing this problem: when Jetty shuts down gracefully and this option is not set to {{true}}, its temp dir is deleted. I verified that this is the case: after {{ant example}} and {{bin/solr start}}, {{solr/server/solr-webapp/}} holds the exploded war, but after {{bin/solr stop}}, {{solr/server/solr-webapp}} ceases to exist. When I add the following inside the {{Configure}} tag in {{solr-jetty-context.xml}}, the temp dir and the exploded war contained within are left intact after {{bin/solr stop}}: {code:xml} <Set name="persistTempDirectory">true</Set> {code} I think the above addition is the way to go. Alternatively, to preserve the {{solr/server/solr-webapp/}} directory, we could tell Jetty to use a sub-directory, but leave the temp dir persistence option at false - I tested this and it worked - when I changed the {{tempDirectory}} setting to include a sub-directory named {{tempdir/}} as shown below, the sub-directory was created at startup and deleted at shutdown, leaving the parent directory intact: {code:xml} <Set name="tempDirectory"><Property name="jetty.base" default="."/>/solr-webapp/tempdir</Set> {code} If there are no objections, I'll add the {{persistTempDirectory=true}} setting to {{solr-jetty-context.xml}} tomorrow. 
Jetty 9 --- Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2442 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2442/ 4 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.testDistribSearch Error Message: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:12194/c8n_1x2_shard1_replica2 Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:12194/c8n_1x2_shard1_replica2 at __randomizedtesting.SeedInfo.seed([E327129DD08BD9FE:62C19C85A7D4B9C2]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736) at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480) at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201) at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (SOLR-2035) Add a VelocityResponseWriter $resource tool for locale-specific string lookups.
[ https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-2035: --- Summary: Add a VelocityResponseWriter $resource tool for locale-specific string lookups. (was: Add a VelocityResponseWriter $resource tool to for locale-specific string lookups.) Add a VelocityResponseWriter $resource tool for locale-specific string lookups. --- Key: SOLR-2035 URL: https://issues.apache.org/jira/browse/SOLR-2035 Project: Solr Issue Type: Improvement Components: Response Writers Reporter: Erik Hatcher Assignee: Erik Hatcher Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-2035.patch Being able to look up string resources through Java's ResourceBundle facility can be really useful in Velocity templates (through VelocityResponseWriter). Velocity Tools includes a ResourceTool.
[jira] [Updated] (SOLR-2035) Add a VelocityResponseWriter $resource tool to for locale-specific string lookups.
[ https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-2035: --- Summary: Add a VelocityResponseWriter $resource tool to for locale-specific string lookups. (was: Add Velocity's ResourceTool to allow for i18n string lookups) Add a VelocityResponseWriter $resource tool to for locale-specific string lookups. -- Key: SOLR-2035 URL: https://issues.apache.org/jira/browse/SOLR-2035 Project: Solr Issue Type: Improvement Components: Response Writers Reporter: Erik Hatcher Assignee: Erik Hatcher Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-2035.patch Being able to look up string resources through Java's ResourceBundle facility can be really useful in Velocity templates (through VelocityResponseWriter). Velocity Tools includes a ResourceTool.
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2443 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2443/ 5 tests failed. REGRESSION: org.apache.solr.core.TestDynamicLoading.testDistribSearch Error Message: Could not successfully add blob { responseHeader:{ status:0, QTime:0}, response:{ numFound:1, start:0, docs:[{ id:test/1, md5:2559cb7a6c8ca1b12ec3d5dd66172650, blobName:test, version:1, timestamp:2015-01-07T20:43:46.153Z, size:5222}]}} Stack Trace: java.lang.AssertionError: Could not successfully add blob { responseHeader:{ status:0, QTime:0}, response:{ numFound:1, start:0, docs:[{ id:test/1, md5:2559cb7a6c8ca1b12ec3d5dd66172650, blobName:test, version:1, timestamp:2015-01-07T20:43:46.153Z, size:5222}]}} at __randomizedtesting.SeedInfo.seed([B640ABD38869D84E:37A625CBFF36B872]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:146) at org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:111) at org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:64) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_67) - Build # 11698 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11698/ Java: 32bit/jdk1.7.0_67 -client -XX:+UseG1GC (asserts: false) 1 tests failed. FAILED: org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch Error Message: There are still nodes recoverying - waited for 30 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 30 seconds at __randomizedtesting.SeedInfo.seed([6242DECD25D12107:E3A450D5528E413B]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:835) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1454) at org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:69) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Updated] (SOLR-6840) Remove legacy solr.xml mode
[ https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated SOLR-6840: Attachment: SOLR-6840.patch Latest patch - there are still a couple of test failures, mainly to do with Cloud setups. I'm working on those now. Remove legacy solr.xml mode --- Key: SOLR-6840 URL: https://issues.apache.org/jira/browse/SOLR-6840 Project: Solr Issue Type: Task Reporter: Steve Rowe Assignee: Erick Erickson Priority: Blocker Fix For: 5.0 Attachments: SOLR-6840.patch, SOLR-6840.patch, SOLR-6840.patch On the [Solr Cores and solr.xml page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml], the Solr Reference Guide says: {quote} Starting in Solr 4.3, Solr will maintain two distinct formats for {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we have become accustomed to, in which all of the cores one wishes to define in a Solr instance are defined in {{solr.xml}} in {{<cores><core/>...<core/></cores>}} tags. This format will continue to be supported through the entire 4.x code line. As of Solr 5.0 this form of solr.xml will no longer be supported. Instead Solr will support _core discovery_. [...] The new core discovery mode structure for solr.xml will become mandatory as of Solr 5.0, see: Format of solr.xml. {quote} AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
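To make the contrast with legacy mode concrete, here is a minimal sketch of the discovery-mode layout the Reference Guide refers to (the core name and paths are illustrative): core discovery drops the {{<cores><core/></cores>}} listing from solr.xml and instead marks each core directory with a core.properties file.

```
solr_home/
  solr.xml            # discovery mode: no <cores>/<core> elements here
  collection1/
    core.properties   # presence of this file marks the directory as a core
    conf/
    data/

# core.properties can be as small as a single line:
name=collection1
```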
[jira] [Closed] (SOLR-5457) Admin UI - Reload Core/Collection from 'Files' Page
[ https://issues.apache.org/jira/browse/SOLR-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson closed SOLR-5457. Resolution: Won't Fix Related to SOLR-5287. Since we're removing the capability of modifying the conf files from all code, this is no longer relevant. Admin UI - Reload Core/Collection from 'Files' Page --- Key: SOLR-5457 URL: https://issues.apache.org/jira/browse/SOLR-5457 Project: Solr Issue Type: Improvement Components: web gui Reporter: Stefan Matheis (steffkes) Assignee: Stefan Matheis (steffkes) Fix For: Trunk, 4.9 To improve the workflow we introduced in SOLR-5446, we should add a Reload Core (resp. Collection) button on the 'Files' page, so that one can see the changes they actually made, live. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1998 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1998/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true) 1 tests failed. FAILED: org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch Error Message: Could not get expected value A val for path [params, a] full output null Stack Trace: java.lang.AssertionError: Could not get expected value A val for path [params, a] full output null at __randomizedtesting.SeedInfo.seed([331AF4C375B6014:82D7215440040028]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:256) at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:129) at org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:61) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor89.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Resolved] (LUCENE-6149) Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup
[ https://issues.apache.org/jira/browse/LUCENE-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe resolved LUCENE-6149. --- Resolution: Fixed Lucene Fields: (was: New,Patch Available) Fixed. Thanks Boon! Infix suggesters' highlighting, allTermsRequired options are hardwired and not configurable for non-contextual lookup - Key: LUCENE-6149 URL: https://issues.apache.org/jira/browse/LUCENE-6149 Project: Lucene - Core Issue Type: Improvement Components: modules/other Affects Versions: 4.9, 4.10.1, 4.10.2, 4.10.3 Reporter: Boon Low Assignee: Tomás Fernández Löbbe Priority: Minor Labels: suggester Fix For: 5.0, Trunk Attachments: LUCENE-6149-v4.10.3.patch, LUCENE-6149.patch, LUCENE-6149.patch, LUCENE-6149.patch Highlighting and allTermsRequired are hardwired in _AnalyzingInfixSuggester_ for non-contextual lookup (via _Lookup_), see *true*, *true* below: {code:title=AnalyzingInfixSuggester.java (extends Lookup.java)} public List<LookupResult> lookup(CharSequence key, Set<BytesRef> contexts, boolean onlyMorePopular, int num) throws IOException { return lookup(key, contexts, num, true, true); } /** Lookup, without any context. */ public List<LookupResult> lookup(CharSequence key, int num, boolean allTermsRequired, boolean doHighlight) throws IOException { return lookup(key, null, num, allTermsRequired, doHighlight); } {code} {code:title=Lookup.java} public List<LookupResult> lookup(CharSequence key, boolean onlyMorePopular, int num) throws IOException { return lookup(key, null, onlyMorePopular, num); } {code} The above means that most of the current infix suggester lookups always return highlighted results with allTermsRequired in effect. There is no way to change this despite the options and improvement of LUCENE-6050, made to incorporate Boolean lookup clauses (MUST/SHOULD). This shortcoming has also been reported in SOLR-6648. 
The suggesters (AnalyzingInfixSuggester, BlendedInfixSuggester) should provide a proper mechanism to set defaults for highlighting and allTermsRequired, e.g. in constructors (and in Solr factories, thus configurable via solrconfig.xml). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
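The constructor-default approach proposed above can be sketched in isolation (this is a hypothetical stand-in, not the actual AnalyzingInfixSuggester API): the short lookup() overload delegates to per-instance defaults set at construction time, instead of hardwiring {{true, true}}.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the pattern: defaults for allTermsRequired and
// doHighlight live in the instance, and the short overload honors them.
class SketchInfixSuggester {
    private final boolean allTermsRequired;
    private final boolean doHighlight;

    SketchInfixSuggester(boolean allTermsRequired, boolean doHighlight) {
        this.allTermsRequired = allTermsRequired;
        this.doHighlight = doHighlight;
    }

    // Short overload: delegates to the configured defaults.
    List<String> lookup(CharSequence key, int num) {
        return lookup(key, num, allTermsRequired, doHighlight);
    }

    // Full overload: a real implementation would query the index; here we
    // just echo the effective flags so the delegation is visible.
    List<String> lookup(CharSequence key, int num,
                        boolean allTermsRequired, boolean doHighlight) {
        return Collections.singletonList(
            key + ":allTermsRequired=" + allTermsRequired
                + ",doHighlight=" + doHighlight);
    }
}
```

Callers that need a one-off override still use the full overload; everyone else gets the defaults chosen at construction (and, in Solr, those defaults could be fed from factory attributes in solrconfig.xml).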
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11866 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11866/ Java: 64bit/jdk1.9.0-ea-b34 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: false) 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: ERROR: SolrZkClient opens=17 closes=16 Stack Trace: java.lang.AssertionError: ERROR: SolrZkClient opens=17 closes=16 at __randomizedtesting.SeedInfo.seed([B61275133AF60A28]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:462) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:188) at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: 5 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=6465, name=zkCallback-692-thread-3, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)2) Thread[id=6464, name=zkCallback-692-thread-2, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)3) Thread[id=6333, name=zkCallback-692-thread-1, state=TIMED_WAITING,
[jira] [Updated] (SOLR-2035) Add a VelocityResponseWriter $resource tool for locale-specific string lookups
[ https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher updated SOLR-2035: --- Summary: Add a VelocityResponseWriter $resource tool for locale-specific string lookups (was: Add a VelocityResponseWriter $resource tool for locale-specific string lookups.) Add a VelocityResponseWriter $resource tool for locale-specific string lookups -- Key: SOLR-2035 URL: https://issues.apache.org/jira/browse/SOLR-2035 Project: Solr Issue Type: Improvement Components: Response Writers Reporter: Erik Hatcher Assignee: Erik Hatcher Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-2035.patch Being able to look up string resources through Java's ResourceBundle facility can be really useful in Velocity templates (through VelocityResponseWriter). Velocity Tools includes a ResourceTool. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268243#comment-14268243 ] Shalin Shekhar Mangar commented on SOLR-4839: - Fixed, thanks Mark! Jetty 9 --- Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268242#comment-14268242 ] ASF subversion and git services commented on SOLR-4839: --- Commit 1650169 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1650169 ] SOLR-4839: Remove dependency to jetty.orbit Jetty 9 --- Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11864 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11864/ Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: true) 3 tests failed. FAILED: org.apache.solr.client.solrj.SolrSchemalessExampleTest.testStreamingRequest Error Message: Error from server at http://127.0.0.1:55129/solr/collection1: undefined field: cat Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:55129/solr/collection1: undefined field: cat at __randomizedtesting.SeedInfo.seed([ACD022C903CA8F30:6B395AC721C12463]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124) at org.apache.solr.client.solrj.SolrExampleTestsBase.testStreamingRequest(SolrExampleTestsBase.java:230) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Commented] (SOLR-6024) StatsComponent does not work for docValues enabled multiValued fields
[ https://issues.apache.org/jira/browse/SOLR-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267630#comment-14267630 ] Elran Dvir commented on SOLR-6024: -- Hi All, I am trying to apply this patch on Solr 4.8. I have compilation problems with the class DocValuesStats. I get the following errors: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project solr-core: Compilation failure: Compilation failure: [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[88,21] cannot find symbol [ERROR] symbol: method emptySortedSet() [ERROR] location: class org.apache.lucene.index.DocValues [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[116,28] cannot find symbol [ERROR] symbol: method emptySortedSet() [ERROR] location: class org.apache.lucene.index.DocValues [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[128,28] cannot find symbol [ERROR] symbol: method emptySorted() [ERROR] location: class org.apache.lucene.index.DocValues [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[139,34] method lookupOrd in class org.apache.lucene.index.SortedSetDocValues cannot be applied to given types; [ERROR] required: long,org.apache.lucene.util.BytesRef [ERROR] found: int [ERROR] reason: actual and formal argument lists differ in length [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[165,55] cannot find symbol [ERROR] symbol: method getGlobalOrds(int) [ERROR] location: variable map of type org.apache.lucene.index.MultiDocValues.OrdinalMap [ERROR] /D:/ckp/src/solr_4.8/solr/core/src/java/org/apache/solr/request/DocValuesStats.java:[183,55] cannot find symbol [ERROR] symbol: method getGlobalOrds(int) [ERROR] location: variable map of type 
org.apache.lucene.index.MultiDocValues.OrdinalMap I guess these methods are implemented in newer versions of Solr/Lucene. How can I fix it in 4.8? What is the oldest version the patch can be applied to? Thank you very much. StatsComponent does not work for docValues enabled multiValued fields - Key: SOLR-6024 URL: https://issues.apache.org/jira/browse/SOLR-6024 Project: Solr Issue Type: Bug Components: SearchComponents - other Affects Versions: 4.8 Environment: java version 1.7.0_45 Mac OS X Version 10.7.5 Reporter: Ahmet Arslan Assignee: Tomás Fernández Löbbe Labels: StatsComponent, docValues, multiValued Fix For: 4.10.1, 5.0, Trunk Attachments: SOLR-6024-branch_4x.patch, SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024-trunk.patch, SOLR-6024.patch, SOLR-6024.patch Harish Agarwal reported this on the solr-user mailing list: http://search-lucene.com/m/QTPaoTJXV1 It is easy to reproduce with the default example Solr setup. The following fields are added to the example schema.xml, and the exampledocs are indexed. 
{code:xml} <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/> <field name="popularity" type="int" indexed="true" stored="false" docValues="true" multiValued="true"/> {code} When {{docValues=true}} *and* {{multiValued=true}} are used at the same time, StatsComponent throws: {noformat} ERROR org.apache.solr.core.SolrCore – org.apache.solr.common.SolrException: Type mismatch: popularity was indexed as SORTED_SET at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:193) at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:699) at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:319) at org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:290) at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:78) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1964) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6838) Bulk loading with the default of updateDocument blocks all indexing for long periods of time.
[ https://issues.apache.org/jira/browse/SOLR-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268259#comment-14268259 ] Shalin Shekhar Mangar commented on SOLR-6838: - Perhaps this slowness is related to LUCENE-6161? Bulk loading with the default of updateDocument blocks all indexing for long periods of time. - Key: SOLR-6838 URL: https://issues.apache.org/jira/browse/SOLR-6838 Project: Solr Issue Type: Sub-task Reporter: Mark Miller -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6923) kill -9 doesn't change the replica state in clusterstate.json
[ https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268278#comment-14268278 ] Timothy Potter commented on SOLR-6923: -- The actual runtime state of a replica is determined by 1) what's in clusterstate.json and 2) whether the node hosting the replica is live. If the node is not live, the state reported in clusterstate.json can be stale for some time. It has always worked this way in SolrCloud. Thus, AutoAddReplicas needs to consult /live_nodes before trusting a replica's published state. kill -9 doesn't change the replica state in clusterstate.json - Key: SOLR-6923 URL: https://issues.apache.org/jira/browse/SOLR-6923 Project: Solr Issue Type: Bug Reporter: Varun Thacker - I did the following {code} ./solr start -e cloud -noprompt kill -9 pid-of-node2 // Not the node which is running ZK {code} - /live_nodes reflects that the node is gone. - This is the only message which gets logged on the node1 server after killing node2 {code} 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN org.apache.zookeeper.server.NIOServerCnxn – caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x14ac40f26660001, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:745) {code} - The graph shows node2 in the 'Gone' state - clusterstate.json keeps showing the replica as 'active' {code} {"collection1":{ "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node1":{ "state":"active", "core":"collection1", "node_name":"169.254.113.194:8983_solr", "base_url":"http://169.254.113.194:8983/solr", "leader":"true"}, "core_node2":{ "state":"active", "core":"collection1", "node_name":"169.254.113.194:8984_solr", "base_url":"http://169.254.113.194:8984/solr"}}}}, "maxShardsPerNode":"1", "router":{"name":"compositeId"}, "replicationFactor":"1",
autoAddReplicas":"false", "autoCreated":"true"}} {code} One immediate problem I can see is that AutoAddReplicas doesn't work since clusterstate.json never changes. There might be more features which are affected by this. On first thought I think we can handle this - the shard leader could listen to changes on /live_nodes and, if it has replicas that were on that node, mark them as 'down' in clusterstate.json? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
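Tim's two-part check (published state in clusterstate.json *and* presence of the node's ephemeral /live_nodes entry) can be sketched as follows; the function name and dict shapes are illustrative, not Solr's actual API:

```python
def effective_replica_state(replica: dict, live_nodes: set) -> str:
    """Combine a replica's last-published state with node liveness.

    clusterstate.json can still say "active" after a kill -9: a dead
    process never publishes "down", but its ephemeral /live_nodes entry
    does disappear, so liveness must be checked separately.
    """
    if replica["node_name"] not in live_nodes:
        return "down"        # node is gone; the published state is stale
    return replica["state"]  # node is live; trust the published state

# A replica left "active" in clusterstate.json after its node was killed:
replica = {"state": "active", "node_name": "169.254.113.194:8984_solr"}
live_nodes = {"169.254.113.194:8983_solr"}  # node2's ephemeral znode expired
print(effective_replica_state(replica, live_nodes))  # down
```

Anything consuming cluster state (AutoAddReplicas included) would need this intersection, not clusterstate.json alone.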
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268311#comment-14268311 ] Steve Rowe commented on SOLR-4839: -- [~markrmil...@gmail.com], the cloudera repo's copy of the new restlet servlet jar (upgraded by this issue) is instead some kind of error html page: http://repository.cloudera.com/artifactory/repo/org/restlet/jee/org.restlet.ext.servlet/2.3.0/org.restlet.ext.servlet-2.3.0.jar - can you get somebody to clean that up? The workaround in the meantime is to comment out the cloudera maven repo in {{lucene/ivy-settings.xml}}, do a resolve to download the proper artifact from the restlet maven repo, then un-comment it. Jetty 9 --- Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.
[ https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268273#comment-14268273 ] Shalin Shekhar Mangar commented on SOLR-6839: - +1 LGTM Direct routing with CloudSolrServer will ignore the Overwrite document option. -- Key: SOLR-6839 URL: https://issues.apache.org/jira/browse/SOLR-6839 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, Trunk Attachments: SOLR-6839.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6872) Starting techproduct example fails on Trunk with Version is too old for PackedInts
[ https://issues.apache.org/jira/browse/SOLR-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268287#comment-14268287 ] Hoss Man commented on SOLR-6872: can you still reproduce this? i suspect this was caused by having an existing index (from a previous run of bin/solr -e techproducts) after the code changed. (in general, for trunk, if you are going to svn up and recompile, you need to ant clean to blow away any existing indexes) Starting techproduct example fails on Trunk with Version is too old for PackedInts Key: SOLR-6872 URL: https://issues.apache.org/jira/browse/SOLR-6872 Project: Solr Issue Type: Bug Affects Versions: Trunk Reporter: Alexandre Rafalovitch Priority: Blocker Fix For: Trunk {quote} bin/solr -e techproducts {quote} causes: {quote} ... Caused by: java.lang.ExceptionInInitializerError at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.<init>(Lucene50PostingsWriter.java:111) at org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsConsumer(Lucene50PostingsFormat.java:429) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:196) at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:107) at org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:112) at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:420) at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:504) at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:614) at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2714) Caused by: java.lang.IllegalArgumentException: Version is too old, should be at least 2 (got 0) at org.apache.lucene.util.packed.PackedInts.checkVersion(PackedInts.java:77) at org.apache.lucene.util.packed.PackedInts.getDecoder(PackedInts.java:742) {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) -
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6925) Back out all changes having to do with SOLR-5287
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-6925: - Attachment: SOLR-6925.patch Back out all changes having to do with SOLR-5287 Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant I sez to myself... except it's a significant security hole and I'm really glad that was pointed out before we released it in 4x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously and we'd deal with it later. Well it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5x. Marking it as a blocker so we don't release 5x with this in it or we'll have back-compat issues. Should have a fix in very quickly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6925) Back out all changes having to do with SOLR-5287
[ https://issues.apache.org/jira/browse/SOLR-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268352#comment-14268352 ] Erick Erickson commented on SOLR-6925: -- I had three non-reproducible test failures that I want to look at some more, but I think they're things that are failing currently. [~steffkes] I left the changes in the javascript code (e.g. SOLR-5446 and SOLR-5456) on the theory that I don't know enough about javascript to risk changing them, and the back-end code is gone anyway, so I don't think leaving those changes alone is a problem. Just FYI in case you're interested. [~anshumg] I assume from the dev list that 'next Thursday' as a target for 5.0 RC0 means the 15th, correct? If I can satisfy myself that the test failures don't have anything to do with this patch, I'll commit tomorrow sometime (or perhaps even tonight). Back out all changes having to do with SOLR-5287 Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Attachments: SOLR-6925.patch Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant I sez to myself... except it's a significant security hole and I'm really glad that was pointed out before we released it in 4x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously and we'd deal with it later. Well it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5x. Marking it as a blocker so we don't release 5x with this in it or we'll have back-compat issues. Should have a fix in very quickly.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6643) Core load silently aborted if missing schema has depenencies - LinkageErrors swollowed
[ https://issues.apache.org/jira/browse/SOLR-6643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-6643: --- Attachment: SOLR-6643.patch bq. I think it's because CoreContainer.create() is catching Exceptions, not Errors, when updating the coreInitFailure map. Sure - but the piece i couldn't make sense of is why/where testJavaLangErrorFromHandlerOnStartup passed (and those Errors were in coreInitFailures) but testJavaLangErrorFromSchemaOnStartup didn't. digging around a bit more, it looks like this is because SolrCore() is wrapping _some_ types of Throwable in SolrException (shudder), but the IndexSchema already exists before the SolrCore constructor is called, and any Errors that come from it don't get similar wrapping. bq. Maybe the best solution is for SolrResourceLoader to try and catch LinkageErrors and rethrow it as a SolrException. Catching classloader problems is I think within the resource loader's remit (unlike out of memory errors, etc). maybe - but that's a slippery slope i'd rather avoid -- catching and re-throwing Errors is one thing, *wrapping* Errors in Exceptions is something i'm very much not a fan of. i think a safer (and more all-encompassing) fix would be for CoreContainer to handle wrapping Errors in SolrException - not for the purpose of re-throwing, but just for tracking in coreInitFailures. that way even for things like OOM or IOError during core init, we still have a note about it in coreInitFailures.
--- Attaching an updated patch that goes this direction - still running tests, but review/comments appreciated Core load silently aborted if missing schema has depenencies - LinkageErrors swollowed -- Key: SOLR-6643 URL: https://issues.apache.org/jira/browse/SOLR-6643 Project: Solr Issue Type: Bug Components: Schema and Analysis Affects Versions: 4.10.1 Reporter: Jan Høydahl Priority: Minor Labels: logging Attachments: SOLR-6643.patch, SOLR-6643.patch *How to reproduce* # Start with standard collection1 config # Add a field type to schema using the ICU contrib, no need for a field {code:XML} <fieldType name="text_icu" class="solr.TextField"> <analyzer><tokenizer class="solr.ICUTokenizerFactory"/></analyzer> </fieldType> {code} # {{cd example}} # {{mkdir solr/lib}} # {{cp ../contrib/analysis-extras/lucene-libs/lucene-analyzers-icu-4.10.1.jar solr/lib/}} # {{bin/solr -f}} # Core is not loaded, and no messages in log after this line {code} ... INFO org.apache.solr.schema.IndexSchema – [collection1] Schema name=example {code} Note that we did *not* add the dependency libs from {{analysis-extras/lib}}, so we'd expect a {{ClassNotFoundException}}, but somehow the initialization of the schema aborts silently. The ICUTokenizerFactory is instantiated by reflection and I suspect that some exception is swallowed in {{AbstractPluginLoader#create()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
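Hoss's direction — wrap Errors only for bookkeeping in coreInitFailures while re-raising the original untouched — can be sketched in Python, with BaseException standing in for Java's Error; all names here are illustrative, not Solr's:

```python
core_init_failures = {}

class CoreInitRecord(Exception):
    """Wrapper used only for tracking a failure, never for swallowing it."""

def create_core(name, init_fn):
    try:
        return init_fn()
    except BaseException as e:
        # Record a wrapped copy so even Error-like failures (OOM, linkage
        # problems) show up in the failure map...
        core_init_failures[name] = CoreInitRecord(f"{name}: {e!r}")
        raise  # ...but re-raise the original, unwrapped

def bad_init():
    # Stand-in for a java.lang.Error thrown during schema initialization.
    raise MemoryError("simulated error during core init")

try:
    create_core("collection1", bad_init)
except MemoryError:
    pass  # the original error still propagated to the caller
print(type(core_init_failures["collection1"]).__name__)  # CoreInitRecord
```

The key design point is that wrapping happens only on the bookkeeping path; callers always see the original Throwable.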
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4540 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4540/ Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false) 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.update.processor.UUIDUpdateProcessorFallbackTest Error Message: Suite timeout exceeded (= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (= 720 msec). at __randomizedtesting.SeedInfo.seed([65D1A753EA7FAF3A]:0) FAILED: org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch Error Message: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:51173/m_/vh/repfacttest_c8n_1x3_shard1_replica2 Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:51173/m_/vh/repfacttest_c8n_1x3_shard1_replica2 at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736) at org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277) at org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868) at sun.reflect.GeneratedMethodAccessor81.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) 
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-4.10-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 218 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/218/ Java: 32bit/jdk1.8.0_40-ea-b09 -server -XX:+UseSerialGC (asserts: true) 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.MultiThreadedOCPTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.MultiThreadedOCPTest: 1) Thread[id=2104, name=OverseerThreadFactory-950-thread-5, state=TIMED_WAITING, group=Overseer collection creation process.] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627) at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1509) at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563) at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.MultiThreadedOCPTest: 1) Thread[id=2104, name=OverseerThreadFactory-950-thread-5, state=TIMED_WAITING, group=Overseer collection creation process.] 
at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1627) at org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1509) at org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563) at org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2629) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([EDDB1E78339685A8]:0) Build Log: [...truncated 11384 lines...] [junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest [junit4] 2 Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/solr/build/solr-core/test/J0/./solr.cloud.MultiThreadedOCPTest-EDDB1E78339685A8-001/init-core-data-001 [junit4] 2 609223 T1917 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (false) [junit4] 2 609223 T1917 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /cvm/ss [junit4] 2 609225 T1917 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch [junit4] 2 609225 T1917 oasc.ZkTestServer.run STARTING ZK TEST SERVER [junit4] 1 client port:0.0.0.0/0.0.0.0:0 [junit4] 2 609226 T1918 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server [junit4] 2 609326 T1917 oasc.ZkTestServer.run start zk server on port:59055 [junit4] 2 609327 T1917 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper [junit4] 2 609328 T1924 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@1dab5b5 name:ZooKeeperConnection Watcher:127.0.0.1:59055 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 609329 T1917 
oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper [junit4] 2 609329 T1917 oascc.SolrZkClient.makePath makePath: /solr [junit4] 2 609331 T1917 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper [junit4] 2 609332 T1926 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@7daca4 name:ZooKeeperConnection Watcher:127.0.0.1:59055/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 609332 T1917 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper [junit4] 2 609332 T1917 oascc.SolrZkClient.makePath makePath: /collections/collection1 [junit4] 2 609334 T1917 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards [junit4] 2 609335 T1917 oascc.SolrZkClient.makePath makePath: /collections/control_collection [junit4] 2 609336 T1917 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards [junit4] 2 609337 T1917 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml [junit4] 2 609337 T1917 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml [junit4] 2 609338 T1917
[jira] [Assigned] (SOLR-6367) empty tolg on HDFS when hard crash - no docs to replay on recovery
[ https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man reassigned SOLR-6367: -- Assignee: Mark Miller miller: can you take a look at Venkata's proposed solution? is this a viable easy win for 5.0? (the HdfsTransactionLog code in question follows the same flushBuffer() instead of flush() model used in the TransactionLog parent class -- but is flush() more appropriate in this case because of how FSDataOutputStream is wrapped in the HDFS case? ) empty tolg on HDFS when hard crash - no docs to replay on recovery -- Key: SOLR-6367 URL: https://issues.apache.org/jira/browse/SOLR-6367 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Mark Miller Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 Jul 2014)... {panel} Reproduce steps: 1) Setup Solr to run on HDFS like this: {noformat} java -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=hdfs://host:port/path {noformat} For the purpose of this testing, turn off the default auto commit in solrconfig.xml, i.e. 
comment out autoCommit like this: {code} <!-- <autoCommit> <maxTime>${solr.autoCommit.maxTime:15000}</maxTime> <openSearcher>false</openSearcher> </autoCommit> --> {code} 2) Add a document without commit: {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H "Content-type:text/xml; charset=utf-8" --data-binary @solr.xml}} 3) Solr generates empty tlog files (0 file size, the last one ends with 6): {noformat} [hadoop@hdtest042 exampledocs]$ hadoop fs -ls /path/collection1/core_node1/data/tlog Found 5 items -rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47 /path/collection1/core_node1/data/tlog/tlog.001 -rw-r--r-- 1 hadoop hadoop 67 2014-07-18 08:47 /path/collection1/core_node1/data/tlog/tlog.003 -rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47 /path/collection1/core_node1/data/tlog/tlog.004 -rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02 /path/collection1/core_node1/data/tlog/tlog.005 -rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02 /path/collection1/core_node1/data/tlog/tlog.006 {noformat} 4) Simulate a Solr crash by killing the process with the -9 option. 5) Restart the Solr process. The observation is that uncommitted documents are not replayed, and files in the tlog directory are cleaned up. Hence uncommitted document(s) are lost. Am I missing anything or is this a bug? BTW, additional observations: a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a non-empty tlog file is generated and after re-starting Solr, the uncommitted document is replayed as expected. b) If Solr doesn't run on HDFS (i.e. on a local file system), this issue is not observed either. {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
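The flushBuffer()-vs-flush() distinction Hoss asks about behaves much like a BufferedWriter over a raw file: draining only the wrapper's buffer leaves nothing on disk, which is consistent with the 0-byte tlog files above. A hypothetical local-filesystem sketch (HDFS adds its own hflush/hsync layer on top of this):

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "tlog")
raw = open(path, "wb", buffering=0)     # unbuffered OS-level stream
buffered = io.BufferedWriter(raw)       # wrapper with its own buffer

buffered.write(b'{"add": {"doc": 1}}')  # 19 bytes, held in the wrapper
before = os.path.getsize(path)          # still 0 on disk

buffered.flush()                        # push through to the raw stream
after = os.path.getsize(path)           # now 19

print(before, after)  # 0 19
buffered.close()
```

A hard kill between the write and the flush loses the buffered bytes, exactly like the kill -9 scenario in the report.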
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268711#comment-14268711 ] Steve Rowe commented on SOLR-4839: -- Currently {{ant example}} (soon to be renamed to {{ant server}} or maybe removed entirely - see SOLR-6926) deletes the exploded war from {{server/solr-webapp/}} (while leaving the parent dir intact). I was curious whether this is necessary - perhaps Jetty is smart enough to do (recursive) timestamp comparison, re-exploding the war if its timestamp is newer than any of its exploded contents. TL;DR: Jetty apparently is *not* smart enough to re-explode the war when it's newer than its previously exploded contents, so we should continue to purge the exploded war as part of {{ant example}} (or {{ant server}} or whatever we end up doing to build the war). Here's how I tested: I set {{persistTempDirectory=true}} (see above), then: {noformat} ant clean example bin/solr start bin/solr stop cp -r server/solr-webapp{,.save} ant clean example # purges the exploded war from server/solr-webapp/ rmdir server/solr-webapp cp -r server/solr-webapp{.save,} bin/solr start bin/solr stop diff -r server/solr-webapp* {noformat} {{diff}}'s output was empty (no differences) - so when the war is newer than the exploded contents, Jetty does not re-explode the war. 
To confirm that if Jetty had exploded the war, there would be differences, I did the following (without first running {{ant example}}): {noformat} rm -rf server/solr-webapp/* bin/solr start bin/solr stop diff -r server/solr-webapp* {noformat} This time, there were differences in three files: {noformat} diff -r server/solr-webapp/webapp/META-INF/MANIFEST.MF server/solr-webapp.save/webapp/META-INF/MANIFEST.MF 4,5c4,5 < Implementation-Version: 6.0.0-SNAPSHOT 1650174 - sarowe - 2015-01-07 20:24:25 --- > Implementation-Version: 6.0.0-SNAPSHOT 1650174 - sarowe - 2015-01-07 18:33:45 Binary files server/solr-webapp/webapp/WEB-INF/lib/solr-core-6.0.0-SNAPSHOT.jar and server/solr-webapp.save/webapp/WEB-INF/lib/solr-core-6.0.0-SNAPSHOT.jar differ Binary files server/solr-webapp/webapp/WEB-INF/lib/solr-solrj-6.0.0-SNAPSHOT.jar and server/solr-webapp.save/webapp/WEB-INF/lib/solr-solrj-6.0.0-SNAPSHOT.jar differ {noformat} Jetty 9 --- Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
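The recursive timestamp comparison Steve tested for (re-explode if the war is newer than any extracted file) is cheap to express; this is a sketch of the check Jetty apparently does not do, not Jetty's code:

```python
import os
import tempfile
import time

def needs_reexplode(war_path: str, exploded_dir: str) -> bool:
    """True if the war is newer than any file in the exploded copy."""
    if not os.path.isdir(exploded_dir):
        return True  # nothing exploded yet
    war_mtime = os.path.getmtime(war_path)
    for root, _dirs, files in os.walk(exploded_dir):
        for name in files:
            if os.path.getmtime(os.path.join(root, name)) < war_mtime:
                return True  # some extracted file predates the war
    return False

# Demo with a fake war and exploded dir: touch the war *after* the
# extracted file, and the check asks for a re-explode.
d = tempfile.mkdtemp()
war, exploded = os.path.join(d, "solr.war"), os.path.join(d, "webapp")
os.mkdir(exploded)
open(os.path.join(exploded, "index.html"), "w").close()
os.utime(os.path.join(exploded, "index.html"), (time.time() - 100,) * 2)
open(war, "w").close()  # war mtime = now, newer than index.html
print(needs_reexplode(war, exploded))  # True
```

Since Jetty does no such check, purging {{server/solr-webapp/}} in the build remains the safe option.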
[jira] [Commented] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions
[ https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268745#comment-14268745 ] Tomás Fernández Löbbe commented on SOLR-6648: - Thanks for the patch [~boonious], would you add a test case? AnalyzingInfixLookupFactory always highlights suggestions - Key: SOLR-6648 URL: https://issues.apache.org/jira/browse/SOLR-6648 Project: Solr Issue Type: Sub-task Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1 Reporter: Varun Thacker Assignee: Tomás Fernández Löbbe Labels: suggester Fix For: 5.0, Trunk Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch When using AnalyzingInfixLookupFactory suggestions always return with the match term highlighted and 'allTermsRequired' is always set to true. We should be able to configure those. Steps to reproduce - solrconfig additions {code} <searchComponent name="suggest" class="solr.SuggestComponent"> <lst name="suggester"> <str name="name">mySuggester</str> <str name="lookupImpl">AnalyzingInfixLookupFactory</str> <str name="dictionaryImpl">DocumentDictionaryFactory</str> <str name="field">suggestField</str> <str name="weightField">weight</str> <str name="suggestAnalyzerFieldType">textSuggest</str> </lst> </searchComponent> <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy"> <lst name="defaults"> <str name="suggest">true</str> <str name="suggest.count">10</str> </lst> <arr name="components"> <str>suggest</str> </arr> </requestHandler> {code} schema changes - {code} <fieldType class="solr.TextField" name="textSuggest" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.StandardFilterFactory"/> <filter class="solr.LowerCaseFilterFactory"/> </analyzer> </fieldType> <field name="suggestField" type="textSuggest" indexed="true" stored="true"/> {code} Add 3 documents - {code} curl http://localhost:8983/solr/update/json?commit=true -H 'Content-type:application/json' -d ' [ {id : 1, suggestField : bass fishing}, {id : 2, suggestField : sea bass}, {id : 3, suggestField : sea bass fishing} ] ' {code} Query -
{code} http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on {code} Response {code} { "responseHeader":{ "status":0, "QTime":25}, "command":"build", "suggest":{"mySuggester":{ "bass":{ "numFound":3, "suggestions":[{ "term":"<b>bass</b> fishing", "weight":0, "payload":""}, { "term":"sea <b>bass</b>", "weight":0, "payload":""}, { "term":"sea <b>bass</b> fishing", "weight":0, "payload":""}]}}}} {code} The problem is in SolrSuggester line 200 where we call lookup.lookup(). This call does not pass allTermsRequired and doHighlight since they're only tunable for AnalyzingInfixSuggester and not the other lookup implementations. If different Lookup implementations take different params in their constructors, these sorts of issues will always keep happening. Maybe we should not keep it generic, and instead do instanceof checks and set params accordingly? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
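The instanceof-based dispatch the reporter floats at the end could look like this sketch; the class names echo, but are not, the real Lucene/Solr ones, and the lookup bodies are stubs:

```python
class Lookup:
    """Lowest-common-denominator suggester interface."""
    def lookup(self, key, num):
        raise NotImplementedError

class AnalyzingInfixLookup(Lookup):
    # The only implementation that understands the two extra knobs.
    def lookup(self, key, num, all_terms_required=True, do_highlight=True):
        term = f"<b>{key}</b>" if do_highlight else key
        return [(term, num)]

class FSTLookup(Lookup):
    def lookup(self, key, num):
        return [(key, num)]

def suggest(lookup, key, num, all_terms_required=True, do_highlight=True):
    # Pass the extra params only to implementations that accept them,
    # instead of forcing one generic signature on every Lookup.
    if isinstance(lookup, AnalyzingInfixLookup):
        return lookup.lookup(key, num, all_terms_required, do_highlight)
    return lookup.lookup(key, num)

print(suggest(AnalyzingInfixLookup(), "bass", 3, do_highlight=False))
```

With this shape, {{do_highlight=False}} turns off the {{<b>…</b>}} markup for the infix lookup while other lookups stay untouched.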
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11868 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11868/ Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false) All tests passed Build Log: [...truncated 19451 lines...] check-licenses: [echo] License check under: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene [licenses] MISSING sha1 checksum file for: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/replicator/lib/javax.servlet-api-3.1.0.jar [licenses] EXPECTED sha1 checksum file : /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/licenses/javax.servlet-api-3.1.0.jar.sha1 [...truncated 1 lines...] BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:519: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:90: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:84: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:62: License check failed. Check the logs. If you recently modified ivy-versions.properties or any module's ivy.xml, make sure you run ant clean-jars jar-checksums before running precommit. Total time: 69 minutes 0 seconds Build step 'Invoke Ant' marked build as failure [description-setter] Description set: Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false) Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.
[ https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267807#comment-14267807 ] ASF subversion and git services commented on SOLR-6761: --- Commit 1650097 from [~thelabdude] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1650097 ] SOLR-6761: Ability to ignore commit and optimize requests from clients when running in SolrCloud mode. Ability to ignore commit and optimize requests from clients when running in SolrCloud mode. Key: SOLR-6761 URL: https://issues.apache.org/jira/browse/SOLR-6761 Project: Solr Issue Type: New Feature Components: SolrCloud, SolrJ Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.0, Trunk Attachments: SOLR-6761.patch, SOLR-6761.patch In most SolrCloud environments, it's advisable to rely only on the auto-commits (soft and hard) configured in solrconfig.xml and not send explicit commit requests from client applications. In fact, I've seen cases where improperly coded client applications send commit requests too frequently, which can harm the cluster's health. As a system administrator, I'd like the ability to disallow commit requests from client applications. Ideally, I could configure the updateHandler to ignore the requests and return an HTTP response code of my choosing, as I may not want to break existing client applications by returning an error. In other words, I may want to just return 200 vs. 405. The same goes for optimize requests.
[jira] [Resolved] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.
[ https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Potter resolved SOLR-6761. -- Resolution: Fixed Fix Version/s: Trunk, 5.0 Ability to ignore commit and optimize requests from clients when running in SolrCloud mode. Key: SOLR-6761 URL: https://issues.apache.org/jira/browse/SOLR-6761 Project: Solr Issue Type: New Feature Components: SolrCloud, SolrJ Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.0, Trunk Attachments: SOLR-6761.patch, SOLR-6761.patch
[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.
[ https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267827#comment-14267827 ] Alexandre Rafalovitch commented on SOLR-6761: - Just to clarify: the implementation itself does not care whether this is SolrCloud mode or not. You are leaving that for the sysadmin to set with the enable property, right? So one could wire it up in standalone mode if they wanted to; nothing prevents them. If so, maybe the description (in the Readme) should say that it allows rejecting commits/optimizes, and something like "primarily useful for SolrCloud mode".
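For reference, wiring this up would presumably look something like the following solrconfig.xml fragment. The processor class name and the statusCode option shown here are assumptions inferred from the issue description (swallow the commit, return 200 instead of 405); check the committed patch for the exact names.

```xml
<!-- Hypothetical sketch: an update chain that ignores client-issued
     commits/optimizes. Class and parameter names are assumptions based
     on the behavior described above, not necessarily the committed API. -->
<updateRequestProcessorChain name="ignore-commit-from-client" default="true">
  <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
    <int name="statusCode">200</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

As Alexandre notes above, nothing in such a chain is SolrCloud-specific; the same configuration would take effect in standalone mode.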
[jira] [Commented] (SOLR-4839) Jetty 9
[ https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267976#comment-14267976 ] Mark Miller commented on SOLR-4839: --- bq. What was the complain? Maybe you aren't using Java 8 on trunk? I see the same thing, and everything is set to and using Java 8. It says the super type methods being overridden do not even exist. Jetty 9 Key: SOLR-4839 URL: https://issues.apache.org/jira/browse/SOLR-4839 Project: Solr Issue Type: Improvement Reporter: Bill Bell Assignee: Shalin Shekhar Mangar Fix For: 5.0, Trunk Attachments: SOLR-4839-fix-eclipse.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch Implement Jetty 9
[jira] [Created] (SOLR-6925) Back out all changes having to do with SOLR-5287
Erick Erickson created SOLR-6925: Summary: Back out all changes having to do with SOLR-5287 Key: SOLR-6925 URL: https://issues.apache.org/jira/browse/SOLR-6925 Project: Solr Issue Type: Bug Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Should have something today/tomorrow. The history here is that I had this bright idea to edit files directly from the admin UI, especially schema.xml and solrconfig.xml. Brilliant, I sez to myself... except it's a significant security hole, and I'm really glad that was pointed out before we released it in 4.x. So we pulled it completely from 4.x and made it something in 5.x (then trunk) that you could enable (disabled by default) if you wanted to live dangerously, and we'd deal with it later. Well, it's later. Given all the work for managed schemas and the like in the interim, I think this is cruft that should be removed completely from current trunk and 5.x. Marking it as a blocker so we don't release 5.x with this in it, or we'll have back-compat issues. Should have a fix in very quickly.
[jira] [Assigned] (SOLR-5523) Implement proper security when writing config files to Solr
[ https://issues.apache.org/jira/browse/SOLR-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-5523: Assignee: Erick Erickson Implement proper security when writing config files to Solr Key: SOLR-5523 URL: https://issues.apache.org/jira/browse/SOLR-5523 Project: Solr Issue Type: Bug Affects Versions: Trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Blocker Follow-up on SOLR-5518 and SOLR-5287. We need to add proper security for writing files to Solr. I can't pursue this for some time. If we decide to pull this out, we just need to remove EditFileRequestHandler; that should do it.
[jira] [Commented] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.
[ https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268023#comment-14268023 ] Mark Miller commented on SOLR-6839: --- This is kind of an ugly bug, given the performance issues that seem to come from using updateDocument - see SOLR-6838. Direct routing with CloudSolrServer will ignore the Overwrite document option. Key: SOLR-6839 URL: https://issues.apache.org/jira/browse/SOLR-6839 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, Trunk Attachments: SOLR-6839.patch
[jira] [Updated] (SOLR-6839) Direct routing with CloudSolrServer will ignore the Overwrite document option.
[ https://issues.apache.org/jira/browse/SOLR-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-6839: -- Attachment: SOLR-6839.patch Test + fix attached. Direct routing with CloudSolrServer will ignore the Overwrite document option. Key: SOLR-6839 URL: https://issues.apache.org/jira/browse/SOLR-6839 Project: Solr Issue Type: Bug Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, Trunk Attachments: SOLR-6839.patch
[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance and fewer spurious connection errors.
[ https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268037#comment-14268037 ] Mark Miller commented on SOLR-4509: --- The final 1% of this patch has gotten a bit complicated, and the patch has grown to the point where it is difficult to maintain. I'm going to spin off and commit a few sub-patches: first, adding a limited retry; second, closing all HttpClient instances properly. That, and a few other fixes/changes I've made along the way, will significantly reduce the size and overhead of this patch. Disable HttpClient stale check for performance and fewer spurious connection errors. Key: SOLR-4509 URL: https://issues.apache.org/jira/browse/SOLR-4509 Project: Solr Issue Type: Improvement Components: search Environment: 5 node SmartOS cluster (all nodes living in same global zone - i.e. same physical machine) Reporter: Ryan Zezeski Assignee: Mark Miller Priority: Minor Fix For: 5.0, Trunk Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, baremetal-stale-nostale-med-latency.svg, baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg By disabling the Apache HTTP Client stale check I've witnessed a 2-4x increase in throughput and a latency reduction of over 100ms. This patch was made in the context of a project I'm leading, called Yokozuna, which relies on distributed search. Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26 Here's a write-up I did on my findings: http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html I'm happy to answer any questions or make changes to the patch to make it acceptable.
ReviewBoard: https://reviews.apache.org/r/28393/
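The "limited retry" being split out above can be sketched generically. This stand-alone helper is illustrative only, not the actual Solr patch: retry an operation a bounded number of times when it throws (e.g. on a stale connection), then give up and rethrow.

```java
import java.util.concurrent.Callable;

// Illustrative sketch of a bounded retry, in the spirit of the
// "limited retry" spin-off described above; not the actual Solr patch,
// just the generic shape of the idea.
class LimitedRetry {
    static <T> T call(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();   // success: stop retrying
            } catch (Exception e) {
                last = e;           // e.g. an IOException from a stale connection
            }
        }
        throw last;                 // retry budget exhausted: surface the last failure
    }
}
```

Bounding the attempts is what distinguishes this from naively looping: a request that keeps failing for a real reason still surfaces its error instead of hanging the caller.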
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_20) - Build # 11697 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11697/ Java: 32bit/jdk1.8.0_20 -client -XX:+UseConcMarkSweepGC (asserts: false) 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestBadConfig Error Message: Suite timeout exceeded (= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (= 720 msec). at __randomizedtesting.SeedInfo.seed([3D72423A673AD680]:0) FAILED: org.apache.solr.core.TestBadConfig.testMissingScriptFile Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([3D72423A673AD680]:0) Build Log: [...truncated 10121 lines...] [junit4] Suite: org.apache.solr.core.TestBadConfig [junit4] 2 Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.core.TestBadConfig 3D72423A673AD680-001/init-core-data-001 [junit4] 2 9837 T42 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (false) [junit4] 2 9841 T42 oas.SolrTestCaseJ4.setUp ###Starting testMultipleIndexConfigs [junit4] 2 9842 T42 oas.SolrTestCaseJ4.initCore initCore [junit4] 2 9842 T42 oasc.SolrResourceLoader.init new SolrResourceLoader for directory: '/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/' [junit4] 2 9843 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/.svn/' to classloader [junit4] 2 9843 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/classes/' to classloader [junit4] 2 9844 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/README' to classloader [junit4] 2 9882 T42 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1 [junit4] 2 9884 T42 oas.SolrTestCaseJ4.deleteCore ###deleteCore [junit4] 2 9885 T42 oas.SolrTestCaseJ4.tearDown ###Ending testMultipleIndexConfigs [junit4] 2 9889 T42 oas.SolrTestCaseJ4.setUp ###Starting testUnsetSysProperty [junit4] 2 9890 T42 oas.SolrTestCaseJ4.initCore initCore [junit4] 2 9891 T42 oasc.SolrResourceLoader.init new SolrResourceLoader for directory: '/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/' [junit4] 2 9891 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/.svn/' to classloader [junit4] 2 9892 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/classes/' to classloader [junit4] 2 9892 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/README' to classloader [junit4] 2 9915 T42 oas.SolrTestCaseJ4.deleteCore ###deleteCore [junit4] 2 9915 T42 oas.SolrTestCaseJ4.tearDown ###Ending testUnsetSysProperty [junit4] 2 9921 T42 oas.SolrTestCaseJ4.setUp ###Starting testMultipleCFS [junit4] 2 9921 T42 oas.SolrTestCaseJ4.initCore initCore [junit4] 2 9922 T42 oasc.SolrResourceLoader.init new SolrResourceLoader for directory: '/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/' [junit4] 2 9922 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/.svn/' to classloader [junit4] 2 9923 T42 oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/classes/' to classloader [junit4] 2 9923 T42 
oasc.SolrResourceLoader.replaceClassLoader Adding 'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/lib/README' to classloader [junit4] 2 9943 T42 oasc.SolrConfig.refreshRequestParams current version of requestparams : -1 [junit4] 2 9946 T42 oas.SolrTestCaseJ4.deleteCore ###deleteCore [junit4] 2 9947 T42 oas.SolrTestCaseJ4.tearDown ###Ending testMultipleCFS [junit4] 2 9953 T42 oas.SolrTestCaseJ4.setUp ###Starting testMissingScriptFile [junit4] 2 10908 T42 oas.SolrTestCaseJ4.initCore initCore [junit4] 2 10909 T42 oasc.SolrResourceLoader.init new SolrResourceLoader for directory: '/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/' [junit4] 2 10910 T42 oasc.SolrResourceLoader.replaceClassLoader
[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions
[ https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Boon Low updated SOLR-6648: --- Attachment: SOLR-6648-v4.10.3.patch Patch updated in accordance with LUCENE-6149 and for v4.10.3. AnalyzingInfixLookupFactory always highlights suggestions Key: SOLR-6648 URL: https://issues.apache.org/jira/browse/SOLR-6648 Project: Solr Issue Type: Sub-task Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1 Reporter: Varun Thacker Assignee: Tomás Fernández Löbbe Labels: suggester Fix For: 5.0, Trunk Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch