[jira] [Updated] (SOLR-11198) downconfig downloads empty file as folder
[ https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-11198:
----------------------------------
    Attachment: SOLR-11198.patch

OK, false alarm. bin/solr, zkcli and ZkCLI all look like they go through ZkMaintenanceUtils, so the same fix should work for them all. The fix isn't even a one-liner in ZkMaintenanceUtils; there was a lot more work in the tests. Not unusual.

Anyway, I tested with bin/solr, zkcli.sh and ZkCLI (Mac), but I can't test it on Windows. Isabelle, if you're feeling brave, could you download the patch, compile it, give it a whirl, and let me know? BTW, the start scripts may be easier to use; try "bin/solr.cmd zk -help". The upconfig/downconfig commands are modeled closely on ZkCLI, but don't require all the classpath setup.

I have yet to run precommit and the full test suite, but certainly all the tests in SolrCLIZkUtilsTest run successfully.

> downconfig downloads empty file as folder
> -----------------------------------------
>
>                 Key: SOLR-11198
>                 URL: https://issues.apache.org/jira/browse/SOLR-11198
>             Project: Solr
>          Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
>    Affects Versions: 6.6
>        Environment: Windows 7
>           Reporter: Isabelle Giguere
>           Assignee: Erick Erickson
>           Priority: Minor
>        Attachments: SOLR-11198.patch
>
> With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file is empty, it is downloaded as a folder (on Windows, at least).
> A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, however, in ZK.
> Noticed because we keep an empty synonyms.txt file in the Solr config provided with our product, in case a client would want to use it.
> The workaround is simple, since the file allows comments: just add a comment, so it is not empty.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
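The bug boils down to a file-vs-directory decision made while copying a config tree out of ZooKeeper. The sketch below is a hypothetical illustration, not the SOLR-11198 patch: the class and method names are invented, and it stands in for the idea that a znode should map to a local directory only when it has child znodes, never merely because its data happens to be empty (as with the empty synonyms.txt in the report).

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the file-vs-directory decision (NOT the actual patch).
public class ZnodeDownloadSketch {

    /** Correct heuristic: a znode maps to a local directory only if it has children. */
    public static boolean isDirectory(byte[] data, List<String> children) {
        return children != null && !children.isEmpty();
    }

    /** The kind of heuristic that produces the reported bug: empty data mistaken for a directory. */
    public static boolean isDirectoryBuggy(byte[] data, List<String> children) {
        return data == null || data.length == 0;
    }

    public static void main(String[] args) {
        byte[] emptyFile = new byte[0];                   // e.g. an empty synonyms.txt znode
        List<String> noChildren = Collections.emptyList();

        // The empty file must NOT become a folder on disk:
        if (isDirectory(emptyFile, noChildren)) throw new AssertionError("fixed heuristic wrong");
        // The empty-data heuristic gets exactly this case wrong:
        if (!isDirectoryBuggy(emptyFile, noChildren)) throw new AssertionError("expected buggy result");
        System.out.println("ok");
    }
}
```

With the children-based test, the suggested workaround (adding a comment line so the file is non-empty) becomes unnecessary, since data length no longer influences the decision.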
Re: ant precommit failing due to the solr dev guide
Sorry, I only tested the previous fix on a small file. I've tried a different approach (removed the trailing ".*" in the problematic regex, and used lookingAt() instead of matches() with this regex), which passed all of "-validate-source-patterns" with all .adoc files. Please try again.

--
Steve
www.lucidworks.com

On Sat, Aug 5, 2017 at 8:28 PM, Karl Wright wrote:
> Hi Steve, it still did not work for me I'm afraid. Same exact problem after a "git pull" on master.
>
> Karl
>
> On Sat, Aug 5, 2017 at 6:55 PM, Steve Rowe wrote:
>> I was able to reproduce on Windows 10 after running "git config --global core.autocrlf true" and deleting all files except .git/, then running "git reset --hard".
>>
>> The issue appears to be that String.split("\n\r?") returns lines with trailing carriage returns, which causes regexes that expect to consume a whole line using a trailing ".*" to fail to match, since "." doesn't match a carriage return (without the DOTALL option).
>>
>> I committed a fix: I added a call to trim() on each line coming out of split. Worked for me.
>>
>> Karl, please let me know if this doesn't fix it for you.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>> On Aug 4, 2017, at 8:44 PM, Karl Wright wrote:
>>>
>>> I think you need to configure your git to checkout with native line endings too to make it happen.
>>>
>>> Karl
>>>
>>> On Fri, Aug 4, 2017 at 8:13 PM, Steve Rowe wrote:
>>> I have a Windows 10 box, I'll see if I can reproduce.
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>>> On Aug 4, 2017, at 5:02 AM, Uwe Schindler wrote:
>>>>
>>>> Hi,
>>>>
>>>> yes you're right: Jenkins and also my computer use Unix linefeeds. So I think Steve's script has a bug with newlines, although I think the regex is correct; maybe it's a side effect of another regex (I don't fully understand what the check should do!).
>>>>
>>>> Uwe
>>>>
>>>> -----
>>>> Uwe Schindler
>>>> Achterdiek 19, D-28357 Bremen
>>>> http://www.thetaphi.de
>>>> eMail: u...@thetaphi.de
>>>>
>>>> From: Karl Wright [mailto:daddy...@gmail.com]
>>>> Sent: Friday, August 4, 2017 12:20 AM
>>>> To: Lucene/Solr dev
>>>> Subject: Re: ant precommit failing due to the solr dev guide
>>>>
>>>> _144 also doesn't work for me.
>>>>
>>>> Looking at one of the .adoc files, the checkout has CR/LF at the end of the line, right after the "->" eg:
>>>>
>>>> Is your git configured to checkout in native format?
>>>>
>>>> Karl
>>>>
>>>> On Thu, Aug 3, 2017 at 5:42 PM, Karl Wright wrote:
>>>>> 1.8.0_45 didn't work either; downloading _144 now (will take a while).
>>>>>
>>>>> Karl
>>>>>
>>>>> On Thu, Aug 3, 2017 at 5:09 PM, Karl Wright wrote:
>>>>>> Thanks, I'll update.
>>>>>>
>>>>>> Karl
>>>>>>
>>>>>> On Thu, Aug 3, 2017 at 12:30 PM, Uwe Schindler wrote:
>>>>>>> Oh, I think I know: Java 8 update 5. Please update and try again. Such old versions had problems in String#split(); I don't remember exactly, but they were able to return some duplicate/empty tokens.
>>>>>>>
>>>>>>> Uwe
>>>>>>>
>>>>>>> -----
>>>>>>> Uwe Schindler
>>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>>> http://www.thetaphi.de
>>>>>>> eMail: u...@thetaphi.de
>>>>>>>
>>>>>>> From: Uwe Schindler [mailto:u...@thetaphi.de]
>>>>>>> Sent: Thursday, August 3, 2017 6:28 PM
>>>>>>> To: 'dev@lucene.apache.org'
>>>>>>> Subject: RE: ant precommit failing due to the solr dev guide
>>>>>>>
>>>>>>> I see no problems on windows jenkins and no problems on my local computer.
>>>>>>>
>>>>>>> Steve's script has a regex for matching newlines, but this one looks correct. Would it be possible to check how the newlines look in your *.adoc files (e.g., post a hexdump)?
>>>>>>>
>>>>>>> Uwe
>>>>>>>
>>>>>>> -----
>>>>>>> Uwe Schindler
>>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>>> http://www.thetaphi.de
>>>>>>> eMail: u...@thetaphi.de
>>>>>>>
>>>>>>> From: Uwe Schindler [mailto:u...@thetaphi.de]
>>>>>>> Sent: Thursday, August 3, 2017 5:33 PM
>>>>>>> To: dev@lucene.apache.org
>>>>>>> Subject: RE: ant precommit failing due to the solr dev guide
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Could be a newline issue in the groovy script… On windows some regex using \n or similar won't match….
>>>>>>>
>>>>>>> I will check on my system.
>>>>>>>
>>>>>>> -----
>>>>>>> Uwe Schindler
>>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>>> http://www.thetaphi.de
>>>>>>> eMail: u...@thetaphi.de
>>>>>>>
>>>>>>> From: Karl Wright [mailto:daddy...@gmail.com]
>>>>>>> Sent: Thursday, August 3, 2017 5:08 PM
>>>>>>> To: Lucene/Solr dev
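Steve's second fix can be demonstrated in isolation. The snippet below is a minimal sketch, not the real validate-source-patterns script: the pattern "== " is a stand-in for the problematic regex. It shows why matches() with a trailing ".*" fails on a line ending in a carriage return ("." does not match '\r' without DOTALL), while lookingAt() with the ".*" removed only needs a prefix match and is therefore unaffected.

```java
import java.util.regex.Pattern;

// Sketch of the lookingAt()-vs-matches() fix; "== " stands in for the real regex.
public class LookingAtVsMatches {

    /** Old approach: whole-input match, regex ends with ".*" to consume the line. */
    public static boolean wholeLineMatches(String line) {
        return Pattern.compile("== .*").matcher(line).matches();
    }

    /** New approach: drop the trailing ".*", require only a prefix match. */
    public static boolean prefixMatches(String line) {
        return Pattern.compile("== ").matcher(line).lookingAt();
    }

    public static void main(String[] args) {
        String line = "== Heading\r";  // what a CRLF checkout leaves after split("\n\r?")

        // matches() must consume the whole line, but "." never reaches the '\r':
        if (wholeLineMatches(line)) throw new AssertionError("expected failure on trailing \\r");
        // lookingAt() matches a prefix, so the stray '\r' is simply never examined:
        if (!prefixMatches(line)) throw new AssertionError("lookingAt should match the prefix");
        System.out.println("ok");
    }
}
```

This also explains why the earlier trim() fix and this regex change are interchangeable for this check: either remove the '\r' before matching, or stop requiring the match to reach it.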
[JENKINS-EA] Lucene-Solr-master-Windows (32bit/jdk-9-ea+178) - Build # 6805 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6805/
Java: 32bit/jdk-9-ea+178 -server -XX:+UseParallelGC --illegal-access=deny

1 tests failed.

FAILED: org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexReplicationClientTest_F00C98E97CB2DD2D-001\replicationClientTest-002\1: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexReplicationClientTest_F00C98E97CB2DD2D-001\replicationClientTest-002\1

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexReplicationClientTest_F00C98E97CB2DD2D-001\replicationClientTest-002\1: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexReplicationClientTest_F00C98E97CB2DD2D-001\replicationClientTest-002\1
        at __randomizedtesting.SeedInfo.seed([F00C98E97CB2DD2D:7F827F496EDE2ED2]:0)
        at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
        at org.apache.lucene.replicator.PerSessionDirectoryFactory.cleanupSession(PerSessionDirectoryFactory.java:58)
        at org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:259)
        at org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
        at org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions(IndexReplicationClientTest.java:218)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at
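The failure above is the classic Windows cleanup pattern: recursive deletion removes files first and directories last, so if any child file cannot be deleted (on Windows, usually because a handle is still open), deleting its parent then fails with DirectoryNotEmptyException. The sketch below is a minimal stand-in for that bottom-up delete, not Lucene's IOUtils.rm; the class name is invented.

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Minimal bottom-up recursive delete (NOT Lucene's IOUtils.rm).
public class RecursiveDelete {

    public static void deleteTree(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.delete(file);                   // fails on Windows if a handle is still open
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                Files.delete(dir);                    // DirectoryNotEmptyException if a child survived
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("rm-demo");
        Files.createDirectories(root.resolve("a").resolve("b"));
        Files.write(root.resolve("a").resolve("b").resolve("f.txt"), new byte[]{1, 2, 3});
        deleteTree(root);
        if (Files.exists(root)) throw new AssertionError("tree should be gone");
        System.out.println("ok");
    }
}
```

On POSIX filesystems an unlinked-but-open file disappears from the directory immediately, so this test passes; on Windows the delete of an open file is refused or deferred, which is why this failure shows up only on the Windows Jenkins box.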
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 95 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/95/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=12310, name=jetty-launcher-1871-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=12312, name=jetty-launcher-1871-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=12310, name=jetty-launcher-1871-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at
[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0
[ https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115590#comment-16115590 ]

Shalin Shekhar Mangar commented on SOLR-10821:
----------------------------------------------

Thanks Shawn!

> Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0
> ---------------------------------------------------------------------------------------
>
>                 Key: SOLR-10821
>                 URL: https://issues.apache.org/jira/browse/SOLR-10821
>             Project: Solr
>          Issue Type: Sub-task
>   Security Level: Public (Default Security Level. Issues are Public)
>      Components: documentation
>        Reporter: Shalin Shekhar Mangar
>        Assignee: Shalin Shekhar Mangar
>          Labels: autoscaling
>         Fix For: 7.0
>
> We need to document the following:
> # set-policy
> # set-cluster-preferences
> # set-cluster-policy
> # Autoscaling configuration read API
> # Autoscaling diagnostics API
> # policy and preference rule syntax
[GitHub] lucene-solr pull request #228: Branch 6 5
GitHub user AdityaParameshwara opened a pull request:

    https://github.com/apache/lucene-solr/pull/228

    Branch 6 5

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/lucene-solr branch_6_5

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/228.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #228

----
commit 1e4463e3a13c5ab71a8df7c48f5106ad38c25bbf
Author: Mike McCandless
Date:   2017-02-18T10:56:22Z

    add example of DrillSidways with range facets

commit 3e63a8dc075b72ecb31d5bc104e14b490af48db2
Author: Mike McCandless
Date:   2017-02-18T13:31:26Z

    add missing javadoc

commit 33845f73721c6090163ff869a669557350b8a233
Author: Cao Manh Dat
Date:   2017-02-18T23:57:27Z

    SOLR-9966: Convert/migrate tests using EasyMock to Mockito

commit e9d3bdd02ada23ad09bcc6fc7ff3661880dd45bc
Author: Ishan Chattopadhyaya
Date:   2017-01-26T01:23:13Z

    SOLR-5944: In-place updates of Numeric DocValues

    Conflicts:
        solr/CHANGES.txt
        solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
        solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
        solr/core/src/test-files/solr/collection1/conf/schema.xml
        solr/core/src/test/org/apache/solr/cloud/TestSegmentSorting.java

commit c392d760bd10812a74cc0c5b06ac7617d544743c
Author: Ishan Chattopadhyaya
Date:   2017-02-07T11:40:52Z

    SOLR-10079: Test fix for TestInPlaceUpdatesDistrib, using clearIndex()

commit 2cd35288ec7e19af93df6275e8d028de0777bd1e
Author: Ishan Chattopadhyaya
Date:   2017-02-19T01:42:28Z

    SOLR-10159: When DBQ is reordered with an in-place update, upon whose updated value the DBQ is based on, the DBQ fails due to excessive caching in DeleteByQueryWrapper

commit 4de140bf8b5e621ad70a07cb272ddc7135346eaf
Author: Ishan Chattopadhyaya
Date:   2017-02-19T02:20:01Z

    SOLR-5944: Use SolrCmdDistributor's synchronous mode for in-place updates

commit 476cea57e8e55832878c3b4c8efe1cf6f113b3c4
Author: Ishan Chattopadhyaya
Date:   2017-02-19T03:51:35Z

    SOLR-5944: Cleanup comments and logging, use NoMergePolicy instead of LogDocMergePolicy

    Conflicts:
        solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java

commit ad9195d757c298a241ef2488b4b17623a44afdd7
Author: yonik
Date:   2017-02-16T03:51:21Z

    SOLR-10114: add _version_ field to child documents, fix reordered-dbq to not drop child docs

commit 5c76710f08225d6909c96e584888bb6f036b4cfe
Author: yonik
Date:   2017-02-16T17:45:32Z

    SOLR-10114: test cleanup

commit 98133b21961c7c9672bcd85d2a2713e46f3242db
Author: yonik
Date:   2017-02-16T20:12:35Z

    SOLR-10114: fix flakey TestRecovery

commit 76ca4f07a17387f1839e42be9e8e581c94988c80
Author: Cao Manh Dat
Date:   2017-02-19T08:19:57Z

    SOLR-9966: Fix previous commit bug

commit 0689b2a7fd05c86c7fc2f1d1adffdd631d671ba1
Author: Ishan Chattopadhyaya
Date:   2017-02-19T08:33:32Z

    SOLR-5944: Suppress PointFields for TestSegmentSorting

commit 9a7a05d58d19d3f1051d75ef73ba144742cef934
Author: Ishan Chattopadhyaya
Date:   2017-02-19T08:33:44Z

    Merge branch 'branch_6x' of https://git-wip-us.apache.org/repos/asf/lucene-solr into branch_6x

commit 7ae5babe1119bef9fa0d6d02b2668a57ca12c21f
Author: Ishan Chattopadhyaya
Date:   2017-02-19T08:39:15Z

    Remove unused imports

commit 31211d088f9666439017f9a8cdcc5be7d132f751
Author: Cao Manh Dat
Date:   2017-02-19T09:52:32Z

    SOLR-9966: Fix ant precommit by removing cglib

commit 54f90d9f4333b6aade8fdb5bbd5b1a4863ba5d9f
Author: Cao Manh Dat
Date:   2017-02-19T09:54:04Z

    Merge branch 'branch_6x' of https://git-wip-us.apache.org/repos/asf/lucene-solr into branch_6x

commit 8a965def6619ab57cdb9418e4158e69479020428
Author: Uwe Schindler
Date:   2017-02-19T10:40:54Z

    SOLR-9966: Fix check-licenses precommit

commit ccd9733fa24d9c783b544e2cb4a0278ba064e3fc
Author: Uwe Schindler
Date:   2017-02-19T11:20:56Z

    SOLR-9966: Do test-ignore properly

commit f5b324ce6b4c912591d720184d724499575055f2
Author: Chris Hostetter
Date:   2017-02-20T01:09:35Z

    Fix (generic) return type w/o this change class won't compile using jdk9-ea157
    (cherry picked from commit 5fb94cee68b29bae394fa95753de555cf9ac10ff)

commit ea51810733da15ccbc526698742e659c19382fd3
Author: Christine Poerschke
Date:   2017-02-20T10:54:09Z

    SOLR-10142:
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_141) - Build # 209 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/209/
Java: 32bit/jdk1.8.0_141 -client -XX:+UseConcMarkSweepGC

1 tests failed.

FAILED: org.apache.lucene.search.suggest.document.TestSuggestField.testRealisticKeys

Error Message:
input automaton is too large: 1001

Stack Trace:
java.lang.IllegalArgumentException: input automaton is too large: 1001
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1298)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        at
Re: ant precommit failing due to the solr dev guide
Hi Steve, it still did not work for me I'm afraid. Same exact problem after a "git pull" on master.

Karl

On Sat, Aug 5, 2017 at 6:55 PM, Steve Rowe wrote:
> I was able to reproduce on Windows 10 after running "git config --global core.autocrlf true" and deleting all files except .git/, then running "git reset --hard".
>
> The issue appears to be that String.split("\n\r?") returns lines with trailing carriage returns, which causes regexes that expect to consume a whole line using a trailing ".*" to fail to match, since "." doesn't match a carriage return (without the DOTALL option).
>
> I committed a fix: I added a call to trim() on each line coming out of split. Worked for me.
>
> Karl, please let me know if this doesn't fix it for you.
>
> --
> Steve
> www.lucidworks.com
>
>> On Aug 4, 2017, at 8:44 PM, Karl Wright wrote:
>>
>> I think you need to configure your git to checkout with native line endings too to make it happen.
>>
>> Karl
>>
>> On Fri, Aug 4, 2017 at 8:13 PM, Steve Rowe wrote:
>> I have a Windows 10 box, I'll see if I can reproduce.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>> On Aug 4, 2017, at 5:02 AM, Uwe Schindler wrote:
>>>
>>> Hi,
>>>
>>> yes you're right: Jenkins and also my computer use Unix linefeeds. So I think Steve's script has a bug with newlines, although I think the regex is correct; maybe it's a side effect of another regex (I don't fully understand what the check should do!).
>>>
>>> Uwe
>>>
>>> -----
>>> Uwe Schindler
>>> Achterdiek 19, D-28357 Bremen
>>> http://www.thetaphi.de
>>> eMail: u...@thetaphi.de
>>>
>>> From: Karl Wright [mailto:daddy...@gmail.com]
>>> Sent: Friday, August 4, 2017 12:20 AM
>>> To: Lucene/Solr dev
>>> Subject: Re: ant precommit failing due to the solr dev guide
>>>
>>> _144 also doesn't work for me.
>>>
>>> Looking at one of the .adoc files, the checkout has CR/LF at the end of the line, right after the "->" eg:
>>>
>>> Is your git configured to checkout in native format?
>>>
>>> Karl
>>>
>>> On Thu, Aug 3, 2017 at 5:42 PM, Karl Wright wrote:
>>>> 1.8.0_45 didn't work either; downloading _144 now (will take a while).
>>>>
>>>> Karl
>>>>
>>>> On Thu, Aug 3, 2017 at 5:09 PM, Karl Wright wrote:
>>>>> Thanks, I'll update.
>>>>>
>>>>> Karl
>>>>>
>>>>> On Thu, Aug 3, 2017 at 12:30 PM, Uwe Schindler wrote:
>>>>>> Oh, I think I know: Java 8 update 5. Please update and try again. Such old versions had problems in String#split(); I don't remember exactly, but they were able to return some duplicate/empty tokens.
>>>>>>
>>>>>> Uwe
>>>>>>
>>>>>> -----
>>>>>> Uwe Schindler
>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>> http://www.thetaphi.de
>>>>>> eMail: u...@thetaphi.de
>>>>>>
>>>>>> From: Uwe Schindler [mailto:u...@thetaphi.de]
>>>>>> Sent: Thursday, August 3, 2017 6:28 PM
>>>>>> To: 'dev@lucene.apache.org'
>>>>>> Subject: RE: ant precommit failing due to the solr dev guide
>>>>>>
>>>>>> I see no problems on windows jenkins and no problems on my local computer.
>>>>>>
>>>>>> Steve's script has a regex for matching newlines, but this one looks correct. Would it be possible to check how the newlines look in your *.adoc files (e.g., post a hexdump)?
>>>>>>
>>>>>> Uwe
>>>>>>
>>>>>> -----
>>>>>> Uwe Schindler
>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>> http://www.thetaphi.de
>>>>>> eMail: u...@thetaphi.de
>>>>>>
>>>>>> From: Uwe Schindler [mailto:u...@thetaphi.de]
>>>>>> Sent: Thursday, August 3, 2017 5:33 PM
>>>>>> To: dev@lucene.apache.org
>>>>>> Subject: RE: ant precommit failing due to the solr dev guide
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Could be a newline issue in the groovy script… On windows some regex using \n or similar won't match….
>>>>>>
>>>>>> I will check on my system.
>>>>>>
>>>>>> -----
>>>>>> Uwe Schindler
>>>>>> Achterdiek 19, D-28357 Bremen
>>>>>> http://www.thetaphi.de
>>>>>> eMail: u...@thetaphi.de
>>>>>>
>>>>>> From: Karl Wright [mailto:daddy...@gmail.com]
>>>>>> Sent: Thursday, August 3, 2017 5:08 PM
>>>>>> To: Lucene/Solr dev
>>>>>> Subject: Re: ant precommit failing due to the solr dev guide
>>>>>>
>>>>>> Sure -- this is Windows 10, an older JDK 8: C:\Program Files\Java\jdk1.8.0_05
>>>>>>
>>>>>> Anything else you are interested in?
>>>>>>
>>>>>> Karl
>>>>>>
>>>>>> On Thu, Aug 3, 2017 at 11:04 AM, Steve Rowe wrote:
>>>>>>> Hi Karl,
>>>>>>>
>>>>>>> I looked at a couple of the errors, and they were all in "[source]" sections, which should be exempted from the "unescaped symbol" check, which is
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+178) - Build # 20266 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20266/
Java: 32bit/jdk-9-ea+178 -client -XX:+UseConcMarkSweepGC --illegal-access=deny

1 tests failed.

FAILED: org.apache.lucene.search.suggest.analyzing.AnalyzingSuggesterTest.testRandomRealisticKeys

Error Message:
input automaton is too large: 1001

Stack Trace:
java.lang.IllegalArgumentException: input automaton is too large: 1001
        at __randomizedtesting.SeedInfo.seed([25553F0906EA672F:A791C99CD35393E4]:0)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1298)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        [... the Operations.java:1306 frame repeats dozens more times; trace truncated ...]
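The failure above is Operations.topoSortStatesRecurse blowing past its recursion cap on a 1001-state automaton: every extra state adds a stack frame. As a hedged illustration of the general escape from that kind of limit (not Lucene's actual code; all names here are invented for the sketch), a topological sort can use an explicit stack so automaton size never translates into call-stack depth:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeTopoSort {
  // Returns states in topological order for a DAG given as adjacency lists.
  // Uses an explicit stack instead of recursion, so a long chain of states
  // (e.g. 1001 of them) cannot overflow or trip a recursion-depth limit.
  static int[] topoSort(int[][] adj) {
    int n = adj.length;
    int[] order = new int[n];
    int idx = n;                          // fill from the back: reverse post-order
    boolean[] visited = new boolean[n];
    for (int root = 0; root < n; root++) {
      if (visited[root]) continue;
      Deque<int[]> stack = new ArrayDeque<>(); // each frame: {state, next-child index}
      stack.push(new int[] {root, 0});
      visited[root] = true;
      while (!stack.isEmpty()) {
        int[] frame = stack.peek();
        int state = frame[0];
        if (frame[1] < adj[state].length) {
          int child = adj[state][frame[1]++];
          if (!visited[child]) {
            visited[child] = true;
            stack.push(new int[] {child, 0});
          }
        } else {
          stack.pop();
          order[--idx] = state;           // emit at post-order position
        }
      }
    }
    return order;
  }

  public static void main(String[] args) {
    // A 1001-state chain: 0 -> 1 -> 2 -> ... -> 1000
    int n = 1001;
    int[][] adj = new int[n][];
    for (int i = 0; i < n; i++) {
      adj[i] = (i < n - 1) ? new int[] {i + 1} : new int[0];
    }
    int[] order = topoSort(adj);
    for (int i = 0; i < n; i++) {
      if (order[i] != i) throw new AssertionError("bad order at " + i);
    }
    System.out.println("sorted " + n + " states iteratively");
  }
}
```

Reverse post-order of a depth-first traversal is a topological order for any DAG, which is why filling `order` from the back works.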
[JENKINS-EA] Lucene-Solr-7.x-Windows (32bit/jdk-9-ea+178) - Build # 98 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/98/
Java: 32bit/jdk-9-ea+178 -server -XX:+UseSerialGC --illegal-access=deny

1 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.common.util.TestNamedListCodec

Error Message:
The test or suite printed 10306 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 10306 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
        at __randomizedtesting.SeedInfo.seed([795AC7F8AC81103A]:0)
        at org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
        at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)

Build Log:
[...truncated 13720 lines...]
   [junit4] Suite: org.apache.solr.common.util.TestNamedListCodec
   [junit4]   2> SLF4J: Class path contains multiple SLF4J bindings.
   [junit4]   2> SLF4J: Found binding in [jar:file:/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/solrj/test-lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   [junit4]   2> SLF4J: Found binding in [jar:file:/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/core/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   [junit4]   2> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
   [junit4]   2> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
   [junit4]   2> 0 INFO  (TEST-TestNamedListCodec.testRandom-seed#[795AC7F8AC81103A]) [    ] o.a.s.u.Java9InitHack Adding temporary workaround for Hadoop's Shell class to allow running on Java 9 (please ignore any warnings/failures).
   [junit4]   2> 14 ERROR (TEST-TestNamedListCodec.testRandom-seed#[795AC7F8AC81103A]) [    ] o.a.h.u.Shell Failed to locate the winutils binary in the hadoop binary path
   [junit4]   2> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
   [junit4]   2>         at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
   [junit4]   2>         at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
   [junit4]   2>         at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
   [junit4]   2>         at java.base/java.lang.Class.forName0(Native Method)
   [junit4]   2>         at java.base/java.lang.Class.forName(Class.java:292)
   [junit4]   2>         at org.apache.solr.util.Java9InitHack.initPrivileged(Java9InitHack.java:65)
   [junit4]   2>         at java.base/java.security.AccessController.doPrivileged(Native Method)
   [junit4]   2>         at org.apache.solr.util.Java9InitHack.initJava9(Java9InitHack.java:55)
   [junit4]   2>         at org.apache.solr.SolrTestCaseJ4.<clinit>(SolrTestCaseJ4.java:168)
   [junit4]   2>         at org.apache.solr.common.util.TestNamedListCodec.testRandom(TestNamedListCodec.java:269)
   [junit4]   2>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]   2>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]   2>         at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
   [junit4]   2>         at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
   [junit4]   2>         at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
   [junit4]   2>         at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
   [junit4]   2>         [... trace truncated ...]
Re: ant precommit failing due to the solr dev guide
I was able to reproduce on Windows 10 after running "git config --global core.autocrlf true" and deleting all files except .git/, then running "git reset --hard".

The issue appears to be that String.split("\n\r?") returns lines with trailing carriage returns, which causes regexes that expect to consume a whole line using a trailing ".*" to fail to match, since "." doesn't match a carriage return (without the DOTALL option).

I committed a fix that adds a call to trim() on each line coming out of split(). Worked for me. Karl, please let me know if this doesn't fix it for you.

--
Steve
www.lucidworks.com

> On Aug 4, 2017, at 8:44 PM, Karl Wright wrote:
>
> I think you need to configure your git to checkout with native line endings too to make it happen.
>
> Karl
>
> On Fri, Aug 4, 2017 at 8:13 PM, Steve Rowe wrote:
> I have a Windows 10 box, I'll see if I can reproduce.
>
> --
> Steve
> www.lucidworks.com
>
> > On Aug 4, 2017, at 5:02 AM, Uwe Schindler wrote:
> >
> > Hi,
> >
> > yes, you're right: Jenkins and also my computer use Unix linefeeds. So I think Steve's script has a bug with newlines; although I think the regex is correct, maybe it's a side effect of another regex (I don't fully understand what the check should do!).
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > Achterdiek 19, D-28357 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> > From: Karl Wright [mailto:daddy...@gmail.com]
> > Sent: Friday, August 4, 2017 12:20 AM
> > To: Lucene/Solr dev
> > Subject: Re: ant precommit failing due to the solr dev guide
> >
> > _144 also doesn't work for me.
> >
> > Looking at one of the .adoc files, the checkout has CR/LF at the end of the line, right after the "->", e.g.:
> >
> >
> >
> > Is your git configured to checkout in native format?
> >
> > Karl
> >
> > On Thu, Aug 3, 2017 at 5:42 PM, Karl Wright wrote:
> >> 1.8.0_45 didn't work either; downloading _144 now (will take a while).
> >> > >> Karl > >> > >> > >> On Thu, Aug 3, 2017 at 5:09 PM, Karl Wright wrote: > >>> Thanks, I'll update. > >>> > >>> Karl > >>> > >>> > >>> On Thu, Aug 3, 2017 at 12:30 PM, Uwe Schindler wrote: > Oh, I think I know: > Java 8 update 5: Please update and try again. Such old versions had > problems in String#split(), I don’t exactly remember but they were able > to return some duplicate/empty tokens. > > Uwe > > - > Uwe Schindler > Achterdiek 19, D-28357 Bremen > http://www.thetaphi.de > eMail: u...@thetaphi.de > > From: Uwe Schindler [mailto:u...@thetaphi.de] > Sent: Thursday, August 3, 2017 6:28 PM > To: 'dev@lucene.apache.org' > Subject: RE: ant precommit failing due to the solr dev guide > > I see no problems on windows jenkins and no problems on my local > computer. > > Steve’s script has a regex for matching newlines, but this one looks > correct. Would it be possible to check, how the newlines look like on > your *.adoc files (e.g., post a hexdump)? > > Uwe > > - > Uwe Schindler > Achterdiek 19, D-28357 Bremen > http://www.thetaphi.de > eMail: u...@thetaphi.de > > From: Uwe Schindler [mailto:u...@thetaphi.de] > Sent: Thursday, August 3, 2017 5:33 PM > To: dev@lucene.apache.org > Subject: RE: ant precommit failing due to the solr dev guide > > Hi, > > Could be an newline issue in the groovy script… On windows some regex > using \n or similar won’t match…. > > I will check on my system. > > - > Uwe Schindler > Achterdiek 19, D-28357 Bremen > http://www.thetaphi.de > eMail: u...@thetaphi.de > > From: Karl Wright [mailto:daddy...@gmail.com] > Sent: Thursday, August 3, 2017 5:08 PM > To: Lucene/Solr dev > Subject: Re: ant precommit failing due to the solr dev guide > > Sure -- this is Windows 10, an older JDK 8: C:\Program > Files\Java\jdk1.8.0_05 > > Anything else you are interested in? 
> > Karl > > > On Thu, Aug 3, 2017 at 11:04 AM, Steve Rowe wrote: > > Hi Karl, > > > > I looked at a couple of the errors, and they were all in "[source]" > > sections, which should be exempted from the “unescaped symbol” check, > > which is performed in the "-validate-source-patterns” target in the > > top-level build.xml. The groovy method > > “checkForUnescapedSymbolSubstitutions” is where this “[source]” section > > exemption is supposed to happen. > > > > The “-validate-source-patterns” target depends on “resolve-groovy”, > > which pins the version, so I don’t think this
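Steve's diagnosis and fix above can be reproduced in isolation. The build check is a Groovy script, but the string semantics are Java's; this minimal sketch (not the actual build script) shows the trailing carriage return left behind by split("\n\r?"), the resulting regex mismatch, and how trim() repairs it:

```java
import java.util.regex.Pattern;

public class CrlfSplitDemo {
  public static void main(String[] args) {
    // A CRLF-terminated file as checked out with core.autocrlf=true
    String crlfText = "some [source] line\r\nnext line\r\n";

    // Splitting on "\n\r?" does not strip the '\r' that PRECEDES each '\n'
    // in CRLF files, so every line keeps a trailing carriage return.
    String[] lines = crlfText.split("\n\r?");
    System.out.println(lines[0].endsWith("\r"));   // true

    // "." does not match '\r' (a line terminator) without DOTALL, so a
    // regex that consumes the rest of the line with ".*" fails to match.
    Pattern wholeLine = Pattern.compile("some .*");
    System.out.println(wholeLine.matcher(lines[0]).matches());          // false
    System.out.println(wholeLine.matcher(lines[0].trim()).matches());   // true: trim() is the committed fix
  }
}
```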
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-9-ea+178) - Build # 6804 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6804/
Java: 64bit/jdk-9-ea+178 -XX:-UseCompressedOops -XX:+UseParallelGC --illegal-access=deny

1 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.update.DataDrivenBlockJoinTest

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001\init-core-data-001
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001\init-core-data-001
   C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.update.DataDrivenBlockJoinTest_FABCEB6E476F67D7-001
        at __randomizedtesting.SeedInfo.seed([FABCEB6E476F67D7]:0)
        at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
        at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
        at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)

Build Log:
[...truncated 11 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from git://git.apache.org/lucene-solr.git
        at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:817)
        at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1084)
        at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1115)
        at hudson.scm.SCM.checkout(SCM.java:495)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:1212)
        at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:560)
        at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
        at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:485)
        at hudson.model.Run.execute(Run.java:1735)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:415)
Caused by: hudson.plugins.git.GitException: org.eclipse.jgit.api.errors.TransportException: git://git.apache.org/lucene-solr.git: Connection refused: connect
        at org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:624)
        at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
        at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
        at hudson.remoting.UserRequest.perform(UserRequest.java:181)
        at hudson.remoting.UserRequest.perform(UserRequest.java:52)
        at hudson.remoting.Request$2.run(Request.java:336)
        at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
        at java.util.concurrent.FutureTask.run(Unknown
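The AccessDeniedException during temp-dir cleanup above is the familiar Windows pattern: something (antivirus, an indexer, a not-yet-closed handle) still holds the file when the test rule tries to remove it. One common mitigation, sketched here as a hypothetical standalone helper and not what Lucene's IOUtils.rm actually does, is to retry the delete briefly before giving up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RetryingDelete {
  /**
   * Tries to delete a file, retrying a few times with a short pause.
   * On Windows, deletion can fail transiently with AccessDeniedException
   * while another process still holds the file open.
   */
  static void deleteWithRetry(Path path, int attempts, long pauseMillis)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        Files.deleteIfExists(path);
        return;                       // success (or the file is already gone)
      } catch (IOException e) {       // includes AccessDeniedException
        last = e;
        Thread.sleep(pauseMillis);    // give the other holder time to let go
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    Path tmp = Files.createTempFile("retry-demo", ".txt");
    deleteWithRetry(tmp, 5, 50);
    System.out.println("deleted: " + !Files.exists(tmp));  // deleted: true
  }
}
```

Retrying only papers over the race, of course; the durable fix is closing the offending handle, which is why these failures are usually tracked down rather than retried forever.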
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_141) - Build # 208 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/208/
Java: 32bit/jdk1.8.0_141 -client -XX:+UseParallelGC

1 tests failed.

FAILED: org.apache.lucene.search.suggest.document.TestSuggestField.testRealisticKeys

Error Message:
input automaton is too large: 1001

Stack Trace:
java.lang.IllegalArgumentException: input automaton is too large: 1001
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1298)
        at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
        [... the Operations.java:1306 frame repeats dozens more times; trace truncated ...]
[jira] [Commented] (LUCENE-7918) Give access to members of a composite shape
[ https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115488#comment-16115488 ]

Karl Wright commented on LUCENE-7918:
-

[~ivera], didn't pass documentation-lint. This is what it said:

{code}
 [exec] file:///build/docs/spatial3d/org/apache/lucene/spatial3d/geom/GeoBaseCompositeAreaShape.html
 [exec]   BROKEN LINK: file:///build/docs/core/org/apache/lucene/spatial3d.geom.GeoBaseCompositeShape.html
 [exec]   [... the same BROKEN LINK line appears six times for this file ...]
 [exec]
 [exec] file:///build/docs/spatial3d/org/apache/lucene/spatial3d/geom/GeoCompositePolygon.html
 [exec]   BROKEN LINK: file:///build/docs/core/org/apache/lucene/spatial3d.geom.GeoBaseCompositeShape.html
 [exec]   [... the same BROKEN LINK line appears six times for this file ...]
 [exec]
 [exec] file:///build/docs/spatial3d/org/apache/lucene/spatial3d/geom/GeoCompositeAreaShape.html
 [exec]   BROKEN LINK: file:///build/docs/core/org/apache/lucene/spatial3d.geom.GeoBaseCompositeShape.html
 [exec]   [... the same BROKEN LINK line appears six times for this file ...]
 [exec]
 [exec] file:///build/docs/spatial3d/org/apache/lucene/spatial3d/geom/GeoCompositeMembershipShape.html
 [exec]   BROKEN LINK: file:///build/docs/core/org/apache/lucene/spatial3d.geom.GeoBaseCompositeShape.html
 [exec]   [... the same BROKEN LINK line appears six times for this file ...]
 [exec]
 [exec] Broken javadocs links were found! Common root causes:
 [exec] * A typo of some sort for manually created links.
 [exec] * Public methods referencing non-public classes in their signature.
{code}

Can you fix and resubmit your patch?

> Give access to members of a composite shape
> -
>
>                 Key: LUCENE-7918
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7918
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: modules/spatial3d
>            Reporter: Ignacio Vera
>            Assignee: Karl Wright
>         Attachments: LUCENE-7918.patch
>
>
> Hi [~daddywri],
> I hope this is my last point in my wish list. In order to serialize objects I need to access the members of a composite geoshape. This is currently not possible, so I was wondering if it is possible to add two more methods to the class GeoCompositeMembershipShape:
> public int size()
> public GeoMembershipShape getShape(int index)
> Thanks,
> Ignacio

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
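For what it's worth, the broken links above all have the shape org/apache/lucene/spatial3d.geom.GeoBaseCompositeShape.html: the last package segments kept their dots instead of becoming path separators (and the link points into the core docs rather than spatial3d). A well-formed javadoc page path is just the fully-qualified class name with every dot turned into a slash, as this tiny, purely illustrative helper shows (it is not part of the build's link checker):

```java
public class JavadocPath {
  // Maps a fully-qualified class name to the relative path javadoc uses
  // for its generated page: every package dot becomes a directory separator.
  static String pageFor(String fqcn) {
    return fqcn.replace('.', '/') + ".html";
  }

  public static void main(String[] args) {
    // The correct page path for the class the broken links try to reach:
    System.out.println(pageFor("org.apache.lucene.spatial3d.geom.GeoBaseCompositeShape"));
    // org/apache/lucene/spatial3d/geom/GeoBaseCompositeShape.html
  }
}
```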
[jira] [Commented] (LUCENE-7918) Give access to members of a composite shape
[ https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115482#comment-16115482 ]

Karl Wright commented on LUCENE-7918:
-

I'm also seeing build warnings compiling this code, e.g.:

{code}
[javac] C:\wipgit\lucene4\lucene-solr\lucene\spatial3d\src\test\org\apache\lucene\spatial3d\geom\GeoPolygonTest.java:978: warning: [cast] redundant cast to GeoPolygon
[javac]     GeoPolygon polygon = (GeoPolygon)((GeoCompositePolygon)GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points)).shapes.get(0);
[javac]                          ^
[javac] C:\wipgit\lucene4\lucene-solr\lucene\spatial3d\src\test\org\apache\lucene\spatial3d\geom\GeoPolygonTest.java:997: warning: [cast] redundant cast to GeoPolygon
[javac]     GeoPolygon polygon = (GeoPolygon)((GeoCompositePolygon)GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points, Collections.singletonList(hole))).shapes.get(0);
[javac]                          ^
[javac] C:\wipgit\lucene4\lucene-solr\lucene\spatial3d\src\test\org\apache\lucene\spatial3d\geom\GeoPolygonTest.java:1009: warning: [cast] redundant cast to GeoPolygon
[javac]     GeoPolygon polygon = (GeoPolygon)((GeoCompositePolygon)GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points)).shapes.get(0);
[javac]                          ^
[javac] C:\wipgit\lucene4\lucene-solr\lucene\spatial3d\src\test\org\apache\lucene\spatial3d\geom\GeoPolygonTest.java:1028: warning: [cast] redundant cast to GeoPolygon
[javac]     GeoPolygon polygon = (GeoPolygon)((GeoCompositePolygon)GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points, Collections.singletonList(hole))).shapes.get(0);
[javac]                          ^
[javac] 4 warnings
{code}

I'll fix those if it passes precommit.
[jira] [Commented] (LUCENE-7918) Give access to members of a composite shape
[ https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115481#comment-16115481 ]

Karl Wright commented on LUCENE-7918:
-

[~ivera], I've been looking at the build.xml code for the top-level project. I think we can do a fair approximation of "ant precommit" on the lucene side of the tree by doing the following:

{code}
cd lucene
ant documentation-lint validate
{code}

This works for me generally. Would you like to give it a try on your new code? I will give it a try tomorrow if I don't hear from you. Thanks!
[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0
[ https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115479#comment-16115479 ] Shawn Heisey commented on SOLR-10821: - Looks like the new docs haven't been backported to 7x or 7_0 yet, so I haven't committed anything there. > Write documentation for the autoscaling APIs and policy/preferences syntax > for Solr 7.0 > --- > > Key: SOLR-10821 > URL: https://issues.apache.org/jira/browse/SOLR-10821 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: autoscaling > Fix For: 7.0 > > > We need to document the following: > # set-policy > # set-cluster-preferences > # set-cluster-policy > # Autoscaling configuration read API > # Autoscaling diagnostics API > # policy and preference rule syntax -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0
[ https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115477#comment-16115477 ]

ASF subversion and git services commented on SOLR-10821:

Commit 3e7adf4cdb72de155d92924ee91ac862e932f3a7 in lucene-solr's branch refs/heads/master from [~elyograg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e7adf4 ]

SOLR-10821: fix precommit on new ref guide content - change tabs to spaces
[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0
[ https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115476#comment-16115476 ] Shawn Heisey commented on SOLR-10821: - Noticed that the new solrcloud-autoscaling-api doc file failed precommit because it contains tabs. Will push the trivial fix. > Write documentation for the autoscaling APIs and policy/preferences syntax > for Solr 7.0 > --- > > Key: SOLR-10821 > URL: https://issues.apache.org/jira/browse/SOLR-10821 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Labels: autoscaling > Fix For: 7.0 > > > We need to document the following: > # set-policy > # set-cluster-preferences > # set-cluster-policy > # Autoscaling configuration read API > # Autoscaling diagnostics API > # policy and preference rule syntax -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11198) downconfig downloads empty file as folder
[ https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115470#comment-16115470 ] Erick Erickson edited comment on SOLR-11198 at 8/5/17 5:43 PM: --- WHOA! I got all confused between Isabelle's original comment and Cassandra's: Isabelle uses ZkCLI and Cassandra uses bin/solr. I think the problem may be common to both, but my comments below are about the bin/solr version. I think I see the problem, and this particular issue should be fixed, i.e. empty znodes with no children should be files locally rather than directories. I don't think I want to try anything fancier. If a user copies down from ZK and the ZK state changes (i.e. a node with data gets children, etc.), then erroring out and recommending that they use a clean local directory seems reasonable. The rest of this is details you can read through if you want to, or if you wonder what the heck the second point means. I think the problem is in ZkMaintenanceUtils.downloadFromZk. This code:
{code}
if (children.size() == 0) {
  // If we didn't copy data down, then we also didn't create the file. But we still need
  // a marker on the local disk, so create a dir.
  if (copyDataDown(zkClient, zkPath, file.toFile()) == 0) {
    Files.createDirectories(file);
  }
}
{code}
copyDataDown returns, well, zero if there's no data in the znode. I changed this code a while back because recursive copy wasn't working correctly (SOLR-10108), and I suspect the bug was introduced then. It went into Solr 6.6, so it fits Isabelle's testing, including the bit about how putting a comment in the synonyms.txt file makes it a file rather than a directory. Znodes can have both data and children. The logic says, in effect, "if the znode has no data and no children, it'll be mapped to an empty directory". There's also logic such that if a znode has both data and children, it's made into a directory with a special file containing its data (zknode.data). The simple fix would be just to decide the other way, i.e.
a znode with no data and no children would become an empty file locally rather than a directory. That actually seems OK, since I don't think ZK cares. The more I think about this, the more I think the correct behavior is the easy fix above. ZK doesn't care; a znode can have data and children added at will, so if we make the local node an empty text file it'd be copied back up as a znode just like any other, albeit one without data or children, but that's OK as far as ZK is concerned. That doesn't preclude someone adding children to the ZK node after pushing it back up via another mechanism. There'll still be an edge case where:
- someone copies an empty znode from ZK and it becomes a file
- the ZK node gets children
- the person tries the copy again from ZK to local
Since the copy down already has a text file for that znode and then tries to make a directory there, it'll probably error out. At least it had better. I think we can live with that; it would be a good thing to add a test, though. I'm torn about whether to try to "do the right thing" in the case above. First, it appears to be too much of an edge case. One could write something like "if the local file is zero length, remove the file and create a directory in its place", which seems relatively safe. Except that doesn't handle the case where:
- a znode exists with data
- downconfig copies it locally as a text file
- the znode gets children through some other mechanism
- downconfig is run again
In that case "the right thing" would be to move the data into zknode.data, make the local node into a directory, and continue. We could probably do this "en passant" by just deleting the text file and continuing; the directory would be created in its place and children would be added. Deleting data on the local disk makes me nervous, though. I'll assign it to myself; if anyone else wants to take it, feel free. was (Author: erickerickson): I think I see the problem, and this particular issue should be fixed, i.e.
empty znodes with no children should be files locally rather than directories. I don't think I want to try anything fancier. If a user copies down from ZK and the ZK state changes (i.e. a node with data gets children, etc.), then erroring out and recommending that they use a clean local directory seems reasonable. The rest of this is details you can read through if you want to, or if you wonder what the heck the second point means. I think the problem is in ZkMaintenanceUtils.downloadFromZk. This code:
{code}
if (children.size() == 0) {
  // If we didn't copy data down, then we also didn't create the file. But we still need
  // a marker on the local disk, so create a dir.
  if (copyDataDown(zkClient, zkPath, file.toFile()) == 0) {
    Files.createDirectories(file);
  }
}
{code}
copyDataDown
[jira] [Commented] (SOLR-11198) downconfig downloads empty file as folder
[ https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115470#comment-16115470 ] Erick Erickson commented on SOLR-11198: --- I think I see the problem, and this particular issue should be fixed, i.e. empty znodes with no children should be files locally rather than directories. I don't think I want to try anything fancier. If a user copies down from ZK and the ZK state changes (i.e. a node with data gets children, etc.), then erroring out and recommending that they use a clean local directory seems reasonable. The rest of this is details you can read through if you want to, or if you wonder what the heck the second point means. I think the problem is in ZkMaintenanceUtils.downloadFromZk. This code:
{code}
if (children.size() == 0) {
  // If we didn't copy data down, then we also didn't create the file. But we still need
  // a marker on the local disk, so create a dir.
  if (copyDataDown(zkClient, zkPath, file.toFile()) == 0) {
    Files.createDirectories(file);
  }
}
{code}
copyDataDown returns, well, zero if there's no data in the znode. I changed this code a while back because recursive copy wasn't working correctly (SOLR-10108), and I suspect the bug was introduced then. It went into Solr 6.6, so it fits Isabelle's testing, including the bit about how putting a comment in the synonyms.txt file makes it a file rather than a directory. Znodes can have both data and children. The logic says, in effect, "if the znode has no data and no children, it'll be mapped to an empty directory". There's also logic such that if a znode has both data and children, it's made into a directory with a special file containing its data (zknode.data). The simple fix would be just to decide the other way, i.e. a znode with no data and no children would become an empty file locally rather than a directory. That actually seems OK, since I don't think ZK cares. The more I think about this, the more I think the correct behavior is the easy fix above.
ZK doesn't care; a znode can have data and children added at will, so if we make the local node an empty text file it'd be copied back up as a znode just like any other, albeit one without data or children, but that's OK as far as ZK is concerned. That doesn't preclude someone adding children to the ZK node after pushing it back up via another mechanism. There'll still be an edge case where:
- someone copies an empty znode from ZK and it becomes a file
- the ZK node gets children
- the person tries the copy again from ZK to local
Since the copy down already has a text file for that znode and then tries to make a directory there, it'll probably error out. At least it had better. I think we can live with that; it would be a good thing to add a test, though. I'm torn about whether to try to "do the right thing" in the case above. First, it appears to be too much of an edge case. One could write something like "if the local file is zero length, remove the file and create a directory in its place", which seems relatively safe. Except that doesn't handle the case where:
- a znode exists with data
- downconfig copies it locally as a text file
- the znode gets children through some other mechanism
- downconfig is run again
In that case "the right thing" would be to move the data into zknode.data, make the local node into a directory, and continue. We could probably do this "en passant" by just deleting the text file and continuing; the directory would be created in its place and children would be added. Deleting data on the local disk makes me nervous, though. I'll assign it to myself; if anyone else wants to take it, feel free. > downconfig downloads empty file as folder > - > > Key: SOLR-11198 > URL: https://issues.apache.org/jira/browse/SOLR-11198 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level.
Issues are Public) >Affects Versions: 6.6 > Environment: Windows 7 >Reporter: Isabelle Giguere >Priority: Minor > > With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file > is empty, it is downloaded as a folder (on Windows, at least). > A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, > however, in ZK. > Noticed because we keep an empty synonyms.txt file in the Solr config > provided with our product, in case a client would want to use it. > The workaround is simple, since the file allows comments: just add a comment, > so it is not empty. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
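To make the proposed mapping concrete, here is a minimal, self-contained sketch of the behavior described in the comment above. This is a hypothetical helper using plain java.nio, not the actual ZkMaintenanceUtils patch: a leaf znode always becomes a local file, even when its data is empty, and only znodes with children become directories.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the proposed downloadFromZk behavior, NOT the committed
// patch: writeLocalNode() decides what a single znode becomes on local disk.
public class DownconfigSketch {

    // data may be empty; childCount is the number of child znodes.
    static void writeLocalNode(Path local, byte[] data, int childCount) throws IOException {
        if (childCount > 0) {
            // A znode with children maps to a directory; if it also had data,
            // that data would go into a special zknode.data file inside it.
            Files.createDirectories(local);
        } else {
            // Leaf znode: always a file, even with zero-length data. This is the
            // fix -- the old code created a directory whenever copyDataDown()
            // reported zero bytes copied.
            Files.createDirectories(local.getParent());
            Files.write(local, data);
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("downconfig-sketch");
        writeLocalNode(root.resolve("conf"), new byte[0], 2);               // has children -> dir
        writeLocalNode(root.resolve("conf/synonyms.txt"), new byte[0], 0);  // empty leaf -> file
        System.out.println(Files.isDirectory(root.resolve("conf")));
        System.out.println(Files.isRegularFile(root.resolve("conf/synonyms.txt")));
    }
}
```

Run as written, this prints "true" twice: the empty synonyms.txt comes down as a zero-length file rather than a folder, which is exactly the case Isabelle reported.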
[jira] [Assigned] (SOLR-11198) downconfig downloads empty file as folder
[ https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-11198: - Assignee: Erick Erickson > downconfig downloads empty file as folder > - > > Key: SOLR-11198 > URL: https://issues.apache.org/jira/browse/SOLR-11198 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 > Environment: Windows 7 >Reporter: Isabelle Giguere >Assignee: Erick Erickson >Priority: Minor > > With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file > is empty, it is downloaded as a folder (on Windows, at least). > A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, > however, in ZK. > Noticed because we keep an empty synonyms.txt file in the Solr config > provided with our product, in case a client would want to use it. > The workaround is simple, since the file allows comments: just add a comment, > so it is not empty. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11190) GraphQuery not working if field has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Ramachandran updated SOLR-11190: Attachment: SOLR-11190.patch > GraphQuery not working if field has only docValues > -- > > Key: SOLR-11190 > URL: https://issues.apache.org/jira/browse/SOLR-11190 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 6.6 >Reporter: Karthik Ramachandran >Assignee: Karthik Ramachandran > Attachments: SOLR-11190.patch, SOLR-11190.patch > > > Graph traversal is not working if field has only docValues since the > construction of leaf or parent node queries uses only TermQuery. > \\ \\ > {code:xml|title=managed-schema|borderStyle=solid} > > docValues="true" /> > docValues="true" /> > docValues="true" /> > docValues="true" /> > id > > precisionStep="0" positionIncrementGap="0"/> > > {code} > {code} > curl -XPOST -H 'Content-Type: application/json' > 'http://localhost:8983/solr/graph/update' --data-binary ' { > "add" : { "doc" : { "id" : "1", "name" : "Root1" } }, > "add" : { "doc" : { "id" : "2", "name" : "Root2" } }, > "add" : { "doc" : { "id" : "3", "name" : "Root3" } }, > "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } > }, > "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } > }, > "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } > }, > "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } > }, > "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } > }, > "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 > Child1" } }, > "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 > Child2" } }, > "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 > Child1" } }, > "commit" : {} > }' > {code} > {code} > 
http://localhost:8983/solr/graph/select?q=*:*={!graph from=parentid > to=id}id:1 > or > http://localhost:8983/solr/graph/select?q=*:*={!graph from=id > to=parentid}id:122 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11190) GraphQuery not working if field has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115444#comment-16115444 ] Karthik Ramachandran commented on SOLR-11190: - Agreed, If the field is indexed we should use TermInSetQuery. I have fixed it, you can check it in pull request. I will update the patch soon. > GraphQuery not working if field has only docValues > -- > > Key: SOLR-11190 > URL: https://issues.apache.org/jira/browse/SOLR-11190 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 6.6 >Reporter: Karthik Ramachandran >Assignee: Karthik Ramachandran > Attachments: SOLR-11190.patch > > > Graph traversal is not working if field has only docValues since the > construction of leaf or parent node queries uses only TermQuery. > \\ \\ > {code:xml|title=managed-schema|borderStyle=solid} > > docValues="true" /> > docValues="true" /> > docValues="true" /> > docValues="true" /> > id > > precisionStep="0" positionIncrementGap="0"/> > > {code} > {code} > curl -XPOST -H 'Content-Type: application/json' > 'http://localhost:8983/solr/graph/update' --data-binary ' { > "add" : { "doc" : { "id" : "1", "name" : "Root1" } }, > "add" : { "doc" : { "id" : "2", "name" : "Root2" } }, > "add" : { "doc" : { "id" : "3", "name" : "Root3" } }, > "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } > }, > "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } > }, > "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } > }, > "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } > }, > "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } > }, > "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 > Child1" } }, > "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 > Child2" } }, > "add" : { "doc" : { "id" : "131", 
"parentid" : "13", "name" : "Root13 > Child1" } }, > "commit" : {} > }' > {code} > {code} > http://localhost:8983/solr/graph/select?q=*:*={!graph from=parentid > to=id}id:1 > or > http://localhost:8983/solr/graph/select?q=*:*={!graph from=id > to=parentid}id:122 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+178) - Build # 20264 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20264/ Java: 32bit/jdk-9-ea+178 -server -XX:+UseG1GC --illegal-access=deny 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest Error Message: 3 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) Thread[id=376, name=Connection evictor, state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@9/java.lang.Thread.run(Thread.java:844)2) Thread[id=378, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-EventThread, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)3) Thread[id=377, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-SendThread(127.0.0.1:46689), state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) Thread[id=376, name=Connection evictor, state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/java.lang.Thread.sleep(Native Method) at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) at java.base@9/java.lang.Thread.run(Thread.java:844) 2) Thread[id=378, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-EventThread, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062) at java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) 3) Thread[id=377, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-SendThread(127.0.0.1:46689), state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060) at __randomizedtesting.SeedInfo.seed([A42CCC23A8D2990A]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=377, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-SendThread(127.0.0.1:46689), state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101) at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997) at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=377, name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[A42CCC23A8D2990A]-SendThread(127.0.0.1:46689), state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at java.base@9/java.lang.Thread.sleep(Native Method) at
[jira] [Commented] (SOLR-10796) TestPointFields: increase randomized testing of non-trivial values
[ https://issues.apache.org/jira/browse/SOLR-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115423#comment-16115423 ] ASF subversion and git services commented on SOLR-10796: Commit cff5e985835759b4fcb64629ddca817fa6e17944 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cff5e98 ] SOLR-10796: TestPointFields.testDoublePointFieldRangeFacet(): Guard against converting a double-valued '-Infinity' to BigDecimal > TestPointFields: increase randomized testing of non-trivial values > --- > > Key: SOLR-10796 > URL: https://issues.apache.org/jira/browse/SOLR-10796 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Assignee: Steve Rowe > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-10796-part2.patch, SOLR-10796.patch, > SOLR-10796.patch, SOLR-10796.patch > > > A lot of TestPointFields code just uses positive nums, or only randomizes > values between -100 and 100, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10796) TestPointFields: increase randomized testing of non-trivial values
[ https://issues.apache.org/jira/browse/SOLR-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115421#comment-16115421 ] ASF subversion and git services commented on SOLR-10796: Commit ec99019b3660cacdb10bb8f81923ad4111f99d7e in lucene-solr's branch refs/heads/branch_7_0 from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec99019 ] SOLR-10796: TestPointFields.testDoublePointFieldRangeFacet(): Guard against converting a double-valued '-Infinity' to BigDecimal > TestPointFields: increase randomized testing of non-trivial values > --- > > Key: SOLR-10796 > URL: https://issues.apache.org/jira/browse/SOLR-10796 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Assignee: Steve Rowe > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-10796-part2.patch, SOLR-10796.patch, > SOLR-10796.patch, SOLR-10796.patch > > > A lot of TestPointFields code just uses positive nums, or only randomizes > values between -100 and 100, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10796) TestPointFields: increase randomized testing of non-trivial values
[ https://issues.apache.org/jira/browse/SOLR-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115422#comment-16115422 ] ASF subversion and git services commented on SOLR-10796: Commit 23541b75c1a2452a94c1e26be76cc295470e4462 in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=23541b7 ] SOLR-10796: TestPointFields.testDoublePointFieldRangeFacet(): Guard against converting a double-valued '-Infinity' to BigDecimal > TestPointFields: increase randomized testing of non-trivial values > --- > > Key: SOLR-10796 > URL: https://issues.apache.org/jira/browse/SOLR-10796 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Assignee: Steve Rowe > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-10796-part2.patch, SOLR-10796.patch, > SOLR-10796.patch, SOLR-10796.patch > > > A lot of TestPointFields code just uses positive nums, or only randomizes > values between -100 and 100, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7655) Speed up geo-distance queries that match most documents
[ https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115415#comment-16115415 ] David Smiley commented on LUCENE-7655: -- Cool [~maciej.zasada]; thanks for contributing. > Speed up geo-distance queries that match most documents > --- > > Key: LUCENE-7655 > URL: https://issues.apache.org/jira/browse/LUCENE-7655 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > > I think the same optimization that was applied in LUCENE-7641 would also work > with geo-distance queries? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11023) Need SortedNumerics/Points version of EnumField
[ https://issues.apache.org/jira/browse/SOLR-11023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115407#comment-16115407 ] ASF subversion and git services commented on SOLR-11023: Commit 5d632c0a0e8769b512a365a98d348dd3d5ef0bbc in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d632c0 ] SOLR-11023: add docValues="true" to an enum field declaration in schema.xml, so that EnumFieldType, which requires docValues, stops causing TestDistributedSearch to fail > Need SortedNumerics/Points version of EnumField > --- > > Key: SOLR-11023 > URL: https://issues.apache.org/jira/browse/SOLR-11023 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Steve Rowe >Priority: Blocker > Labels: numeric-tries-to-points > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch, > SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch > > > although it's not a subclass of TrieField, EnumField does use > "LegacyIntField" to index the int value associated with each of the enum > values, in addition to using SortedSetDocValuesField when {{docValues="true" > multivalued="true"}}. > I have no idea if Points would be better/worse then Terms for low cardinality > usecases like EnumField, but either way we should think about a new variant > of EnumField that doesn't depend on > LegacyIntField/LegacyNumericUtils.intToPrefixCoded and uses > SortedNumericDocValues. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11023) Need SortedNumerics/Points version of EnumField
[ https://issues.apache.org/jira/browse/SOLR-11023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115406#comment-16115406 ] ASF subversion and git services commented on SOLR-11023: Commit c58bbaa6cabe91c3823d2e9c6395379d987fec60 in lucene-solr's branch refs/heads/branch_7_0 from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c58bbaa ] SOLR-11023: add docValues="true" to an enum field declaration in schema.xml, so that EnumFieldType, which requires docValues, stops causing TestDistributedSearch to fail > Need SortedNumerics/Points version of EnumField > --- > > Key: SOLR-11023 > URL: https://issues.apache.org/jira/browse/SOLR-11023 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Steve Rowe >Priority: Blocker > Labels: numeric-tries-to-points > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch, > SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch > > > although it's not a subclass of TrieField, EnumField does use > "LegacyIntField" to index the int value associated with each of the enum > values, in addition to using SortedSetDocValuesField when {{docValues="true" > multivalued="true"}}. > I have no idea if Points would be better/worse then Terms for low cardinality > usecases like EnumField, but either way we should think about a new variant > of EnumField that doesn't depend on > LegacyIntField/LegacyNumericUtils.intToPrefixCoded and uses > SortedNumericDocValues. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11023) Need SortedNumerics/Points version of EnumField
[ https://issues.apache.org/jira/browse/SOLR-11023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115408#comment-16115408 ] ASF subversion and git services commented on SOLR-11023: Commit 3f9e748202ab8619af83f093ba4739f5a1e5c57b in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f9e748 ] SOLR-11023: add docValues="true" to an enum field declaration in schema.xml, so that EnumFieldType, which requires docValues, stops causing TestDistributedSearch to fail > Need SortedNumerics/Points version of EnumField > --- > > Key: SOLR-11023 > URL: https://issues.apache.org/jira/browse/SOLR-11023 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Steve Rowe >Priority: Blocker > Labels: numeric-tries-to-points > Fix For: 7.0, master (8.0), 7.1 > > Attachments: SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch, > SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch > > > although it's not a subclass of TrieField, EnumField does use > "LegacyIntField" to index the int value associated with each of the enum > values, in addition to using SortedSetDocValuesField when {{docValues="true" > multivalued="true"}}. > I have no idea if Points would be better/worse then Terms for low cardinality > usecases like EnumField, but either way we should think about a new variant > of EnumField that doesn't depend on > LegacyIntField/LegacyNumericUtils.intToPrefixCoded and uses > SortedNumericDocValues. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-7.0-Linux (64bit/jdk-9-ea+178) - Build # 149 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/149/ Java: 64bit/jdk-9-ea+178 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.TestDistributedSearch.test Error Message: Error from server at http://127.0.0.1:33231//collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true". Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:33231//collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true". at __randomizedtesting.SeedInfo.seed([E4BC061C0FD40A16:6CE839C6A12867EE]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152) at org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:483) at org.apache.solr.BaseDistributedSearchTestCase.indexr(BaseDistributedSearchTestCase.java:465) at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1052) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Updated] (LUCENE-7827) disable "textgrams" when minPrefixChars=0 in AnalyzingInfixSuggester
[ https://issues.apache.org/jira/browse/LUCENE-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated LUCENE-7827: - Attachment: LUCENE-7827.patch Since I've made many {{private}} members {{protected}}, {{precommit}} requires copy-pasting javadocs for them [^LUCENE-7827.patch]. I feel like I'm doing something wrong. > disable "textgrams" when minPrefixChars=0 in AnalyzingInfixSuggester > -- > > Key: LUCENE-7827 > URL: https://issues.apache.org/jira/browse/LUCENE-7827 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Mikhail Khludnev >Priority: Minor > Attachments: LUCENE-7827.patch, LUCENE-7827.patch, LUCENE-7827.patch, > LUCENE-7827.patch > > > The current code allows setting minPrefixChars=0, but it still creates an > unnecessary {{textgrams}} field, which can add a significant footprint. > Bypassing it keeps existing tests green.
[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+178) - Build # 206 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/206/ Java: 32bit/jdk-9-ea+178 -server -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.TestDistributedSearch.test Error Message: Error from server at http://127.0.0.1:44859/_u/collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true". Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:44859/_u/collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true". at __randomizedtesting.SeedInfo.seed([215739FB99696194:A903062137950C6C]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152) at org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:483) at org.apache.solr.BaseDistributedSearchTestCase.indexr(BaseDistributedSearchTestCase.java:465) at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1052) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_141) - Build # 20263 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20263/ Java: 32bit/jdk1.8.0_141 -client -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.lucene.search.suggest.analyzing.AnalyzingSuggesterTest.testRandomRealisticKeys Error Message: input automaton is too large: 1001 Stack Trace: java.lang.IllegalArgumentException: input automaton is too large: 1001 at __randomizedtesting.SeedInfo.seed([27A8048A90BE2A6:80BE76DD7CB2166D]:0) at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1298) at org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306) [the previous frame repeats for dozens of levels of recursion; identical frames elided] at
[JENKINS-EA] Lucene-Solr-7.0-Linux (64bit/jdk-9-ea+178) - Build # 148 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/148/ Java: 64bit/jdk-9-ea+178 -XX:-UseCompressedOops -XX:+UseSerialGC --illegal-access=deny 3 tests failed. FAILED: org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([7F06C8897A71D6E8:87D85D61021D07BB]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:885) at org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion(SegmentsInfoRequestHandlerTest.java:68) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=2=count(//lst[@name='segments']/lst/str[@name='version'][.='7.0.0']) xml response was:
[jira] [Commented] (LUCENE-7919) excessive use of notifyAll
[ https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115349#comment-16115349 ] Michael McCandless commented on LUCENE-7919: I agree {{notifyAll}} was not necessary here; we've already replaced that with a {{notify}} in LUCENE-7868, which will be released in 7.0. > excessive use of notifyAll > -- > > Key: LUCENE-7919 > URL: https://issues.apache.org/jira/browse/LUCENE-7919 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.6 >Reporter: Guoqiang Jiang > > I am using Elasticsearch and have a write-heavy workload. When profiling with jstack, I found a significant proportion of thread stacks similar to the one quoted in full in the issue report below.
[jira] [Commented] (SOLR-11200) provide a config to enable/disable ConcurrentMergeScheduler.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115348#comment-16115348 ] Amrit Sarkar commented on SOLR-11200: - Ah! I don't think this will serve our purpose for bulk indexing. Logs: {code} mergeScheduler=ConcurrentMergeScheduler: maxThreadCount=5, maxMergeCount=15, ioThrottle=false 2017-08-05 09:14:03.005 INFO (qtp1205044462-19) [c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream [MergeScheduler][qtp1205044462-19]: updateMergeThreads ioThrottle=false targetMBPerSec=10240.0 MB/sec mergeScheduler=ConcurrentMergeScheduler: maxThreadCount=5, maxMergeCount=15, ioThrottle=false 2017-08-05 09:15:51.196 INFO (qtp1205044462-69) [c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream [MergeScheduler][qtp1205044462-69]: updateMergeThreads ioThrottle=false targetMBPerSec=20.0 MB/sec [further updateMergeThreads entries from 09:15:56 through 09:16:56, all reporting ioThrottle=false targetMBPerSec=20.0 MB/sec, elided] {code} Note that {{targetMBPerSec}} is initialised to 10240.0 MB/sec (~10 GB/sec), but then falls back to the default 20.0 MB/sec instead of staying at 10 GB/sec. Maybe {{SolrIndexConfig#buildMergeScheduler}} is not the right place to do it. I will look into it more. > provide a config to enable/disable ConcurrentMergeScheduler.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Attachments: SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful when bulk indexing. Lucene introduced it in > https://issues.apache.org/jira/browse/LUCENE-6119 .
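For reference, the kind of solrconfig.xml stanza the patch under discussion aims to support would look roughly like the following. The element name for the throttle flag is exactly what the patch is still working out, so treat this as a sketch of the intent, not final syntax; the maxThreadCount/maxMergeCount values simply mirror the logs above.

```xml
<indexConfig>
  <!-- Sketch (SOLR-11200 proposal): expose Lucene's
       ConcurrentMergeScheduler auto IO throttle as a config flag. -->
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxThreadCount">5</int>
    <int name="maxMergeCount">15</int>
    <bool name="ioThrottle">false</bool>
  </mergeScheduler>
</indexConfig>
```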
[jira] [Created] (LUCENE-7919) excessive use of notifyAll
Guoqiang Jiang created LUCENE-7919: -- Summary: excessive use of notifyAll Key: LUCENE-7919 URL: https://issues.apache.org/jira/browse/LUCENE-7919 Project: Lucene - Core Issue Type: Bug Components: core/index Affects Versions: 6.6 Reporter: Guoqiang Jiang I am using Elasticsearch and have a write-heavy workload. When profiling with jstack, I found a significant proportion of thread stacks similar to the following: {code:java} "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000] java.lang.Thread.State: RUNNABLE at java.lang.Object.notifyAll(Native Method) at org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213) - locked <0xea02b6d0> (a org.apache.lucene.index.DocumentsWriterPerThreadPool) at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496) at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571) at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316) at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663) at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607) at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505) at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556) at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545) at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484) at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143) at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113) at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69) at 
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939) at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908) at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322) at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264) at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888) at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885) at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147) at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657) at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897) at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93) at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260) at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252) at 
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) {code} After reading the code related with DocumentsWriterPerThreadPool, I think the notifyAll is useless. This is a relatively expensive operation, and should be avoided if possible. -- This message
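The report argues that release() signals waiters even when no thread is blocked. A minimal sketch of the pattern the reporter suggests — this is NOT Lucene's actual DocumentsWriterPerThreadPool code, just an illustration of tracking a waiter count so the (expensive) wakeup call is skipped when nobody can benefit from it:

```java
import java.util.ArrayDeque;

// Sketch only -- not Lucene's DocumentsWriterPerThreadPool.
// The idea: count threads blocked in obtain(), and only call
// notify() from release() when that count is non-zero.
public class ThreadStatePool {
    private final ArrayDeque<Object> freeList = new ArrayDeque<>();
    private int waiters = 0; // threads currently blocked in obtain()

    public synchronized Object obtain() {
        while (freeList.isEmpty()) {
            waiters++;
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException(e);
            } finally {
                waiters--;
            }
        }
        return freeList.pop();
    }

    public synchronized void release(Object state) {
        freeList.push(state);
        if (waiters > 0) { // skip the wakeup when no thread is waiting
            notify();
        }
    }

    public static void main(String[] args) {
        ThreadStatePool pool = new ThreadStatePool();
        Object s = new Object();
        pool.release(s);                        // no waiters: no notify() issued
        System.out.println(pool.obtain() == s); // prints "true"
    }
}
```

On a write-heavy box every indexing thread releases its per-thread state on each document, so even a cheap-looking unconditional notifyAll() in that path shows up in profiles, as the thread dump above suggests.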
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+178) - Build # 205 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/205/
Java: 64bit/jdk-9-ea+178 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny

2 tests failed.

FAILED: org.apache.solr.update.HardAutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
	at __randomizedtesting.SeedInfo.seed([B6ECC9258FCA2FBC:C3EA65D0CE4C1A9]:0)
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:886)
	at org.apache.solr.update.HardAutoCommitTest.testCommitWithin(HardAutoCommitTest.java:100)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1]
	xml response was: 00 request was:q=id:529==0=20=2.2
	at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:879)
	... 39 more

FAILED: org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at
[jira] [Updated] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11200:
Attachment: SOLR-11200.patch

Patch attached::
{code}
modified:  solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
new file:  solr/core/src/test-files/solr/collection1/conf/solrconfig-concurrentmergescheduler.xml
modified:  solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
{code}
I had to create {{solrconfig-concurrentmergescheduler.xml}} because the other test solrconfigs are already used by other tests.

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Nawab Zada Asad iqbal
> Priority: Minor
> Attachments: SOLR-11200.patch, SOLR-11200.patch
>
> This config can be useful while bulk indexing. Lucene introduced it in
> https://issues.apache.org/jira/browse/LUCENE-6119 .
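For readers following along: the feature being exposed is Lucene's ConcurrentMergeScheduler auto IO throttling (LUCENE-6119), and the patch routes a flag through SolrIndexConfig. The element name was still under discussion in this thread, so the fragment below is only a hedged sketch of how such a knob might look in solrconfig.xml, not the committed syntax:

```xml
<!-- Hypothetical solrconfig.xml fragment. The option name "ioThrottle"
     is illustrative only; the final parameter name had not been agreed
     on in this thread. -->
<indexConfig>
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <!-- false would map to ConcurrentMergeScheduler.disableAutoIOThrottle(),
         which is what you typically want during one-off bulk indexing -->
    <bool name="ioThrottle">false</bool>
  </mergeScheduler>
</indexConfig>
```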
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+178) - Build # 20262 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20262/
Java: 32bit/jdk-9-ea+178 -server -XX:+UseConcMarkSweepGC --illegal-access=deny

1 tests failed.

FAILED: org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:41183/b_r/x/collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true".

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:41183/b_r/x/collection1: ERROR: [doc=1] Error adding field 'severity'='Not Available' msg=EnumFieldType requires docValues="true".
	at __randomizedtesting.SeedInfo.seed([5FA99E14ACD87BE9:D7FDA1CE02241611]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
	at org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:483)
	at org.apache.solr.BaseDistributedSearchTestCase.indexr(BaseDistributedSearchTestCase.java:465)
	at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1052)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
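The failure above is Solr rejecting a document because an EnumFieldType field lacks docValues. For context, a hedged sketch of a schema shape that satisfies that check — the field, type, and enum names here are illustrative, not the test's actual schema:

```xml
<!-- Illustrative schema.xml fragment; names are hypothetical. -->
<!-- EnumFieldType rejects fields unless docValues="true" is set. -->
<fieldType name="severityType" class="solr.EnumFieldType"
           enumsConfig="enumsConfig.xml" enumName="severity"
           docValues="true"/>
<field name="severity" type="severityType" indexed="true" stored="true"/>
```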
[jira] [Updated] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11200:
Attachment: SOLR-11200.patch

[~varunthacker] [~niqbal], would {{isAutoIOThrottle}} be a suitable name? Patch attached. I will finish the tests once we agree on the param name.

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Nawab Zada Asad iqbal
> Priority: Minor
> Attachments: SOLR-11200.patch
>
> This config can be useful while bulk indexing. Lucene introduced it in
> https://issues.apache.org/jira/browse/LUCENE-6119 .