[jira] [Commented] (SOLR-8151) OverseerCollectionMessageHandler shouldn't be logging informative data as WARN
[ https://issues.apache.org/jira/browse/SOLR-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952768#comment-14952768 ] ASF subversion and git services commented on SOLR-8151: --- Commit 1708047 from [~romseygeek] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1708047 ] SOLR-8151: Don't log OverseerCollectionMessageHandler info as WARN > OverseerCollectionMessageHandler shouldn't be logging informative data as WARN > -- > > Key: SOLR-8151 > URL: https://issues.apache.org/jira/browse/SOLR-8151 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3, Trunk >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Minor > Fix For: 5.4 > > > This ends up filling the logs with WARN messages whenever you do any > collection administration. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8151) OverseerCollectionMessageHandler shouldn't be logging informative data as WARN
[ https://issues.apache.org/jira/browse/SOLR-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952767#comment-14952767 ] ASF subversion and git services commented on SOLR-8151: --- Commit 1708046 from [~romseygeek] in branch 'dev/trunk' [ https://svn.apache.org/r1708046 ] SOLR-8151: Don't log OverseerCollectionMessageHandler info as WARN > OverseerCollectionMessageHandler shouldn't be logging informative data as WARN > -- > > Key: SOLR-8151 > URL: https://issues.apache.org/jira/browse/SOLR-8151 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3, Trunk >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Minor > Fix For: 5.4 > > > This ends up filling the logs with WARN messages whenever you do any > collection administration.
[jira] [Resolved] (SOLR-8151) OverseerCollectionMessageHandler shouldn't be logging informative data as WARN
[ https://issues.apache.org/jira/browse/SOLR-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward resolved SOLR-8151. - Resolution: Fixed > OverseerCollectionMessageHandler shouldn't be logging informative data as WARN > -- > > Key: SOLR-8151 > URL: https://issues.apache.org/jira/browse/SOLR-8151 > Project: Solr > Issue Type: Bug >Affects Versions: 5.3, Trunk >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Minor > Fix For: 5.4 > > > This ends up filling the logs with WARN messages whenever you do any > collection administration.
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b83) - Build # 14500 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14500/ Java: 64bit/jdk1.9.0-ea-b83 -XX:+UseCompressedOops -XX:+UseParallelGC -XX:CompileCommand=exclude,org.apache.directory.api.ldap.model.name.Dn::rdnOidToName 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest Error Message: 5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=9130, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:746)2) Thread[id=9128, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:746)3) Thread[id=9127, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:516) at java.util.TimerThread.mainLoop(Timer.java:526) at java.util.TimerThread.run(Timer.java:505)4) Thread[id=9131, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:746)5) Thread[id=9129, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:746) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=9130, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at
[jira] [Created] (SOLR-8156) RequestHandlerBase.handleRequest logs stacktraces caused by user requests
Alan Woodward created SOLR-8156: --- Summary: RequestHandlerBase.handleRequest logs stacktraces caused by user requests Key: SOLR-8156 URL: https://issues.apache.org/jira/browse/SOLR-8156 Project: Solr Issue Type: Bug Reporter: Alan Woodward Priority: Minor Bad user requests (eg syntax errors in queries) fill up solr logs with stacktraces, which makes tracking down actual errors much more difficult. Error logging is handled in both RequestHandlerBase and HttpSolrCall at the moment. HttpSolrCall tries to be a bit cleverer about it, only logging stacktraces for server errors. I suggest we just remove the logging from RHB entirely. This should also clear up some cases where errors get logged twice.
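The "only log stacktraces for server errors" policy described in the issue can be sketched in isolation. This is a hedged illustration of the idea, not Solr's actual logging code; the class and method names below are invented for the example, and the 4xx/5xx split stands in for HttpSolrCall's client-vs-server error distinction.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Illustrative sketch: full stacktrace only for server-side (5xx) errors,
// a single WARN line for bad user requests (4xx) such as query syntax errors.
public class ErrorLogPolicy {
    /** Formats an error for the log; the stacktrace is included only when the
     *  status code indicates a server error. */
    public static String format(int statusCode, Exception e) {
        if (statusCode >= 500) {
            StringWriter sw = new StringWriter();
            e.printStackTrace(new PrintWriter(sw));
            return "ERROR " + statusCode + " " + sw;
        }
        // Client error: one line, no stacktrace, so logs stay readable.
        return "WARN " + statusCode + " " + e.getMessage();
    }
}
```

A bad query would then produce one WARN line instead of a multi-screen stacktrace, while genuine server faults keep their full trace.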
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60) - Build # 14211 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14211/ Java: 32bit/jdk1.8.0_60 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([775294BA16C1AF36:6431A6D527AE1690]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952932#comment-14952932 ] Upayavira commented on SOLR-8157: - Good observations. I've hit that issue a couple of times - where the UI is not "self-aware" and thus redirects to the old one when jumping between nodes. The cloud tab suffers from the same issue. What we could do is centralise the config for the root UI path (/index.html) such that we can easily change it when we change URLs. We should, of course, add the # as you mention, too! > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Priority: Minor > Labels: angularjs > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new?
[jira] [Assigned] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Upayavira reassigned SOLR-8157: --- Assignee: Upayavira > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Assignee: Upayavira >Priority: Minor > Labels: angularjs > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new?
Geospatial indexing of 2 lakh polygons in Lucene in 2 mins
Hi, I am trying to find the intersecting geohashes (up to precision length 6) of around 2 lakh (200,000) polygons. I have tried using PostGIS ST_GeoHash and ST_Intersects, but they are very slow for my use case. I need to index the 2 lakh polygons and find their intersecting geohashes within 2 minutes. I read that it is possible to do this using Lucene. http://opensourceconnections.com/blog/2014/04/11/indexing-polygons-in-lucene-with-accuracy/ Kindly tell me how to do it or point me in the right direction. Regards, Swarn Avinash Kumar
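For orientation on what "geohash precision 6" means in the question above: a geohash is a base-32 string whose length controls cell size, and the linked article's approach indexes the cells that cover each polygon. The sketch below is the standard geohash encoding algorithm in plain Java, with no Lucene dependency; it is an illustration of the cell scheme, not of Lucene's spatial module itself.

```java
// Standard geohash encoder: alternately bisect longitude and latitude,
// emitting one base-32 character per 5 bits. A precision-6 hash names a
// cell roughly 1.2 km x 0.6 km.
public class Geohash {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    public static String encode(double lat, double lon, int precision) {
        double latMin = -90, latMax = 90, lonMin = -180, lonMax = 180;
        StringBuilder out = new StringBuilder(precision);
        boolean lonBit = true;  // bits alternate, starting with longitude
        int bits = 0, ch = 0;
        while (out.length() < precision) {
            if (lonBit) {
                double mid = (lonMin + lonMax) / 2;
                if (lon >= mid) { ch = (ch << 1) | 1; lonMin = mid; }
                else            { ch = ch << 1;       lonMax = mid; }
            } else {
                double mid = (latMin + latMax) / 2;
                if (lat >= mid) { ch = (ch << 1) | 1; latMin = mid; }
                else            { ch = ch << 1;       latMax = mid; }
            }
            lonBit = !lonBit;
            if (++bits == 5) {        // 5 bits per base-32 character
                out.append(BASE32.charAt(ch));
                bits = 0;
                ch = 0;
            }
        }
        return out.toString();
    }
}
```

For example, `Geohash.encode(57.64911, 10.40744, 6)` yields `"u4pruy"`. Finding all precision-6 cells intersecting a polygon then amounts to enumerating the cells covering its bounding box and testing each against the polygon, which is the kind of recursive cell decomposition Lucene's spatial prefix trees do for you.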
[jira] [Updated] (LUCENE-6829) OfflineSorter should use Directory API
[ https://issues.apache.org/jira/browse/LUCENE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6829: --- Attachment: LUCENE-6829.patch New patch, I think it's closer: * The default temp dir logic is now private to Hunspell * I added Directory.createTempOutput, and also added IndexOutput.getName (so you can ask what temp name was picked). I use a seeded random instance to generate the name candidates, retrying until I get one that didn't already exist. * Simplified the OfflineSorter API: the sort method now owns creating a temp file (sorted), and then returns its name * Fixed the formatting disaster from TestRandomChains (I blame emacs) * I cut over to TrackingDirectory in OfflineSorter to manage "deleting temp files on exception", and simplify the try/finally/success horror show * I changed TrackingDirectoryWrapper.getCreatedFiles to make a clone first (it had a TODO about it, and I also hit a cryptic ConcurrentModificationException because it didn't clone), and I added an explicit clearCreatedFiles, used by IW > OfflineSorter should use Directory API > -- > > Key: LUCENE-6829 > URL: https://issues.apache.org/jira/browse/LUCENE-6829 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk, 5.4 > > Attachments: LUCENE-6829.patch, LUCENE-6829.patch, LUCENE-6829.patch > > > I think this is a blocker for LUCENE-6825, because the block KD-tree makes > heavy use of OfflineSorter and we don't want to fill up tmp space ... > This should be a straightforward cutover, but there are some challenges, e.g. > the test was failing because virus checker blocked deleting of files. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
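The temp-name scheme described in the patch notes (a seeded random proposing candidate names, retried until one doesn't already exist) can be sketched as follows. This is a hedged illustration: the `Set` stands in for a `Directory` listing, and the method and name format are assumptions for the example, not Lucene's actual `createTempOutput` implementation.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch of seeded-random temp-name generation with retry on collision.
public class TempNames {
    /** Picks a name not yet in {@code existing}, reserving it on success.
     *  A seeded Random makes name sequences reproducible across runs. */
    public static String pickTempName(String prefix, Set<String> existing, Random random) {
        while (true) {
            String candidate = prefix + "_"
                + Long.toString(random.nextLong() & Long.MAX_VALUE, 36) + ".tmp";
            // add() returns false on collision, so this doubles as check-and-reserve.
            if (existing.add(candidate)) {
                return candidate;
            }
        }
    }
}
```

Because collisions simply retry, the loop terminates quickly in practice even when many temp files already exist.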
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14501 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14501/ Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([C9E361FC02B3D588:DA80539333DC6C2E]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952926#comment-14952926 ] Adrien Grand commented on LUCENE-6276: -- I think it would make more sense to sum up {{totalTermFreq/docFreq}} for each term instead of {{totalTermFreq/conjunctionDISI.cost()}}, so that we get the average number of positions per document? But otherwise I think you got the intention right. Something else to be careful with is that {{TermStatistics.totalTermFreq()}} may return -1, so we need a fallback for that case. Maybe we could just assume 1 position per document? A related question is what definition we should give to {{matchCost()}}. The patch does not have the issue yet since it only deals with phrase queries, but eventually we should be able to compare the cost of eg. a phrase query against a doc values range query even though they perform very different computations. Maybe the javadocs of matchCost could suggest a scale of costs of operations that implementors of matchCost() could use in order to compute the cost of matching the two-phase iterator. It could be something like 1 for nextDoc(), nextPosition(), comparisons and basic arithmetic operations and eg. 10 for advance()? > Add matchCost() api to TwoPhaseDocIdSetIterator > --- > > Key: LUCENE-6276 > URL: https://issues.apache.org/jira/browse/LUCENE-6276 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Robert Muir > Attachments: LUCENE-6276-ExactPhraseOnly.patch > > > We could add a method like TwoPhaseDISI.matchCost() defined as something like > estimate of nanoseconds or similar. > ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array > so that cheaper ones are called first. Today it has no idea if one scorer is > a simple phrase scorer on a short field vs another that might do some geo > calculation or more expensive stuff. > PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)
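The cheapest-first idea in this thread (sort the two-phase confirmation checks by estimated cost so expensive ones only run on candidates the cheap ones didn't veto) can be sketched outside Lucene. The interface and class below are illustrative assumptions for the example, not Lucene's actual TwoPhaseIterator API.

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative two-phase confirmation check with a per-call cost estimate,
// e.g. average positions per document for a phrase check.
interface TwoPhaseCheck {
    boolean matches(int doc);
    float matchCost();
}

class Conjunction {
    /** Confirms a candidate doc against all checks, cheapest first, so a
     *  cheap veto short-circuits before any expensive check runs. */
    static boolean matchesAll(int doc, TwoPhaseCheck[] checks) {
        Arrays.sort(checks, Comparator.comparingDouble(TwoPhaseCheck::matchCost));
        for (TwoPhaseCheck check : checks) {
            if (!check.matches(doc)) {
                return false;
            }
        }
        return true;
    }
}
```

The sort only pays off because matchCost() is a static estimate: it can be computed once per query from index statistics rather than per document.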
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14502 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14502/ Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: commitWithin did not work on node: http://127.0.0.1:51027/dc/tt/collection1 expected:<68> but was:<67> Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:51027/dc/tt/collection1 expected:<68> but was:<67> at __randomizedtesting.SeedInfo.seed([942B661DA64FD383:1C7F59C708B3BE7B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Created] (SOLR-8157) Dead link to replicas in AngularUI
Jan Høydahl created SOLR-8157: - Summary: Dead link to replicas in AngularUI Key: SOLR-8157 URL: https://issues.apache.org/jira/browse/SOLR-8157 Project: Solr Issue Type: Bug Components: UI Reporter: Jan Høydahl Priority: Minor Dead link to shard replica admin UI - missing # in URL. Reproduce: # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} # Go to Angular UI, collection overview: http://localhost:8983/solr/index.html#/gettingstarted/collection-overview # For one of the shards, expand one of its replicas # Click the core name, e.g. http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 This link is not valid. It should have had a {{#}} after {{solr/}} Another issue is that it points to the OLD UI, perhaps it should stay in the new?
[jira] [Created] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows
Michael McCandless created LUCENE-6835: -- Summary: Directory.deleteFile should "own" retrying deletions on Windows Key: LUCENE-6835 URL: https://issues.apache.org/jira/browse/LUCENE-6835 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Fix For: Trunk, 5.4 Rob's idea: Today, we have hairy logic in IndexFileDeleter to deal with Windows file systems that cannot delete still open files. And with LUCENE-6829, where OfflineSorter now must deal with the situation too ... I worked around it by fixing all tests to disable the virus checker. I think it makes more sense to push this "platform specific problem" lower in the stack, into Directory? I.e., its deleteFile method would catch the access denied, and then retry the deletion later. Then we could re-enable virus checker on all these tests, simplify IndexFileDeleter, etc. Maybe in the future we could further push this down, into WindowsDirectory, and fix FSDirectory.open to return WindowsDirectory on windows ...
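The catch-and-retry behavior proposed here can be sketched with plain java.nio.file. The method name, retry count, and backoff policy below are illustrative assumptions, not the eventual Directory API; the point is only that one place owns the Windows access-denied dance instead of every caller.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: retry deletion when Windows (or a virus checker) still holds the
// file open, surfacing the original error if retries are exhausted.
public class RetryingDelete {
    public static void delete(Path path, int maxRetries) throws IOException {
        for (int attempt = 0; ; attempt++) {
            try {
                Files.deleteIfExists(path);   // no-op if already gone
                return;
            } catch (AccessDeniedException e) {
                if (attempt >= maxRetries) {
                    throw e;                  // give up, keep the real cause
                }
                try {
                    Thread.sleep(10L << attempt);  // exponential backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted while retrying delete", ie);
                }
            }
        }
    }
}
```

On POSIX file systems the AccessDeniedException path never fires for open files, so the wrapper degrades to a plain delete there.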
[jira] [Commented] (SOLR-8157) Dead link to replicas in AngularUI
[ https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952936#comment-14952936 ] Jan Høydahl commented on SOLR-8157: --- The patch is simple in {{collection_overview.html}} {code} Core: {{replica.core}} {code} But probably worth it to centralize the full root path! > Dead link to replicas in AngularUI > -- > > Key: SOLR-8157 > URL: https://issues.apache.org/jira/browse/SOLR-8157 > Project: Solr > Issue Type: Bug > Components: UI >Reporter: Jan Høydahl >Assignee: Upayavira >Priority: Minor > Labels: angularjs > > Dead link to shard replica admin UI - missing # in URL. > Reproduce: > # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}} > # Go to Angular UI, collection overview: >http://localhost:8983/solr/index.html#/gettingstarted/collection-overview > # For one of the shards, expand one of its replicas > # Click the core name, e.g. >http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2 > This link is not valid. It should have had a {{#}} after {{solr/}} > Another issue is that it points to the OLD UI, perhaps it should stay in the > new? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
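The missing-{{#}} bug above can be shown with a small hedged sketch. The class and method names below are illustrative, not the actual Angular template code: the point is only that the replica link must keep the "#" router prefix after "solr/" so it resolves inside the admin UI rather than 404-ing against the core's base URL.

```java
// Hypothetical illustration of the link fix described in SOLR-8157.
public class ReplicaLink {
    // Broken form: no "#" after the solr/ context path.
    static String deadLink(String base, String core) {
        return base + core;
    }

    // Fixed form: the "#/" keeps the link inside the hash-routed admin UI.
    static String fixedLink(String base, String core) {
        return base + "#/" + core;
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr/";
        String core = "gettingstarted_shard1_replica2";
        System.out.println(deadLink(base, core));  // 404s in the UI
        System.out.println(fixedLink(base, core)); // resolves in the UI
    }
}
```

Centralizing the root path, as the comment suggests, would mean computing `base + "#/"` in one place instead of in each template.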
[jira] [Resolved] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6301. -- Resolution: Fixed Fix Version/s: (was: 5.2) 5.4 > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: Trunk, 5.4 > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b83) - Build # 14214 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14214/ Java: 32bit/jdk1.9.0-ea-b83 -server -XX:+UseParallelGC -XX:CompileCommand=exclude,org.apache.directory.api.ldap.model.name.Dn::rdnOidToName 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([535A3F7E13F99B33:40390D1122962295]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:519) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 478 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/478/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.search.stats.TestDistribIDF Error Message: 1 thread leaked from SUITE scope at org.apache.solr.search.stats.TestDistribIDF: 1) Thread[id=15373, name=OverseerHdfsCoreFailoverThread-94677501145907209-127.0.0.1:33247_solr-n_01, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.search.stats.TestDistribIDF: 1) Thread[id=15373, name=OverseerHdfsCoreFailoverThread-94677501145907209-127.0.0.1:33247_solr-n_01, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([419087972ADC50CE]:0) FAILED: junit.framework.TestSuite.org.apache.solr.search.stats.TestDistribIDF Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=15373, name=OverseerHdfsCoreFailoverThread-94677501145907209-127.0.0.1:33247_solr-n_01, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] 
at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.open(JsonWriter.java:325) at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.beginArray(JsonWriter.java:288) at com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:100) at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:83) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:457) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.PrintStream.flush(PrintStream.java:338) at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) at java.io.PrintStream.write(PrintStream.java:482) at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291) at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295) at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141) at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229) at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59) at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324) at org.apache.log4j.WriterAppender.append(WriterAppender.java:162) at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders(Category.java:206) at org.apache.log4j.Category.forcedLog(Category.java:391) at org.apache.log4j.Category.log(Category.java:856) at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:497) at org.apache.solr.common.SolrException.log(SolrException.java:150) at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:133) at java.lang.Thread.run(Thread.java:745) Stack Trace: 
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=15373, name=OverseerHdfsCoreFailoverThread-94677501145907209-127.0.0.1:33247_solr-n_01, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.open(JsonWriter.java:325) at com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.beginArray(JsonWriter.java:288) at com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:100) at com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:83) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:457) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.PrintStream.flush(PrintStream.java:338) at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) at java.io.PrintStream.write(PrintStream.java:482)
[jira] [Resolved] (LUCENE-6834) Remove BoostQuery.toString()'s hack with parenthesis
[ https://issues.apache.org/jira/browse/LUCENE-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6834. -- Resolution: Fixed > Remove BoostQuery.toString()'s hack with parenthesis > > > Key: LUCENE-6834 > URL: https://issues.apache.org/jira/browse/LUCENE-6834 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > Fix For: 6.0 > > Attachments: LUCENE-6834.patch > > > This hack was added in order not to break the string representation of our > queries in 5.x. However I don't think we should have it in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6834) Remove BoostQuery.toString()'s hack with parenthesis
[ https://issues.apache.org/jira/browse/LUCENE-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953245#comment-14953245 ] ASF subversion and git services commented on LUCENE-6834: - Commit 1708146 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1708146 ] LUCENE-6834: Removed BoostQuery.toString()'s hack with parenthesis. > Remove BoostQuery.toString()'s hack with parenthesis > > > Key: LUCENE-6834 > URL: https://issues.apache.org/jira/browse/LUCENE-6834 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > Fix For: 6.0 > > Attachments: LUCENE-6834.patch > > > This hack was added in order not to break the string representation of our > queries in 5.x. However I don't think we should have it in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6836: - Description: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ {noformat} java.lang.AssertionError: The top-reader used to create Weight (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) docBase=0 ord=0)) is not the same as the current reader's top-reader (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) at __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:519) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:746) {noformat} This is due to changes on LUCENE-6301. I will disable this test for now while I'm working on a fix.
[jira] [Created] (LUCENE-6836) TestBlockJoinSorter test failure
Adrien Grand created LUCENE-6836: Summary: TestBlockJoinSorter test failure Key: LUCENE-6836 URL: https://issues.apache.org/jira/browse/LUCENE-6836 Project: Lucene - Core Issue Type: Bug Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ {{java.lang.AssertionError: The top-reader used to create Weight (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) docBase=0 ord=0)) is not the same as the current reader's top-reader (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) at __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:519) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at
[jira] [Commented] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953254#comment-14953254 ] ASF subversion and git services commented on LUCENE-6836: - Commit 1708150 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1708150 ] LUCENE-6836: Disable TestBlockJoinSorter. > TestBlockJoinSorter test failure > > > Key: LUCENE-6836 > URL: https://issues.apache.org/jira/browse/LUCENE-6836 > Project: Lucene - Core > Issue Type: Bug >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > > http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ > {noformat} > java.lang.AssertionError: The top-reader used to create Weight > (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) > docBase=0 ord=0)) is not the same as the current reader's top-reader > (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) > at > __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) > at > org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) > at > org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:519) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) > at > 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) > at >
Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b83) - Build # 14504 - Still Failing!
I will look into it. I think it's due to the Filter removal since I had to refactor index sorting a bit. I opened an issue to track it: https://issues.apache.org/jira/browse/LUCENE-6836 On Mon, 12 Oct 2015 at 16:46, Policeman Jenkins Server wrote: > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ > Java: 64bit/jdk1.9.0-ea-b83 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC > -XX:CompileCommand=exclude,org.apache.directory.api.ldap.model.name.Dn::rdnOidToName > > 1 tests failed. > FAILED: org.apache.lucene.index.TestBlockJoinSorter.test > > Error Message: > The top-reader used to create Weight > (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) > docBase=0 ord=0)) is not the same as the current reader's top-reader > (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) > > Stack Trace: > java.lang.AssertionError: The top-reader used to create Weight > (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) > docBase=0 ord=0)) is not the same as the current reader's top-reader > (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) > at > __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) > at > org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) > at > org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:519) > at >
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) > at > 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) > at >
[jira] [Commented] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953256#comment-14953256 ] ASF subversion and git services commented on LUCENE-6836: - Commit 1708152 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1708152 ] LUCENE-6836: Disable TestBlockJoinSorter. > TestBlockJoinSorter test failure > > > Key: LUCENE-6836 > URL: https://issues.apache.org/jira/browse/LUCENE-6836 > Project: Lucene - Core > Issue Type: Bug >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > > http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ > {noformat} > java.lang.AssertionError: The top-reader used to create Weight > (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) > docBase=0 ord=0)) is not the same as the current reader's top-reader > (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) > at > __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) > at > org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) > at > org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) > at > org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:519) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) > at > 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) > at >
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60) - Build # 14212 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14212/ Java: 32bit/jdk1.8.0_60 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([4576C273829A13AA:5615F01CB3F5AA0C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr
[ https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953074#comment-14953074 ] ASF subversion and git services commented on SOLR-7888: --- Commit 1708103 from jan...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1708103 ] SOLR-7888: Analyzing suggesters can now filter suggestions by a context field (backport) > Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a > BooleanQuery filter parameter available in Solr > -- > > Key: SOLR-7888 > URL: https://issues.apache.org/jira/browse/SOLR-7888 > Project: Solr > Issue Type: New Feature > Components: Suggester >Affects Versions: 5.2.1 >Reporter: Arcadius Ahouansou >Assignee: Jan Høydahl > Fix For: 5.4 > > Attachments: SOLR-7888-7963.patch, SOLR-7888.patch, SOLR-7888.patch > > > LUCENE-6464 has introduced a very flexible lookup method that takes as > parameter a BooleanQuery that is used for filtering results. > This ticket is to expose that method to Solr. > This would allow user to do: > {code} > /suggest?suggest=true=true=term=contexts:tennis > /suggest?suggest=true=true=term=contexts:golf > AND contexts:football > {code} > etc > Given that the context filtering in currently only implemented by the > {code}AnalyzingInfixSuggester{code} and by the > {code}BlendedInfixSuggester{code}, this initial implementation will support > only these 2 lookup implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953084#comment-14953084 ] Jack Krupansky edited comment on LUCENE-6301 at 10/12/15 12:58 PM: --- I know this change has been in progress for awhile, but it just kind of sunk for me finally and now I'm wondering what the impact on Solr will be. I mean, wasn't Filter supposed to be a big performance win over a Query since it eliminates the performance impact of scoring? If that was the case, is Lucene proving some alternate method of achieving a similar performance improvement? I think it is, but... not stated quite so explicitly. An example of the expected migration would help a lot. I think the example should be in the Lucene Javadoc - "To filter documents without the performance overhead of scoring, use the following technique..." If I understand properly, one would simply wrap the query in a BooleanQuery with a single clause that uses BooleanQuery.Clause.FILTER and that would have exactly the same effect (and performance gain) as the old Filter class. Is that statement 100% accurate? If so, it would be good to make it explicit here in Jira, in the deprecation comment in the the Filter class, and in BooleanQuery as well. Thanks! was (Author: jkrupan): I know this change has been in progress for awhile, but it just kind of sunk for me finally in and now I'm wondering what the impact on Solr will be. I mean, wasn't Filter supposed to be a big performance win over a Query since it eliminates the performance impact of scoring? If that was the case, is Lucene proving some alternate method of achieving a similar performance improvement? I think it is, but... not stated quite so explicitly. An example of the expected migration would help a lot. I think the example should be in the Lucene Javadoc - "To filter documents without the performance overhead of scoring, use the following technique..." 
If I understand properly, one would simply wrap the query in a BooleanQuery with a single clause that uses BooleanQuery.Clause.FILTER and that would have exactly the same effect (and performance gain) as the old Filter class. Is that statement 100% accurate? If so, it would be good to make it explicit here in Jira, in the deprecation comment in the the Filter class, and in BooleanQuery as well. Thanks! > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953084#comment-14953084 ] Jack Krupansky commented on LUCENE-6301: I know this change has been in progress for awhile, but it just kind of sunk for me finally in and now I'm wondering what the impact on Solr will be. I mean, wasn't Filter supposed to be a big performance win over a Query since it eliminates the performance impact of scoring? If that was the case, is Lucene proving some alternate method of achieving a similar performance improvement? I think it is, but... not stated quite so explicitly. An example of the expected migration would help a lot. I think the example should be in the Lucene Javadoc - "To filter documents without the performance overhead of scoring, use the following technique..." If I understand properly, one would simply wrap the query in a BooleanQuery with a single clause that uses BooleanQuery.Clause.FILTER and that would have exactly the same effect (and performance gain) as the old Filter class. Is that statement 100% accurate? If so, it would be good to make it explicit here in Jira, in the deprecation comment in the the Filter class, and in BooleanQuery as well. Thanks! > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14503 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14503/ Java: 32bit/jdk1.8.0_60 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([1FF4FA9607088058:C97C8F9366739FE]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr
[ https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953126#comment-14953126 ] Jan Høydahl commented on SOLR-7888: --- Added documentation to refguide: https://cwiki.apache.org/confluence/display/solr/Suggester > Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a > BooleanQuery filter parameter available in Solr > -- > > Key: SOLR-7888 > URL: https://issues.apache.org/jira/browse/SOLR-7888 > Project: Solr > Issue Type: New Feature > Components: Suggester >Affects Versions: 5.2.1 >Reporter: Arcadius Ahouansou >Assignee: Jan Høydahl > Fix For: 5.4 > > Attachments: SOLR-7888-7963.patch, SOLR-7888.patch, SOLR-7888.patch > > > LUCENE-6464 has introduced a very flexible lookup method that takes as > parameter a BooleanQuery that is used for filtering results. > This ticket is to expose that method to Solr. > This would allow user to do: > {code} > /suggest?suggest=true=true=term=contexts:tennis > /suggest?suggest=true=true=term=contexts:golf > AND contexts:football > {code} > etc > Given that the context filtering in currently only implemented by the > {code}AnalyzingInfixSuggester{code} and by the > {code}BlendedInfixSuggester{code}, this initial implementation will support > only these 2 lookup implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953047#comment-14953047 ] ASF subversion and git services commented on LUCENE-6301: - Commit 1708097 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1708097 ] LUCENE-6301: Removal of org.apache.lucene.Filter. >From a Lucene perspective Filter is gone. However it was still used for things like DocSet and SolrConstantScoreQuery in Solr, so it has been moved to the oas.search package for now, even though in the long term it would be nice for Solr to move to the Query API entirely as well. > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues
[ https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953060#comment-14953060 ] Ishan Chattopadhyaya commented on SOLR-5944: I am close to a patch for the above proposal, and shall post it soon. One place where I am somewhere stuck is in the log buffering/replaying part. Here's the problem: When a replica is put into recovery by the leader, it comes back up and tries to perform a peersync. This seems to be happening in a two phase process: buffering (where the updates, after being obtained from the leader's tlog, are played back and written to the replica's tlog but not its index/ulog) and replaying (where the tlog is replayed and the updates are written to ulog/index, but not into tlog again). The problem I'm facing is that during this buffering phase, the inplace updates can't find dependent updates if they are not in the index, since the updates are not written to ulog in the buffering phase. I have two choices at the moment to get around this: # During a buffering phase, I can keep a separate map of all updates (id to tlog pointer) to be used during and discarded after the buffering phase. That map can help resolve inplace updates that follow. (Pro: fast, Con: memory) # For every inplace update, I traverse back into the tlog and linearly scan for the required dependent update. (Pro: no memory, Con: Slow / O(n)) At this point, I'm inclined to go for option 1, but I'm wondering if there are any serious downsides to doing this. Any suggestions, please? Also, am I correct in my assumption that the no. of updates processed during this buffering phase will not be more than {{numUpdatesToKeep}}? In case I sound confused/unclear, please let me know and I'll post the relevant failing test for this. 
> Support updates of numeric DocValues > > > Key: SOLR-5944 > URL: https://issues.apache.org/jira/browse/SOLR-5944 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Shalin Shekhar Mangar > Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, > SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, > SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, > SOLR-5944.patch > > > LUCENE-5189 introduced support for updates to numeric docvalues. It would be > really nice to have Solr support this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953124#comment-14953124 ] Adrien Grand commented on LUCENE-6301: -- bq. wasn't Filter supposed to be a big performance win over a Query since it eliminates the performance impact of scoring? If that was the case, is Lucene proving some alternate method of achieving a similar performance improvement? Over the past releases, we progressively improved to Query/Collector API so that queries can detect whether scores are needed and optimize in case scores are not needed in order to eg. avoid to read frequencies or stop after the first occurence is found in the case of phrase queries (LUCENE-6218). Everything is detected automatically now, for instance if you wrap a query in a ConstantScoreQuery, it will automatically notice that scores are not needed. If you sort by the value of a field and don't request scores, then again it will notice that scores are not needed and optimize query execution. Something else that Filters provided but not queries was random-access support. But it was a bit incomplete since Filters had no way to tell FilteredQuery if they should rather be consumed using iteration or random-access and making the wrong decision could sometimes result in super slow queries that would try to call advance() on a DocValuesRangeQuery which doesn't use an index and needs to perform a linear scan in order to locate the next match. So we added two-phase iteration support to queries (LUCENE-6198) which allows us to dissert queries into a fast approximation and a slow verification phase. For instance, a phrase query "A B" would return the conjunction (+A +B) as an approximation and check if it can find the two terms at consecutive positions as a verification phase. bq. that would have exactly the same effect (and performance gain) as the old Filter class. Is that statement 100% accurate? 
If you use a query that provides an efficient approximation (such as phrase queries) as a filter, things could be considerably faster. Otherwise, things will mostly work the same way as before and you could have slight speedups or slowdowns given that we use different code paths that hotspot might optimize differently. I will look into the deprecation comments for Filter. > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr
[ https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl resolved SOLR-7888. --- Resolution: Fixed Fix Version/s: Trunk > Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a > BooleanQuery filter parameter available in Solr > -- > > Key: SOLR-7888 > URL: https://issues.apache.org/jira/browse/SOLR-7888 > Project: Solr > Issue Type: New Feature > Components: Suggester >Affects Versions: 5.2.1 >Reporter: Arcadius Ahouansou >Assignee: Jan Høydahl > Fix For: 5.4, Trunk > > Attachments: SOLR-7888-7963.patch, SOLR-7888.patch, SOLR-7888.patch > > > LUCENE-6464 has introduced a very flexible lookup method that takes as > parameter a BooleanQuery that is used for filtering results. > This ticket is to expose that method to Solr. > This would allow user to do: > {code} > /suggest?suggest=true=true=term=contexts:tennis > /suggest?suggest=true=true=term=contexts:golf > AND contexts:football > {code} > etc > Given that the context filtering in currently only implemented by the > {code}AnalyzingInfixSuggester{code} and by the > {code}BlendedInfixSuggester{code}, this initial implementation will support > only these 2 lookup implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7890) By default require admin rights to access /security.json in ZK
[ https://issues.apache.org/jira/browse/SOLR-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953130#comment-14953130 ] Jan Høydahl commented on SOLR-7890: --- Happy to get more input on this approach to securing {{security.json}} > By default require admin rights to access /security.json in ZK > -- > > Key: SOLR-7890 > URL: https://issues.apache.org/jira/browse/SOLR-7890 > Project: Solr > Issue Type: Sub-task > Components: security >Reporter: Jan Høydahl >Assignee: Jan Høydahl > Fix For: Trunk > > Attachments: SOLR-7890.patch > > > Perhaps {{VMParamsAllAndReadonlyDigestZkACLProvider}} should by default > require admin access for read/write of {{/security.json}}, and other > sensitive paths. Today this is left to the user to implement. > Also, perhaps factor out the already-known sensitive paths into a separate > class, so that various {{ACLProvider}} implementations can get a list of > paths that should be admin-only, read-only etc from one central place. Then > 3rd party impls pulling ZK creds from elsewhere will still do the right thing > in the future if we introduce other sensitive Znodes... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8158) Analytics module not supporting multi-valued int values
[ https://issues.apache.org/jira/browse/SOLR-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaroslaw Rozanski updated SOLR-8158: Priority: Major (was: Critical) > Analytics module not supporting multi-valued int values > --- > > Key: SOLR-8158 > URL: https://issues.apache.org/jira/browse/SOLR-8158 > Project: Solr > Issue Type: Bug > Components: SearchComponents - other >Affects Versions: 5.3.1 >Reporter: Jaroslaw Rozanski > > Despite documentation in SOLR-5302 it is not possible execute *olap* request > against multi-valued *int* field. > It does not matter whether field is {{docValues}} or not (full re-index > between changes). > Solr: 5.3.1 > Lucene match version: 5.3.1 > Schema version: 1.5 > Field definition: > {code} > multiValued="true"/> > {code} > Query: > {code} > q=*:*=true=unique(valueId) > {code} > Error: > {code} > java.lang.IllegalStateException: unexpected docvalues type SORTED_SET for > field 'valueId' (expected=NUMERIC). Use UninvertingReader or index with > docvalues. 
at > org.apache.lucene.index.DocValues.checkField(DocValues.java:208) at > org.apache.lucene.index.DocValues.getNumeric(DocValues.java:227) at > org.apache.lucene.queries.function.valuesource.IntFieldSource.getValues(IntFieldSource.java:56) > at > org.apache.solr.analytics.statistics.MinMaxStatsCollector.setNextReader(MinMaxStatsCollector.java:50) > at > org.apache.solr.analytics.statistics.AbstractDelegatingStatsCollector.setNextReader(AbstractDelegatingStatsCollector.java:47) > at > org.apache.solr.analytics.accumulator.BasicAccumulator.doSetNextReader(BasicAccumulator.java:86) > at > org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33) > at > org.apache.solr.analytics.request.AnalyticsStats.execute(AnalyticsStats.java:116) > at > org.apache.solr.handler.component.AnalyticsComponent.process(AnalyticsComponent.java:44) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:277) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068) at > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669) at > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462) at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) > at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:499) at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) > at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
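[Editorial note] The top of the SOLR-8158 trace shows where the failure comes from: Lucene's {{DocValues.checkField}} refuses to hand back NUMERIC doc values for a field that was actually indexed as SORTED_SET, which is how Solr stores a multi-valued docValues int field, while {{IntFieldSource}} asks for plain NUMERIC. The following is a simplified, self-contained sketch of that type check (the class name {{DocValuesCheckSketch}} is hypothetical; the real logic lives in {{org.apache.lucene.index.DocValues}}):

```java
// Simplified model of Lucene's docvalues type check. A multiValued="true"
// int field with docValues is stored as SORTED_SET, but the analytics
// ValueSource requests NUMERIC, so the check throws.
public class DocValuesCheckSketch {
    enum DocValuesType { NUMERIC, SORTED_SET, SORTED_NUMERIC }

    // Mirrors the shape of DocValues.checkField(reader, field, expected)
    static void checkField(String field, DocValuesType actual, DocValuesType expected) {
        if (actual != expected) {
            throw new IllegalStateException("unexpected docvalues type " + actual
                + " for field '" + field + "' (expected=" + expected
                + "). Use UninvertingReader or index with docvalues.");
        }
    }

    public static void main(String[] args) {
        try {
            // valueId is multi-valued, hence SORTED_SET; NUMERIC is requested.
            checkField("valueId", DocValuesType.SORTED_SET, DocValuesType.NUMERIC);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why re-indexing without changing the field's {{multiValued}} setting cannot help: the mismatch is between how the field is stored and what the analytics code requests.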
[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953114#comment-14953114 ] Robert Muir commented on LUCENE-6276: - {quote} As to TwoPhaseIterator or DocIdSetIterator, I think this boils down to whether the leading iterator in ConjunctionDISI should be chosen using the expected number of matching docs only, or also using the totalTermFreq's somehow. This is for more complex queries, for example a conjunction with at least one phrase or SpanNearQuery. But for the more complex queries two phase approximation is already in place, so having matchCost() only in the two phase code could be enough even for these queries. {quote} Yes, to keep things simple, I imagined this API would just be the cost of calling {{matches()}} itself, so I think the two-phase API is the correct place to put it (like in your patch). We already have a {{cost()}} API for DISI for doing things like conjunctions (yes, it's purely based on density, and maybe that is imperfect), but I think we should try to narrow the scope of this issue to just the cost of the {{matches()}} operation, which can vary wildly depending on query type or document size. What Adrien says about "likelihood of match" is also interesting, but I think we want to defer that too. To me that is just a matter of having more accurate {{cost()}}, and it may not be easy or feasible to improve... > Add matchCost() api to TwoPhaseDocIdSetIterator > --- > > Key: LUCENE-6276 > URL: https://issues.apache.org/jira/browse/LUCENE-6276 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Robert Muir > Attachments: LUCENE-6276-ExactPhraseOnly.patch > > > We could add a method like TwoPhaseDISI.matchCost() defined as something like > an estimate of nanoseconds or similar. > ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array > so that cheaper ones are called first. 
Today it has no idea if one scorer is > a simple phrase scorer on a short field vs another that might do some geo > calculation or more expensive stuff. > PhraseScorers could implement this based on index statistics (e.g. > totalTermFreq/maxDoc)
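[Editorial note] The sorting idea in this issue is easy to sketch. The following is a simplified, self-contained model (the names {{MatchCostSketch}}, {{TwoPhase}}, {{sortByCost}} and {{confirm}} are hypothetical, not Lucene's actual classes): a conjunction keeps its two-phase checks ordered by estimated cost, so an expensive {{matches()}} (geo math, a long phrase) only runs when every cheaper check has already passed.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of sorting two-phase verification steps by match cost, so the
// cheapest matches() is called first and can veto a doc before any
// expensive verification runs.
public class MatchCostSketch {
    interface TwoPhase {
        boolean matches(int doc); // potentially expensive verification
        float matchCost();        // estimated cost of one matches() call
    }

    static TwoPhase[] sortByCost(TwoPhase[] phases) {
        TwoPhase[] sorted = phases.clone();
        Arrays.sort(sorted, Comparator.comparingDouble(TwoPhase::matchCost));
        return sorted;
    }

    // All approximations agreed on this doc; confirm with each two-phase
    // check, cheapest first, bailing out on the first failure.
    static boolean confirm(TwoPhase[] sortedPhases, int doc) {
        for (TwoPhase p : sortedPhases) {
            if (!p.matches(doc)) return false;
        }
        return true;
    }
}
```

The payoff is exactly the one described above: when the cheap check rejects a doc, the expensive check is never invoked at all.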
[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails
[ https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953031#comment-14953031 ] Luc Vanlerberghe commented on SOLR-8050: Contrary to the components list in the original report, this is not a SolrJ issue but a bug in the update logic in Solr core itself. @reger: I didn't submit the original report, so I cannot update it. Could you update it to increase the likelihood that a committer picks it up? I'm having a go at it, but I'm not familiar with the internals of Solr atomic updates... > Partial update on document with multivalued date field fails > > > Key: SOLR-8050 > URL: https://issues.apache.org/jira/browse/SOLR-8050 > Project: Solr > Issue Type: Bug > Components: clients - java, SolrJ >Affects Versions: 5.2.1 > Environment: embedded solr > java 1.7 > win >Reporter: Burkhard Buelte > Attachments: screenshot-1.png > > > When updating a document with a multivalued date field, Solr throws an exception > like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > even if the update document doesn't contain any date field. > See the following code snippet to reproduce: > 1. create a doc with a multivalued date field (here dynamic field _dts) > SolrInputDocument doc = new SolrInputDocument(); > String id = Long.toString(System.currentTimeMillis()); > System.out.println("testUpdate: adding test document to solr ID=" + > id); > doc.addField(CollectionSchema.id.name(), id); > doc.addField(CollectionSchema.title.name(), "Lorem ipsum"); > doc.addField(CollectionSchema.host_s.name(), "yacy.net"); > doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit > amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut > labore et dolore magna aliqua."); > doc.addField(CollectionSchema.dates_in_content_dts.name(), new > Date()); > solr.add(doc); > solr.commit(true); > 2. 
update any field on this doc via partial update > SolrInputDocument sid = new SolrInputDocument(); > sid.addField(CollectionSchema.id.name(), > doc.getFieldValue(CollectionSchema.id.name())); > sid.addField(CollectionSchema.host_s.name(), "yacy.yacy"); > solr.update(sid); > solr.commit(true); > Result > Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87) > at > org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473) > at org.apache.solr.schema.TrieField.createFields(TrieField.java:715) > at > org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48) > at > org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123) > at > org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163) > at > org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706) > at > org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > at > org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207) > at > 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250) > at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177) > at > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068) > at > org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179) > at >
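[Editorial note] The failing string 'Mon Sep 14 01:48:38 CEST 2015' is the default {{java.util.Date.toString()}} form, while Solr's {{DateFormatUtil.parseMath}} expects ISO-8601 text like {{2015-09-14T01:48:38Z}}; the atomic update apparently round-trips the stored multi-valued date through that {{toString()}} form. A self-contained illustration of the format mismatch (the class name {{DateFormatSketch}} is hypothetical; this is not Solr code):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Shows the two representations involved in the error: Date.toString()
// produces "Mon Sep 14 01:48:38 CEST 2015"-style text, while Solr's
// TrieDateField expects ISO-8601 such as "2015-09-14T01:48:38Z".
public class DateFormatSketch {
    static String isoFormat(Date d) {
        SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
        iso.setTimeZone(TimeZone.getTimeZone("UTC"));
        return iso.format(d);
    }

    public static void main(String[] args) {
        Date now = new Date();
        System.out.println("toString(): " + now);            // locale/zone-dependent text
        System.out.println("ISO-8601:   " + isoFormat(now)); // the form Solr can parse
    }
}
```

Feeding the first form back into a date field is exactly what produces the "Invalid Date String" exception in the trace above.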
[jira] [Commented] (SOLR-8139) Provide a way for the admin UI to utilize managed schema functionality
[ https://issues.apache.org/jira/browse/SOLR-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953072#comment-14953072 ] Jan Høydahl commented on SOLR-8139: --- This is cool! And we get extra testing of the APIs as well! > Provide a way for the admin UI to utilize managed schema functionality > -- > > Key: SOLR-8139 > URL: https://issues.apache.org/jira/browse/SOLR-8139 > Project: Solr > Issue Type: Improvement > Components: UI >Reporter: Erick Erickson >Assignee: Upayavira > Attachments: SOLR-8139.patch, add-field-with-errors.png, > add-field-with-omit-open.png, add-field.png > > > See the discussion at the related SOLR-8131. The suggestion there is to make > managed schema the default in 6.0. To make the new-user experience much > smoother in that setup, it would be great if the admin UI had a simple > wrapper around the managed schema API. > It would be a fine thing to have a way of bypassing the whole "find the magic > config set, edit it in your favorite editor, figure out how to upload it via > zkcli then reload the collection" current paradigm and instead be able to > update the schema via the admin UI. > This should bypass the issues with uploading arbitrary XML to the server that > shot down one of the other attempts to edit the schema from the admin UI. > This is mostly a marker. This could be a significant differentiator between > the old and new admin UIs.
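[Editorial note] The managed Schema API the UI would wrap accepts JSON commands such as {{add-field}} via POST to {{/solr/<collection>/schema}}. A minimal sketch of building such a command ({{SchemaCommandSketch}} is a hypothetical helper; the field name and type below are illustrative, not from the issue):

```java
// Builds the JSON body for a Schema API "add-field" command; a UI wrapper
// would POST this to http://localhost:8983/solr/<collection>/schema.
public class SchemaCommandSketch {
    static String addFieldCommand(String name, String type, boolean stored) {
        return "{\"add-field\": {"
            + "\"name\": \"" + name + "\", "
            + "\"type\": \"" + type + "\", "
            + "\"stored\": " + stored + "}}";
    }

    public static void main(String[] args) {
        System.out.println(addFieldCommand("author", "text_general", true));
    }
}
```

Because the payload is structured JSON rather than a full schema file, this avoids the arbitrary-XML-upload concern mentioned in the issue.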
[jira] [Created] (SOLR-8158) Analytics module not supporting multi-valued int values
Jaroslaw Rozanski created SOLR-8158: --- Summary: Analytics module not supporting multi-valued int values Key: SOLR-8158 URL: https://issues.apache.org/jira/browse/SOLR-8158 Project: Solr Issue Type: Bug Components: SearchComponents - other Affects Versions: 5.3.1 Reporter: Jaroslaw Rozanski Priority: Critical Despite the documentation in SOLR-5302, it is not possible to execute an *olap* request against a multi-valued *int* field. It does not matter whether the field is {{docValues}} or not (full re-index between changes). Solr: 5.3.1 Lucene match version: 5.3.1 Schema version: 1.5 Field definition: {code} {code} Query: {code} q=*:*=true=unique(valueId) {code} Error: {code} java.lang.IllegalStateException: unexpected docvalues type SORTED_SET for field 'valueId' (expected=NUMERIC). Use UninvertingReader or index with docvalues. at org.apache.lucene.index.DocValues.checkField(DocValues.java:208) at org.apache.lucene.index.DocValues.getNumeric(DocValues.java:227) at org.apache.lucene.queries.function.valuesource.IntFieldSource.getValues(IntFieldSource.java:56) at org.apache.solr.analytics.statistics.MinMaxStatsCollector.setNextReader(MinMaxStatsCollector.java:50) at org.apache.solr.analytics.statistics.AbstractDelegatingStatsCollector.setNextReader(AbstractDelegatingStatsCollector.java:47) at org.apache.solr.analytics.accumulator.BasicAccumulator.doSetNextReader(BasicAccumulator.java:86) at org.apache.lucene.search.SimpleCollector.getLeafCollector(SimpleCollector.java:33) at org.apache.solr.analytics.request.AnalyticsStats.execute(AnalyticsStats.java:116) at org.apache.solr.handler.component.AnalyticsComponent.process(AnalyticsComponent.java:44) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:277) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669) at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:499) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: 
dev-h...@lucene.apache.org
[jira] [Moved] (SOLR-8159) Tokenizing Chinese strings using lucene Chinese analyzer
[ https://issues.apache.org/jira/browse/SOLR-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gavin moved INFRA-10577 to SOLR-8159: - INFRA-Members: (was: [infrastructure-team]) Workflow: classic default workflow (was: INFRA Workflow) Key: SOLR-8159 (was: INFRA-10577) Project: Solr (was: Infrastructure) > Tokenizing Chinese strings using lucene Chinese analyzer > > > Key: SOLR-8159 > URL: https://issues.apache.org/jira/browse/SOLR-8159 > Project: Solr > Issue Type: Bug >Reporter: Srimanth Bangalore Krishnamurthy >Priority: Minor > > The text that is indexed: 校准的卡尔曼滤波器 > Query string: 卡尔曼滤波 > The exact query string is present in an indexed document on Solr, but it > doesn't return this document. > Solr analysis shows on index: > 的卡 > 尔 > 曼 > 滤波器 > but the queried terms show: > 卡 > 尔 > 曼 > 滤波 > The other characters appear to be influencing how 卡尔曼滤波 is tokenized. > Is this expected behavior? > Here are the things I have tried: > 1) I tried a couple of different tokenizers and the behavior is the same. > 2) I tried to explore the option of a dictionary, but I found this: > https://issues.apache.org/jira/browse/LUCENE-1817 > 3) I tried using the following with text_zh for Chinese documents: > a) solr.KeywordMarkerFilterFactory > b) solr.StemmerOverrideFilterFactory > c) Adding to synonyms.txt > All these seem to work only with text_en and have no effect for text_zh. > Are there any options I can try to make sure that the query returns this > document?
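[Editorial note] The behavior above is expected for dictionary/HMM-based segmentation: the segmenter uses surrounding characters as context, so the same substring can tokenize differently in the document and the query. One commonly suggested alternative (a trade-off, since it changes matching semantics and index size) is overlapping character bigrams, as produced by Lucene's CJKBigramFilter, which are context-independent. A self-contained sketch of why bigrams make this query match ({{BigramSketch}} is a hypothetical illustration, not the actual filter):

```java
import java.util.ArrayList;
import java.util.List;

// Overlapping character bigrams are context-independent: every bigram of a
// query string also occurs among the bigrams of any indexed string that
// contains it, so the document above would match.
public class BigramSketch {
    static List<String> bigrams(String s) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + 1 < s.length(); i++) {
            out.add(s.substring(i, i + 2));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> indexed = bigrams("校准的卡尔曼滤波器"); // 校准, 准的, 的卡, 卡尔, 尔曼, 曼滤, 滤波, 波器
        List<String> query = bigrams("卡尔曼滤波");         // 卡尔, 尔曼, 曼滤, 滤波
        System.out.println(indexed.containsAll(query));    // true
    }
}
```

With the HMM segmentation shown in the issue, the index holds 的卡 while the query produces 卡, so the terms never line up; with bigrams, both sides produce 卡尔 / 尔曼 / 曼滤 / 滤波.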
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953174#comment-14953174 ] Jack Krupansky commented on LUCENE-6301: Thanks! LGTM. Now let's see if the Solr guys pick up on this. > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible?
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2797 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2797/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.test Error Message: KeeperErrorCode = Session expired for /clusterstate.json Stack Trace: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /clusterstate.json at __randomizedtesting.SeedInfo.seed([4038303D36D06F39:C86C0FE7982C02C1]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353) at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61) at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350) at org.apache.solr.common.cloud.ZkStateReader.refreshLegacyClusterState(ZkStateReader.java:472) at org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:256) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:146) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:830) at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.test(DeleteLastCustomShardedReplicaTest.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at
[jira] [Commented] (LUCENE-6821) TermQuery's constructors should clone the incoming term
[ https://issues.apache.org/jira/browse/LUCENE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953138#comment-14953138 ] Tommaso Teofili commented on LUCENE-6821: - do you mean the {{BytesRef.deepCopyOf}} at https://github.com/apache/lucene-solr/blob/trunk/lucene/classification/src/java/org/apache/lucene/classification/SimpleNaiveBayesClassifier.java#L154 ? yes, that's because the reference is updated and used in the {{ClassificationResult}}. I'll see if I can simplify that. > TermQuery's constructors should clone the incoming term > --- > > Key: LUCENE-6821 > URL: https://issues.apache.org/jira/browse/LUCENE-6821 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-6821.patch, LUCENE-6821.patch > > > This is a follow-up of LUCENE-6435: the bug stems from the fact that you can > build term queries out of shared BytesRef objects (such as the ones returned > by TermsEnum.next), which is a bit trappy. If TermQuery's constructors would > clone the incoming term, we wouldn't have this trap.
[jira] [Comment Edited] (LUCENE-6821) TermQuery's constructors should clone the incoming term
[ https://issues.apache.org/jira/browse/LUCENE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953138#comment-14953138 ] Tommaso Teofili edited comment on LUCENE-6821 at 10/12/15 1:52 PM: --- do you mean the {{BytesRef.deepCopyOf}} at https://github.com/apache/lucene-solr/blob/trunk/lucene/classification/src/java/org/apache/lucene/classification/SimpleNaiveBayesClassifier.java#L154 ? If yes, that's because the reference is updated and used in the {{ClassificationResult}}. I'll see if I can simplify that. was (Author: teofili): do you mean the {{BytesRef.deepCopyOf}} at https://github.com/apache/lucene-solr/blob/trunk/lucene/classification/src/java/org/apache/lucene/classification/SimpleNaiveBayesClassifier.java#L154 ? yes, that's because the reference is updated and used in the {{ClassificationResult}}. I'll see if I can simplify that. > TermQuery's constructors should clone the incoming term > --- > > Key: LUCENE-6821 > URL: https://issues.apache.org/jira/browse/LUCENE-6821 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-6821.patch, LUCENE-6821.patch > > > This is a follow-up of LUCENE-6435: the bug stems from the fact that you can > build term queries out of shared BytesRef objects (such as the ones returned > by TermsEnum.next), which is a bit trappy. If TermQuery's constructors would > clone the incoming term, we wouldn't have this trap.
[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails
[ https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953149#comment-14953149 ] Luc Vanlerberghe commented on SOLR-8050: P.S.: I updated the pull request so the original link to the patch ([https://github.com/apache/lucene-solr/pull/202.patch]) now includes the fix. > Partial update on document with multivalued date field fails > > > Key: SOLR-8050 > URL: https://issues.apache.org/jira/browse/SOLR-8050 > Project: Solr > Issue Type: Bug > Components: clients - java, SolrJ >Affects Versions: 5.2.1 > Environment: embedded solr > java 1.7 > win >Reporter: Burkhard Buelte > Attachments: screenshot-1.png > > > When updating a document with a multivalued date field, Solr throws an exception > like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > even if the update document doesn't contain any date field. > See the following code snippet to reproduce: > 1. create a doc with a multivalued date field (here dynamic field _dts) > SolrInputDocument doc = new SolrInputDocument(); > String id = Long.toString(System.currentTimeMillis()); > System.out.println("testUpdate: adding test document to solr ID=" + > id); > doc.addField(CollectionSchema.id.name(), id); > doc.addField(CollectionSchema.title.name(), "Lorem ipsum"); > doc.addField(CollectionSchema.host_s.name(), "yacy.net"); > doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit > amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut > labore et dolore magna aliqua."); > doc.addField(CollectionSchema.dates_in_content_dts.name(), new > Date()); > solr.add(doc); > solr.commit(true); > 2. 
update any field on this doc via partial update > SolrInputDocument sid = new SolrInputDocument(); > sid.addField(CollectionSchema.id.name(), > doc.getFieldValue(CollectionSchema.id.name())); > sid.addField(CollectionSchema.host_s.name(), "yacy.yacy"); > solr.update(sid); > solr.commit(true); > Result > Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87) > at > org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473) > at org.apache.solr.schema.TrieField.createFields(TrieField.java:715) > at > org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48) > at > org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123) > at > org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163) > at > org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706) > at > org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > at > org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207) > at > 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250) > at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177) > at > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068) > at > org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179) > at > org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174) > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:191) > P.S.
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 983 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/983/ 13 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.TestSolrCoreProperties Error Message: ObjectTracker found 6 object(s) that were not released!!! [SolrZkClient] Stack Trace: java.lang.AssertionError: ObjectTracker found 6 object(s) that were not released!!! [SolrZkClient] at __randomizedtesting.SeedInfo.seed([628DB96865AB7A13]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.test Error Message: KeeperErrorCode = Session expired for /solr Stack Trace: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /solr at __randomizedtesting.SeedInfo.seed([628DB96865AB7A13:EAD986B2CB5717EB]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:294) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:291) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61) at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:291) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:486) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:403) at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:90) at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83) at 
org.apache.solr.cloud.AbstractDistribZkTestBase.distribSetUp(AbstractDistribZkTestBase.java:72) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribSetUp(AbstractFullDistribZkTestBase.java:197) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at
[jira] [Commented] (SOLR-7323) Core Admin API looks for config sets in wrong directory
[ https://issues.apache.org/jira/browse/SOLR-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953180#comment-14953180 ] Søren Dyrsting commented on SOLR-7323: -- Hi, I get the same issue as Mr Haase. After running install_solr_service.sh I run ln -s /opt/solr/server/solr/configsets /var/solr/data/configsets Then creating a core works using curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=new_core&configSet=[myPreferredConfigSet]' I still have to customize my configset in /var/solr/data/configsets/. I really don't know what the intention is, whether the live config sets should be located in the HOME or the INSTALL directory, so it's just a workaround. > Core Admin API looks for config sets in wrong directory > --- > > Key: SOLR-7323 > URL: https://issues.apache.org/jira/browse/SOLR-7323 > Project: Solr > Issue Type: Bug > Components: Server >Affects Versions: 5.0 >Reporter: Mark Haase > > *To Reproduce* > Try to create a core using the Core Admin API and a config set: > {code} > curl > 'http://localhost:8983/solr/admin/cores?action=CREATE&name=new_core&configSet=basic_configs' > {code} > *Expected Outcome* > Core is created in `/var/solr/data/new_core` using one of the config sets > installed by the installer script in > `/opt/solr/server/solr/configsets/basic_configs`. > *Actual Outcome* > {code} > <response> > <lst name="responseHeader"><int name="status">400</int><int name="QTime">9</int></lst> > <lst name="error"><str name="msg">Error CREATEing SolrCore 'new_core': Unable to create core [new_core] Caused by: Could not load configuration from directory /var/solr/data/configsets/basic_configs</str><int name="code">400</int></lst> > </response> > {code} > Why is it looking for config sets in /var/solr/data? I don't know. If that's > where configsets are supposed to be placed, then why does the installer put > them somewhere else? > There's no documented API to tell it to look for config sets anywhere else, > either. It will always search inside /var/solr/data. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
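The path mismatch reported above can be sketched in isolation (a minimal, hypothetical illustration: the `resolveConfigSet` helper is not Solr code, only the two directories from the report are taken as given):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigSetLookup {
    // Hypothetical helper (not Solr code): named configsets are resolved
    // relative to SOLR_HOME, not relative to the installation directory.
    static Path resolveConfigSet(Path solrHome, String name) {
        return solrHome.resolve("configsets").resolve(name);
    }

    public static void main(String[] args) {
        Path serviceHome = Paths.get("/var/solr/data"); // SOLR_HOME set up by the install script
        // The lookup lands on the path shown in the error message...
        System.out.println(resolveConfigSet(serviceHome, "basic_configs"));
        // ...while the installer actually ships the configsets under
        // /opt/solr/server/solr/configsets, hence the symlink workaround above.
    }
}
```

The symlink makes the two trees coincide, which is why core creation succeeds afterwards.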
[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails
[ https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953145#comment-14953145 ] Luc Vanlerberghe commented on SOLR-8050: I managed to fix it (at least it seems to be ok now without breaking any other tests). The Date object did contain a correct value, but Date.toString() confusingly uses the default time zone (see [Java SE 8 Date and Time|http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html]: ??... For example, java.util.Date represents an instant on the timeline—a wrapper around the number of milli-seconds since the UNIX epoch—but if you call toString(), the result suggests that it has a time zone, causing confusion among developers.??). The bug was introduced more than two years ago when adding support for multivalued docvalues. The old code calls {{readableToIndexed}} on {{value.toString()}}, which works for most TrieField types, except when value is a Date object obtained from reading the old value during an update. Since a little higher up the code already constructs a correct StorableField, I changed it to use {{storeableToIndexed}} instead. > Partial update on document with multivalued date field fails > > > Key: SOLR-8050 > URL: https://issues.apache.org/jira/browse/SOLR-8050 > Project: Solr > Issue Type: Bug > Components: clients - java, SolrJ >Affects Versions: 5.2.1 > Environment: embedded solr > java 1.7 > win >Reporter: Burkhard Buelte > Attachments: screenshot-1.png > > > When updating a document with a multivalued date field Solr throws an exception > like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > even if the update document doesn't contain any date field. > See following code snippet to reproduce > 1. 
create a doc with multivalued date field (here dynamic field _dts) > SolrInputDocument doc = new SolrInputDocument(); > String id = Long.toString(System.currentTimeMillis()); > System.out.println("testUpdate: adding test document to solr ID=" + > id); > doc.addField(CollectionSchema.id.name(), id); > doc.addField(CollectionSchema.title.name(), "Lorem ipsum"); > doc.addField(CollectionSchema.host_s.name(), "yacy.net"); > doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit > amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut > labore et dolore magna aliqua."); > doc.addField(CollectionSchema.dates_in_content_dts.name(), new > Date()); > solr.add(doc); > solr.commit(true); > 2. update any field on this doc via partial update > SolrInputDocument sid = new SolrInputDocument(); > sid.addField(CollectionSchema.id.name(), > doc.getFieldValue(CollectionSchema.id.name())); > sid.addField(CollectionSchema.host_s.name(), "yacy.yacy"); > solr.update(sid); > solr.commit(true); > Result > Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep > 14 01:48:38 CEST 2015' > at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87) > at > org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473) > at org.apache.solr.schema.TrieField.createFields(TrieField.java:715) > at > org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48) > at > org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123) > at > org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237) > at > org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163) > at > org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > 
at > org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706) > at > org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104) > at > org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51) > at > org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207) > at >
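The root cause described in the comment above, Date.toString() producing a zone-dependent non-ISO string that Solr's date parsing then rejects, can be demonstrated in isolation (a minimal sketch; the `iso` formatter below stands in for Solr's canonical ISO-8601 form and is not Solr code):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateRoundTrip {
    // Stand-in for Solr's canonical date representation (ISO-8601, UTC); not Solr code.
    static String iso(Date d) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L);
        // toString() uses the JVM's default time zone and a non-ISO layout,
        // e.g. "Thu Jan 01 01:00:00 CET 1970" -- the exact shape of the string
        // in the "Invalid Date String" error above.
        System.out.println(epoch.toString());
        // The form Solr can parse back:
        System.out.println(iso(epoch)); // 1970-01-01T00:00:00Z
    }
}
```

This is why round-tripping the stored Date through toString() during a partial update fails, while formatting (or, per the fix, reusing the already-built stored field) succeeds.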
[jira] [Commented] (LUCENE-6821) TermQuery's constructors should clone the incoming term
[ https://issues.apache.org/jira/browse/LUCENE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953160#comment-14953160 ] Tommaso Teofili commented on LUCENE-6821: - After a quick look it doesn't seem that removing the deep copy in favour of creating a new {{BytesRef}} would improve anything; actually it'd be slightly worse. I would say let's keep that. > TermQuery's constructors should clone the incoming term > --- > > Key: LUCENE-6821 > URL: https://issues.apache.org/jira/browse/LUCENE-6821 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-6821.patch, LUCENE-6821.patch > > > This is a follow-up of LUCENE-6435: the bug stems from the fact that you can > build term queries out of shared BytesRef objects (such as the ones returned > by TermsEnum.next), which is a bit trappy. If TermQuery's constructors would > clone the incoming term, we wouldn't have this trap. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b83) - Build # 14504 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/ Java: 64bit/jdk1.9.0-ea-b83 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CompileCommand=exclude,org.apache.directory.api.ldap.model.name.Dn::rdnOidToName 1 tests failed. FAILED: org.apache.lucene.index.TestBlockJoinSorter.test Error Message: The top-reader used to create Weight (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) docBase=0 ord=0)) is not the same as the current reader's top-reader (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) Stack Trace: java.lang.AssertionError: The top-reader used to create Weight (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293))) docBase=0 ord=0)) is not the same as the current reader's top-reader (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0) at __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0) at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62) at org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:519) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953147#comment-14953147 ] ASF subversion and git services commented on LUCENE-6301: - Commit 1708121 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1708121 ] LUCENE-6301: Deprecate Filter. > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6301) Deprecate Filter
[ https://issues.apache.org/jira/browse/LUCENE-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953148#comment-14953148 ] Adrien Grand commented on LUCENE-6301: -- Jack, I just backported to 5.x. Feel free to review and suggest improvements if you feel that the migration path is not clear enough. > Deprecate Filter > > > Key: LUCENE-6301 > URL: https://issues.apache.org/jira/browse/LUCENE-6301 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Assignee: Adrien Grand > Fix For: 5.2, Trunk > > Attachments: LUCENE-6301.patch, LUCENE-6301.patch > > > It will still take time to completely remove Filter, but I think we should > start deprecating it now to state our intention and encourage users to move > to queries as soon as possible? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6821) TermQuery's constructors should clone the incoming term
[ https://issues.apache.org/jira/browse/LUCENE-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953479#comment-14953479 ] Paul Elschot commented on LUCENE-6821: -- One could also create the Term in the loop and pass that, or its Term.bytes(), around to the other methods. Term.bytes() can also be passed to the ClassificationResult. The patch here has this javadoc at Term.bytes(): /** Returns the bytes of this term, these should not be modified. */ > TermQuery's constructors should clone the incoming term > --- > > Key: LUCENE-6821 > URL: https://issues.apache.org/jira/browse/LUCENE-6821 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-6821.patch, LUCENE-6821.patch > > > This is a follow-up of LUCENE-6435: the bug stems from the fact that you can > build term queries out of shared BytesRef objects (such as the ones returned > by TermsEnum.next), which is a bit trappy. If TermQuery's constructors would > clone the incoming term, we wouldn't have this trap. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
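The shared-BytesRef trap this issue is about can be shown with plain byte arrays (a minimal sketch: the {{Term}} class below is a stand-in, not Lucene's org.apache.lucene.index.Term or BytesRef):

```java
import java.util.Arrays;

public class CloneTrap {
    // Stand-in for a query holding term bytes; not Lucene's actual Term class.
    static class Term {
        final byte[] bytes;
        Term(byte[] shared) { this.bytes = shared; }  // trappy: aliases the caller's buffer
        static Term cloned(byte[] shared) {           // what cloning the incoming term buys you
            return new Term(Arrays.copyOf(shared, shared.length));
        }
    }

    public static void main(String[] args) {
        byte[] buf = "foo".getBytes();   // imagine this is the enum's reused internal buffer
        Term aliased = new Term(buf);
        Term safe = Term.cloned(buf);
        buf[0] = 'b';                    // the enum advances and overwrites its buffer
        System.out.println(new String(aliased.bytes)); // "boo": the query silently changed
        System.out.println(new String(safe.bytes));    // "foo": the clone is unaffected
    }
}
```

Cloning in the constructor removes the trap at the cost of one copy per query, which is the trade-off being weighed in the comments above.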
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 14215 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14215/ Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([41B6AD34A33EF0BB:52D59F5B9251491D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (SOLR-8160) Terms query parser ignores query analysis
[ https://issues.apache.org/jira/browse/SOLR-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devansh Dhutia updated SOLR-8160: - Description: Field setup as {code} {code} Value sent to cs field for indexing include: AA, BB Following is observed # {code}{!terms f=cs}AA,BB{code} yields 0 results # {code}{!terms f=cs}aa,bb{code} yields 2 results # {code}=cs:(AA BB){code} yields 2 results # {code}=cs:(aa bb){code} yields 2 results 1 above should behave like the other parsers & obey query time analysis was: Field setup as {code} {code} Value sent to cs field for indexing include: AA, BB, CC Following is observed # {code}{!terms f=cs}AA,BB{code} yields 0 results # {code}{!terms f=cs}aa,bb{code} yields 2 results # {code}=cs:(AA BB){code} yields 2 results # {code}=cs:(aa bb){code} yields 2 results 1 above should behave like the other parsers & obey query time analysis > Terms query parser ignores query analysis > -- > > Key: SOLR-8160 > URL: https://issues.apache.org/jira/browse/SOLR-8160 > Project: Solr > Issue Type: Bug > Components: query parsers, search >Affects Versions: 5.3 >Reporter: Devansh Dhutia > > Field setup as > {code} > multiValued="false" required="false" /> > > > > > > > > > > > {code} > Value sent to cs field for indexing include: AA, BB > Following is observed > # {code}{!terms f=cs}AA,BB{code} yields 0 results > # {code}{!terms f=cs}aa,bb{code} yields 2 results > # {code}=cs:(AA BB){code} yields 2 results > # {code}=cs:(aa bb){code} yields 2 results > 1 above should behave like the other parsers & obey query time analysis -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8160) Terms query parser ignores query analysis
Devansh Dhutia created SOLR-8160: Summary: Terms query parser ignores query analysis Key: SOLR-8160 URL: https://issues.apache.org/jira/browse/SOLR-8160 Project: Solr Issue Type: Bug Components: query parsers, search Affects Versions: 5.3 Reporter: Devansh Dhutia Field setup as {code} {code} Value sent to cs field for indexing include: AA, BB, CC Following is observed # {code}{!terms f=cs}AA,BB{code} yields 0 results # {code}{!terms f=cs}aa,bb{code} yields 2 results # {code}=cs:(AA BB){code} yields 2 results # {code}=cs:(aa bb){code} yields 2 results 1 above should behave like the other parsers & obey query time analysis -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8160) Terms query parser ignores query analysis
[ https://issues.apache.org/jira/browse/SOLR-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953503#comment-14953503 ] Hoss Man commented on SOLR-8160: TermsQParser is behaving as intended & documented... https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-TermsQueryParser bq. ... functions similarly to the Term Query Parser but takes in multiple values separated by commas and returns documents matching any of the specified values. This can be useful for generating filter queries from the external human readable terms returned by the faceting or terms components, ... Changing this behavior to involve query time analysis would be a new feature request, and would need to be dependent on some other new localparam option to indicate when it should be enabled. > Terms query parser ignores query analysis > -- > > Key: SOLR-8160 > URL: https://issues.apache.org/jira/browse/SOLR-8160 > Project: Solr > Issue Type: Bug > Components: query parsers, search >Affects Versions: 5.3 >Reporter: Devansh Dhutia > > Field setup as > {code} > multiValued="false" required="false" /> > > > > > > > > > > > {code} > Value sent to cs field for indexing include: AA, BB > Following is observed > {code}={!terms f=cs}AA,BB{code} yields 0 results > {code}={!terms f=cs}aa,bb{code} yields 2 results > {code}=cs:(AA BB){code} yields 2 results > {code}=cs:(aa bb){code} yields 2 results > The first variant above should behave like the other 3 & obey query time > analysis -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
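The behavior Hoss describes, {!terms} splitting on commas and looking up the raw values with no query-time analysis, can be sketched like this (a hypothetical simulation, not Solr's TermsQParser implementation; lowercasing plays the role of the field's index-time analyzer):

```java
import java.util.Arrays;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

public class TermsParserSketch {
    // Index-time analysis: the field's analyzer lowercases values before indexing.
    static Set<String> index(String... values) {
        return Arrays.stream(values)
                     .map(v -> v.toLowerCase(Locale.ROOT))
                     .collect(Collectors.toSet());
    }

    // {!terms}-style lookup: split on commas and match the raw terms,
    // with no query-time analysis applied.
    static long termsMatch(Set<String> indexed, String input) {
        return Arrays.stream(input.split(",")).filter(indexed::contains).count();
    }

    public static void main(String[] args) {
        Set<String> indexed = index("AA", "BB");
        System.out.println(termsMatch(indexed, "AA,BB")); // 0: raw "AA" never hits indexed "aa"
        System.out.println(termsMatch(indexed, "aa,bb")); // 2: values must be pre-analyzed by the caller
    }
}
```

This matches the observations in the report: the parser is meant for already-indexed terms (e.g. values echoed back by faceting), so callers passing human-entered text must analyze it themselves.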
[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 118 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/118/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([ECB26D4BA945E92E:FFD15F24982A5088]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b83) - Build # 14505 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14505/
Java: 64bit/jdk1.9.0-ea-b83 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:CompileCommand=exclude,org.apache.directory.api.ldap.model.name.Dn::rdnOidToName

1 tests failed.
FAILED:  org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
        at __randomizedtesting.SeedInfo.seed([5BC9FB0402AADEC0:48AAC96B33C56766]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169)
        at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:519)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
        at
[jira] [Updated] (SOLR-8160) Terms query parser ignores query analysis
[ https://issues.apache.org/jira/browse/SOLR-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devansh Dhutia updated SOLR-8160:
---------------------------------
    Description:

Field setup as
{code}
multiValued="false" required="false" />
{code}
Values sent to the cs field for indexing include: AA, BB

The following is observed:
{code}q={!terms f=cs}AA,BB{code} yields 0 results
{code}q={!terms f=cs}aa,bb{code} yields 2 results
{code}q=cs:(AA BB){code} yields 2 results
{code}q=cs:(aa bb){code} yields 2 results

The first variant above should behave like the other 3 and obey query-time analysis.

> Terms query parser ignores query analysis
> ------------------------------------------
>
>                 Key: SOLR-8160
>                 URL: https://issues.apache.org/jira/browse/SOLR-8160
>             Project: Solr
>          Issue Type: Bug
>          Components: query parsers, search
>    Affects Versions: 5.3
>            Reporter: Devansh Dhutia
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
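The behavior being reported can be modeled outside Solr. In the sketch below (illustrative Python, not Solr code; `analyze` stands in for a lowercase filter in the field's analyzer), index-time analysis lowercases terms, so a terms parser that skips query-time analysis looks up raw uppercase strings that no longer exist in the index:

```python
# Toy model: index-time analysis lowercases terms; the query side may or may not.
def analyze(text):
    return text.lower()   # stands in for a LowerCaseFilter in the field's analyzer

index = {analyze(t) for t in ["AA", "BB"]}   # the index only contains "aa", "bb"

def terms_query(raw, analyzed=False):
    """Look up comma-separated terms, optionally running query-time analysis."""
    terms = raw.split(",")
    if analyzed:
        terms = [analyze(t) for t in terms]
    return [t for t in terms if t in index]

print(terms_query("AA,BB"))                  # [] -- raw terms miss the lowercased index
print(terms_query("AA,BB", analyzed=True))   # ['aa', 'bb']
```

This is why the lowercase input variants match while the uppercase terms-parser variant returns nothing.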
Re: Geospatial indexing of 2 lakh polygons in Lucene in 2 mins
If you just need the intersecting geohashes for a provided polygon, you can do it in a few lines without indexing the shape:

  // build the polygon from WKT
  JtsSpatialContext ctx = JtsSpatialContext.GEO;
  Shape shape = ctx.readShapeFromWkt(wkt);

  // build a geohash prefix tree and iterate the intersecting cells
  CellIterator iter = new GeohashPrefixTree(ctx, 6).getTreeCellIterator(shape, 6);
  while (iter.hasNext()) {
    Cell c = iter.next();
    System.out.println(c);
  }

On Mon, Oct 12, 2015 at 4:40 AM, Swarn Kumar wrote:
>
> Hi,
>
> I am trying to find the intersecting geohashes (up to precision length 6) of
> around 2 lakh (200,000) polygons. I have tried using PostGIS st_geohash and
> st_intersects, but it is very slow for my use case. I need to index 2 lakh
> polygons and find their intersecting geohashes in 2 minutes.
>
> I read that it's possible to do so using Lucene:
>
> http://opensourceconnections.com/blog/2014/04/11/indexing-polygons-in-lucene-with-accuracy/
>
> Kindly tell me how to do it or point me in the right direction.
>
> Regards,
> Swarn Avinash Kumar
[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953518#comment-14953518 ]

Paul Elschot commented on LUCENE-6276:
--------------------------------------

bq. it would make more sense to sum up totalTermFreq/docFreq for each term
I'll change that, and change the matchCost() method to return a float instead of a long.

bq. TermStatistics.totalTermFreq() may return -1
I'll add a check for that.

bq. what definition we should give to matchCost()
I'd like it to reflect the average cost of processing a single document, once the two-phase iterator is at the document. That would exclude the cost of next() and advance(), which would be better placed in the DISI.cost() method for now. How much of the cost of matches() should be in there I don't know; we'll see. NearSpans also does work after matches() returns true. And the likelihood of a match is the probability that matches() returns true...

> Add matchCost() api to TwoPhaseDocIdSetIterator
> -----------------------------------------------
>
>                 Key: LUCENE-6276
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6276
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Robert Muir
>         Attachments: LUCENE-6276-ExactPhraseOnly.patch
>
> We could add a method like TwoPhaseDISI.matchCost(), defined as something like
> an estimate of nanoseconds or similar.
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array
> so that cheaper ones are called first. Today it has no idea whether one scorer
> is a simple phrase scorer on a short field vs. another that might do some geo
> calculation or other more expensive work.
> PhraseScorers could implement this based on index statistics (e.g.
> totalTermFreq/maxDoc).
[jira] [Commented] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953612#comment-14953612 ]

ASF subversion and git services commented on LUCENE-6836:
---------------------------------------------------------

Commit 1708209 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1708209 ]

LUCENE-6836: Fix reader context management with block-join sorting.

> TestBlockJoinSorter test failure
>
>                 Key: LUCENE-6836
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6836
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/
> {noformat}
> java.lang.AssertionError: The top-reader used to create Weight
> (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293)))
> docBase=0 ord=0)) is not the same as the current reader's top-reader
> (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0)
> at __randomizedtesting.SeedInfo.seed([B655F224183AE465:3E01CDFEB6C6899D]:0)
> at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:100)
> at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:592)
> at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62)
> at org.apache.lucene.search.AssertingWeight.scorer(AssertingWeight.java:62)
> at org.apache.lucene.index.TestBlockJoinSorter.test(TestBlockJoinSorter.java:73)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:519)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at
[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order
[ https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953622#comment-14953622 ]

ASF subversion and git services commented on LUCENE-6305:
---------------------------------------------------------

Commit 1708211 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1708211 ]

LUCENE-6305: BooleanQuery.equals/hashcode ignore clause order.

> BooleanQuery.equals should ignore clause order
> ----------------------------------------------
>
>                 Key: LUCENE-6305
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6305
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
>         Attachments: LUCENE-6305.patch, LUCENE-6305.patch, LUCENE-6305.patch
>
> BooleanQuery.equals is sensitive to the order in which clauses have been
> added. So for instance "+A +B" would be considered different from "+B +A",
> although it generates the same matches and scores.
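The semantics of this fix can be modeled with a multiset: equality and hashing over clauses ignore insertion order but still count duplicates. A rough Python analogy (illustrative only, not Lucene's implementation):

```python
from collections import Counter

class BoolQuery:
    """Toy boolean query whose equality ignores the order clauses were added in."""
    def __init__(self, clauses):
        self.clauses = list(clauses)   # (occur, term) pairs; order kept for execution

    def __eq__(self, other):
        # Compare as multisets: duplicate clauses matter, insertion order does not.
        return Counter(self.clauses) == Counter(other.clauses)

    def __hash__(self):
        # Hash must agree with the order-insensitive equality.
        return hash(frozenset(Counter(self.clauses).items()))

a = BoolQuery([("MUST", "A"), ("MUST", "B")])
b = BoolQuery([("MUST", "B"), ("MUST", "A")])
print(a == b, hash(a) == hash(b))   # True True
```

Using a counted multiset rather than a plain set keeps "+A +A" distinct from "+A", which matters for scoring.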
[jira] [Updated] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows
[ https://issues.apache.org/jira/browse/LUCENE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-6835:
---------------------------------------
    Attachment: LUCENE-6835.patch

Tentative initial patch, but there are problems.

I removed IFD's hairy code about retrying deletes (yay!) and moved it down into FSDirectory, so it now becomes a Directory impl's job to "deal with" finicky filesystems that prevent deletion of open files.

I also changed Directory.deleteFile to Directory.deleteFiles, so "revisit pending deletions" is less O(N^2).

The big problem with the patch now is that I completely disabled MDW's virus/open-file checker, because we are no longer allowed to "fake" a virus checker in a Directory wrapper (since it's now Directory's job to retry deletes) ... I think to move forward with this approach we must also re-implement the virus checker "down low", inside the mock filesystem.

Some tests are still angry because they rely on MDW not deleting still-open files ...

> Directory.deleteFile should "own" retrying deletions on Windows
> ---------------------------------------------------------------
>
>                 Key: LUCENE-6835
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6835
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>             Fix For: Trunk, 5.4
>
>         Attachments: LUCENE-6835.patch
>
> Rob's idea:
> Today, we have hairy logic in IndexFileDeleter to deal with Windows file
> systems that cannot delete still-open files.
> And with LUCENE-6829, where OfflineSorter now must deal with the situation
> too ... I worked around it by fixing all tests to disable the virus checker.
> I think it makes more sense to push this "platform specific problem" lower in
> the stack, into Directory? I.e., its deleteFile method would catch the
> access denied, and then retry the deletion later. Then we could re-enable the
> virus checker on all these tests, simplify IndexFileDeleter, etc.
> Maybe in the future we could push this further down, into WindowsDirectory,
> and fix FSDirectory.open to return WindowsDirectory on Windows ...
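The retry idea described above — catch the failed delete, queue the file, and revisit the pending set later — can be sketched generically. This is hypothetical illustration code, not the Lucene patch:

```python
import os

class RetryingDeleter:
    """Queue files whose deletion fails (e.g. still open on Windows) and retry later."""
    def __init__(self):
        self.pending = set()

    def delete(self, path):
        try:
            os.remove(path)
        except OSError:          # access denied / still open: remember it for later
            self.pending.add(path)

    def retry_pending(self):
        """Revisit queued deletions; files still held open stay queued."""
        for path in list(self.pending):
            try:
                os.remove(path)
                self.pending.discard(path)
            except OSError:
                pass             # still can't delete; try again on a later call
```

Batching via a `deleteFiles`-style API would let `retry_pending` run once per batch instead of once per file, which is the O(N^2) concern mentioned in the comment.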
[jira] [Resolved] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand resolved LUCENE-6836.
----------------------------------
    Resolution: Fixed

> TestBlockJoinSorter test failure
>
>                 Key: LUCENE-6836
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6836
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
>
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14504/
> {noformat}
> java.lang.AssertionError: The top-reader used to create Weight
> (LeafReaderContext(SlowCompositeReaderWrapper(FCInvisibleMultiReader(FCInvisibleMultiReader(_c(6.0.0):C4293)))
> docBase=0 ord=0)) is not the same as the current reader's top-reader
> (LeafReaderContext(_c(6.0.0):C4293 docBase=0 ord=0)
[jira] [Commented] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows
[ https://issues.apache.org/jira/browse/LUCENE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953676#comment-14953676 ]

ASF subversion and git services commented on LUCENE-6835:
---------------------------------------------------------

Commit 1708229 from [~mikemccand] in branch 'dev/branches/lucene6835'
[ https://svn.apache.org/r1708229 ]

LUCENE-6835: make branch

> Directory.deleteFile should "own" retrying deletions on Windows
[jira] [Commented] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows
[ https://issues.apache.org/jira/browse/LUCENE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953677#comment-14953677 ]

ASF subversion and git services commented on LUCENE-6835:
---------------------------------------------------------

Commit 1708230 from [~mikemccand] in branch 'dev/branches/lucene6835'
[ https://svn.apache.org/r1708230 ]

LUCENE-6835: starting patch

> Directory.deleteFile should "own" retrying deletions on Windows
[jira] [Commented] (LUCENE-6835) Directory.deleteFile should "own" retrying deletions on Windows
[ https://issues.apache.org/jira/browse/LUCENE-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953679#comment-14953679 ]

Michael McCandless commented on LUCENE-6835:
--------------------------------------------

I committed the starting patch here: https://svn.apache.org/repos/asf/lucene/dev/branches/lucene6835

I'm not sure how to do the virus checking in mock FS ...

> Directory.deleteFile should "own" retrying deletions on Windows
[jira] [Commented] (LUCENE-6836) TestBlockJoinSorter test failure
[ https://issues.apache.org/jira/browse/LUCENE-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953613#comment-14953613 ]

ASF subversion and git services commented on LUCENE-6836:
---------------------------------------------------------

Commit 1708210 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1708210 ]

LUCENE-6836: Fix reader context management with block-join sorting.

> TestBlockJoinSorter test failure
>
>                 Key: LUCENE-6836
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6836
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953571#comment-14953571 ]

Adrien Grand commented on LUCENE-6276:
--------------------------------------

bq. change the matchCost() method to return a float instead of a long
I liked having it as a long, like DISI.cost(). Maybe we could just round?

bq. I'd like to have it reflect an average cost to process a single document, once the two-phase iterator is at the document. That would exclude the cost for next() and advance(), which would be better in the DISI.cost() method for now.
Indeed, this is what it should do! Sorry I introduced some confusion. The reason I brought up these methods is ReqExclScorer, whose TwoPhaseIterator calls DocIdSetIterator.advance() on the excluded iterator in order to validate a match. So we need to decide how costly calling advance() is.

> Add matchCost() api to TwoPhaseDocIdSetIterator
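The motivation from the issue — ConjunctionScorer sorting its two-phase iterators so cheaper verifications run first — can be sketched as follows. This is illustrative Python, not Lucene's ConjunctionScorer; `match_cost` is assumed to be a per-document verification cost estimate:

```python
class TwoPhase:
    """A two-phase check: the approximation has matched; matches() must verify."""
    def __init__(self, name, match_cost, matches):
        self.name = name
        self.match_cost = match_cost   # estimated cost of one matches() call
        self.matches = matches         # the (possibly expensive) verification

def verify_all(doc, two_phases):
    """Confirm a candidate doc, running the cheapest verifications first."""
    for tp in sorted(two_phases, key=lambda t: t.match_cost):
        if not tp.matches(doc):
            return False               # an early, cheap rejection skips costly checks
    return True

cheap = TwoPhase("phrase", match_cost=2.0, matches=lambda doc: doc % 2 == 0)
pricey = TwoPhase("geo", match_cost=50.0, matches=lambda doc: doc > 100)
print(verify_all(3, [pricey, cheap]))    # False: the cheap check rejects; geo never runs
print(verify_all(102, [pricey, cheap]))  # True: both verifications pass
```

The payoff is entirely in the ordering: when the cheap check rejects, the expensive one is never called at all.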
[jira] [Commented] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr
[ https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953694#comment-14953694 ]

Jan Høydahl commented on SOLR-7888:
-----------------------------------

[~arcadius] and [~ctargett], I'd appreciate a review of my changes to the refguide page above.

> Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a
> BooleanQuery filter parameter available in Solr
> ------------------------------------------------------------------
>
>                 Key: SOLR-7888
>                 URL: https://issues.apache.org/jira/browse/SOLR-7888
>             Project: Solr
>          Issue Type: New Feature
>          Components: Suggester
>    Affects Versions: 5.2.1
>            Reporter: Arcadius Ahouansou
>            Assignee: Jan Høydahl
>             Fix For: 5.4, Trunk
>
>         Attachments: SOLR-7888-7963.patch, SOLR-7888.patch, SOLR-7888.patch
>
> LUCENE-6464 has introduced a very flexible lookup method that takes as
> parameter a BooleanQuery that is used for filtering results.
> This ticket is to expose that method to Solr.
> This would allow the user to do:
> {code}
> /suggest?suggest=true=true=term=contexts:tennis
> /suggest?suggest=true=true=term=contexts:golf AND contexts:football
> {code}
> etc.
> Given that context filtering is currently only implemented by the
> {code}AnalyzingInfixSuggester{code} and the {code}BlendedInfixSuggester{code},
> this initial implementation will support only these 2 lookup implementations.
[jira] [Commented] (LUCENE-6305) BooleanQuery.equals should ignore clause order
[ https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953732#comment-14953732 ] ASF subversion and git services commented on LUCENE-6305: - Commit 1708244 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1708244 ] LUCENE-6305: BooleanQuery.equals/hashcode ignore clause order. > BooleanQuery.equals should ignore clause order > -- > > Key: LUCENE-6305 > URL: https://issues.apache.org/jira/browse/LUCENE-6305 > Project: Lucene - Core > Issue Type: Bug >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > Attachments: LUCENE-6305.patch, LUCENE-6305.patch, LUCENE-6305.patch > > > BooleanQuery.equals is sensitive to the order in which clauses have been > added. So for instance "+A +B" would be considered different from "+B +A" > although it generates the same matches and scores. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
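The fix described above amounts to comparing clause lists as multisets rather than as ordered lists, so that "+A +B" and "+B +A" compare equal while duplicate clauses still count. The stand-alone sketch below models clauses as plain strings; it illustrates the semantics and is not Lucene's actual BooleanQuery code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: order-insensitive equality for boolean clauses,
// so {+A, +B} equals {+B, +A} but {+A, +A} does not equal {+A}.
public class ClauseEquality {
    // Build a multiset (clause -> occurrence count); duplicates matter, order does not.
    static Map<String, Integer> multiset(List<String> clauses) {
        Map<String, Integer> counts = new HashMap<>();
        for (String c : clauses) {
            counts.merge(c, 1, Integer::sum);
        }
        return counts;
    }

    static boolean clausesEqual(List<String> a, List<String> b) {
        return multiset(a).equals(multiset(b));
    }

    public static void main(String[] args) {
        System.out.println(clausesEqual(Arrays.asList("+A", "+B"), Arrays.asList("+B", "+A"))); // equal
        System.out.println(clausesEqual(Arrays.asList("+A", "+A"), Arrays.asList("+A")));       // not equal
    }
}
```

A matching hashCode can sum per-clause hashes, which is order-insensitive by construction and keeps the equals/hashCode contract intact.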
[jira] [Updated] (SOLR-8153) Support upper case and mixed case column identifiers in the SQL interface
[ https://issues.apache.org/jira/browse/SOLR-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-8153: - Fix Version/s: Trunk > Support upper case and mixed case column identifiers in the SQL interface > - > > Key: SOLR-8153 > URL: https://issues.apache.org/jira/browse/SOLR-8153 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Affects Versions: Trunk >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Minor > Fix For: Trunk > > Attachments: SOLR-8153.patch > > > The version of the Presto parser currently in Solr is lower casing all SQL > identifiers unless they are string literals (single quotes). This appears to > be happening in the QualifiedName class. > The latest version of the Presto parser has changed the QualifiedName class > to maintain the original casing. This will allow Solr to maintain the > original case of SQL identifiers without requiring quoted identifiers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8153) Support upper case and mixed case column identifiers in the SQL interface
[ https://issues.apache.org/jira/browse/SOLR-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein resolved SOLR-8153. -- Resolution: Fixed > Support upper case and mixed case column identifiers in the SQL interface > - > > Key: SOLR-8153 > URL: https://issues.apache.org/jira/browse/SOLR-8153 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Affects Versions: Trunk >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Minor > Attachments: SOLR-8153.patch > > > The version of the Presto parser currently in Solr is lower casing all SQL > identifiers unless they are string literals (single quotes). This appears to > be happening in the QualifiedName class. > The latest version of the Presto parser has changed the QualifiedName class > to maintain the original casing. This will allow Solr to maintain the > original case of SQL identifiers without requiring quoted identifiers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
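The behavior change described in this issue can be illustrated with a toy identifier normalizer: the older parser folded unquoted identifiers to lower case, while the case-preserving parser keeps the original spelling with or without quotes. This is a sketch of the two policies, not Presto's QualifiedName implementation.

```java
import java.util.Locale;

// Illustrative sketch of the two identifier-casing policies discussed above.
public class IdentifierCasing {
    // Older behavior: unquoted identifiers are folded to lower case.
    static String foldingParser(String ident) {
        if (ident.startsWith("\"") && ident.endsWith("\"")) {
            return ident.substring(1, ident.length() - 1); // quoted: keep as-is
        }
        return ident.toLowerCase(Locale.ROOT);
    }

    // Newer behavior: original casing is preserved even without quotes.
    static String preservingParser(String ident) {
        if (ident.startsWith("\"") && ident.endsWith("\"")) {
            return ident.substring(1, ident.length() - 1);
        }
        return ident;
    }

    public static void main(String[] args) {
        System.out.println(foldingParser("Field_A"));    // lower-cased
        System.out.println(preservingParser("Field_A")); // unchanged
    }
}
```

With the preserving policy, a mixed-case Solr field name matches its SQL column identifier without the user having to quote it.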
[jira] [Updated] (LUCENE-6829) OfflineSorter should use Directory API
[ https://issues.apache.org/jira/browse/LUCENE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6829: --- Attachment: LUCENE-6829.patch New patch, I think it's close. I made a few simplifications, removed the StandardOpenOption.TRUNCATE_EXISTING (that seems silly to use also with CREATE_NEW), and got precommit passing. Tests seem to pass (at least once). I'm hoping to commit this soon: LUCENE-6825 is blocked on it ... > OfflineSorter should use Directory API > -- > > Key: LUCENE-6829 > URL: https://issues.apache.org/jira/browse/LUCENE-6829 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk, 5.4 > > Attachments: LUCENE-6829.patch, LUCENE-6829.patch, LUCENE-6829.patch, > LUCENE-6829.patch > > > I think this is a blocker for LUCENE-6825, because the block KD-tree makes > heavy use of OfflineSorter and we don't want to fill up tmp space ... > This should be a straightforward cutover, but there are some challenges, e.g. > the test was failing because virus checker blocked deleting of files. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953889#comment-14953889 ] Adrien Grand commented on LUCENE-6276: -- The change in ConjunctionDISI does not look right to me: we should keep sorting the iterators based on DISI.cost, and only use {{TwoPhaseIterator.matchCost}} to sort {{TwoPhaseConjunctionDISI.twoPhaseIterators}}. I'm also unhappy about adding a method to TermStatistics, this class should remain as simple as possible. Can we make it private to PhraseWeight? > Add matchCost() api to TwoPhaseDocIdSetIterator > --- > > Key: LUCENE-6276 > URL: https://issues.apache.org/jira/browse/LUCENE-6276 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Robert Muir > Attachments: LUCENE-6276-ExactPhraseOnly.patch, > LUCENE-6276-NoSpans.patch > > > We could add a method like TwoPhaseDISI.matchCost() defined as something like > estimate of nanoseconds or similar. > ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array > so that cheaper ones are called first. Today it has no idea if one scorer is > a simple phrase scorer on a short field vs another that might do some geo > calculation or more expensive stuff. > PhraseScorers could implement this based on index statistics (e.g. > totalTermFreq/maxDoc) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6305) BooleanQuery.equals should ignore clause order
[ https://issues.apache.org/jira/browse/LUCENE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-6305. -- Resolution: Fixed Fix Version/s: 5.4 6.0 > BooleanQuery.equals should ignore clause order > -- > > Key: LUCENE-6305 > URL: https://issues.apache.org/jira/browse/LUCENE-6305 > Project: Lucene - Core > Issue Type: Bug >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > Fix For: 6.0, 5.4 > > Attachments: LUCENE-6305.patch, LUCENE-6305.patch, LUCENE-6305.patch > > > BooleanQuery.equals is sensitive to the order in which clauses have been > added. So for instance "+A +B" would be considered different from "+B +A" > although it generates the same matches and scores. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 819 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/819/ 6 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([23065C47B410ACAE:30656E28857F1508]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) 
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6825) Add multidimensional byte[] indexing support to Lucene
[ https://issues.apache.org/jira/browse/LUCENE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953813#comment-14953813 ] Michael McCandless commented on LUCENE-6825: bq. have you considered a new module for this Well, I think a codec format is a natural way to expose this service, since (like postings, doc values, etc.), it's a low-level utility that can be used for diverse use cases (2D and 3D spatial, numeric range filtering, binary range filtering so we can support IPv6, BigInteger, BigDecimal, etc.). For it to be exposed as a part of the codec means it needs to be in core... > Add multidimensional byte[] indexing support to Lucene > -- > > Key: LUCENE-6825 > URL: https://issues.apache.org/jira/browse/LUCENE-6825 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: Trunk > > Attachments: LUCENE-6825.patch > > > I think we should graduate the low-level block KD-tree data structure > from sandbox into Lucene's core? > This can be used for very fast 1D range filtering for numerics, > removing the 8 byte (long/double) limit we have today, so e.g. we > could efficiently support BigInteger, BigDecimal, IPv6 addresses, etc. > It can also be used for > 1D use cases, like 2D (lat/lon) and 3D > (x/y/z with geo3d) geo shape intersection searches. > The idea here is to add a new part of the Codec API (DimensionalFormat > maybe?) that can do low-level N-dim point indexing and at runtime > exposes only an "intersect" method. > It should give sizable performance gains (smaller index, faster > searching) over what we have today, and even over what auto-prefix > with efficient numeric terms would do. > There are many steps here ... and I think adding this is analogous to > how we added FSTs, where we first added low level data structure > support and then gradually cutover the places that benefit from an > FST. 
> So for the first step, I'd like to just add the low-level block > KD-tree impl into oal.util.bkd, but make a couple improvements over > what we have now in sandbox: > * Use byte[] as the value not int (@rjernst's good idea!) > * Generalize it to arbitrary dimensions vs. specialized/forked 1D, > 2D, 3D cases we have now > This is already hard enough :) After that we can build the > DimensionalFormat on top, then cutover existing specialized block > KD-trees. We also need to fix OfflineSorter to use Directory API so > we don't fill up /tmp when building a block KD-tree. > A block KD-tree is at heart an inverted data structure, like postings, > but is also similar to auto-prefix in that it "picks" proper > N-dimensional "terms" (leaf blocks) to index based on how the specific > data being indexed is distributed. I think this is a big part of why > it's so fast, i.e. in contrast to today where we statically slice up > the space into the same terms regardless of the data (trie shifting, > morton codes, geohash, hilbert curves, etc.) > I'm marking this as trunk only for now... as we iterate we can see if > it could maybe go back to 5.x... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 14217 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14217/ Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([13307A588070D1FF:534837B11F6859]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator
[ https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Elschot updated LUCENE-6276: - Attachment: LUCENE-6276-NoSpans.patch Patch of 13 Oct 2015. No spans yet. Left matchDoc() returning float because in many cases the average number of positions in a matching document will be close to 1. Quite a few nocommits at matchDoc implementations throwing an Error("not yet implemented"). This includes a first attempt at sorting the DISI's in ConjunctionDISI. To my surprise, quite a few tests pass; I have not yet tried all of them. > Add matchCost() api to TwoPhaseDocIdSetIterator > --- > > Key: LUCENE-6276 > URL: https://issues.apache.org/jira/browse/LUCENE-6276 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Robert Muir > Attachments: LUCENE-6276-ExactPhraseOnly.patch, > LUCENE-6276-NoSpans.patch > > > We could add a method like TwoPhaseDISI.matchCost() defined as something like > estimate of nanoseconds or similar. > ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array > so that cheaper ones are called first. Today it has no idea if one scorer is > a simple phrase scorer on a short field vs another that might do some geo > calculation or more expensive stuff. > PhraseScorers could implement this based on index statistics (e.g. > totalTermFreq/maxDoc) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14511 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14511/ Java: 32bit/jdk1.8.0_60 -client -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([88CDC219B278F33D:9BAEF07683174A9B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (LUCENE-6837) Add N-best output capability to JapaneseTokenizer
[ https://issues.apache.org/jira/browse/LUCENE-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] KONNO, Hiroharu updated LUCENE-6837: Attachment: LUCENE-6837.patch LUCENE-6837.patch > Add N-best output capability to JapaneseTokenizer > - > > Key: LUCENE-6837 > URL: https://issues.apache.org/jira/browse/LUCENE-6837 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 5.3 >Reporter: KONNO, Hiroharu >Priority: Minor > Attachments: LUCENE-6837.patch > > > Japanese morphological analyzers often generate mis-segmented tokens. N-best > output reduces the impact of mis-segmentation on search result. N-best output > is more meaningful than character N-gram, and it increases hit count too. > If you use N-best output, you can get decompounded tokens (ex: > "シニアソフトウェアエンジニア" => {"シニア", "シニアソフトウェアエンジニア", "ソフトウェア", "エンジニア"}) and > overwrapped tokens (ex: "数学部長谷川" => {"数学", "部", "部長", "長谷川", "谷川"}), > depending on the dictionary and N-best parameter settings. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8161) allowCompression parameter not been used
[ https://issues.apache.org/jira/browse/SOLR-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] HanDongdong updated SOLR-8161: -- Description: shardHandlerFactory config in solr.xml: {noformat} ${socketTimeout:60} ${connTimeout:6} true false {noformat} actually *useRetries* can be set in HttpClient properly, but the *allowCompression* parameter is not used. Does this mean Solr doesn't support response compression when making HTTP requests? here is the source code that parses the parameters: {noformat} ModifiableSolrParams clientParams = new ModifiableSolrParams(); clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, maxConnectionsPerHost); clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS, maxConnections); clientParams.set(HttpClientUtil.PROP_SO_TIMEOUT, soTimeout); clientParams.set(HttpClientUtil.PROP_CONNECTION_TIMEOUT, connectionTimeout); if (!useRetries) { clientParams.set(HttpClientUtil.PROP_USE_RETRY, false); } this.defaultClient = HttpClientUtil.createClient(clientParams); // must come after createClient if (useRetries) { // our default retry handler will never retry on IOException if the request has been sent already, // but for these read only requests we can use the standard DefaultHttpRequestRetryHandler rules ((DefaultHttpClient) this.defaultClient).setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler()); } {noformat} can anyone please explain this to me? we are facing a "2048KB upload size exceeds limit" issue, and we don't want to increase the limit for now {noformat} {noformat} was: shardHandlerFactory config in solr.xml: {noformat} ${socketTimeout:60} ${connTimeout:6} true false {noformat} actually *useRetries* can be set in HttpClient properly, but the *allowCompression* parameter is not used. Does this mean Solr doesn't support response compression when making HTTP requests? 
here is the source code that parses the parameters: {noformat} ModifiableSolrParams clientParams = new ModifiableSolrParams(); clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, maxConnectionsPerHost); clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS, maxConnections); clientParams.set(HttpClientUtil.PROP_SO_TIMEOUT, soTimeout); clientParams.set(HttpClientUtil.PROP_CONNECTION_TIMEOUT, connectionTimeout); if (!useRetries) { clientParams.set(HttpClientUtil.PROP_USE_RETRY, false); } this.defaultClient = HttpClientUtil.createClient(clientParams); // must come after createClient if (useRetries) { // our default retry handler will never retry on IOException if the request has been sent already, // but for these read only requests we can use the standard DefaultHttpRequestRetryHandler rules ((DefaultHttpClient) this.defaultClient).setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler()); } {noformat} can anyone please explain this to me? we are facing a "2048KB upload size exceeds limit" issue, and we don't want to increase the limit for now {noformat} {noformat} > allowCompression parameter not been used > > > Key: SOLR-8161 > URL: https://issues.apache.org/jira/browse/SOLR-8161 > Project: Solr > Issue Type: Bug > Components: clients - java >Affects Versions: 5.0 > Environment: CentOS 7 >Reporter: HanDongdong > Labels: compression, exceeds > Fix For: 5.0 > > > shardHandlerFactory config in solr.xml: > {noformat} > class="HttpShardHandlerFactory"> > ${socketTimeout:60} > ${connTimeout:6} > true > false > > {noformat} > actually *useRetries* can be set in HttpClient properly, but the > *allowCompression* parameter is not used > does this mean Solr doesn't support response compression when making HTTP requests? 
> here is the source code to parse parameters : > {noformat} > ModifiableSolrParams clientParams = new ModifiableSolrParams(); > clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, > maxConnectionsPerHost); > clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS, maxConnections); > clientParams.set(HttpClientUtil.PROP_SO_TIMEOUT, soTimeout); > clientParams.set(HttpClientUtil.PROP_CONNECTION_TIMEOUT, > connectionTimeout); > if (!useRetries) { > clientParams.set(HttpClientUtil.PROP_USE_RETRY, false); > } > this.defaultClient = HttpClientUtil.createClient(clientParams); > > // must come after createClient > if (useRetries) { > // our default retry handler will never retry on IOException if the > request has been sent already, > // but for these read only requests we can use the standard > DefaultHttpRequestRetryHandler rules > ((DefaultHttpClient) this.defaultClient).setHttpRequestRetryHandler(new >
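The gap the reporter points at is that the quoted builder copies socketTimeout, connTimeout, and useRetries into the client params but never the configured allowCompression flag. The sketch below models the params as a plain map to show the shape of the missing step; the property names mirror the config keys in the report, but the map-based builder is a simplification and not Solr's HttpShardHandlerFactory code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the step the reporter describes as missing: every configured flag,
// including allowCompression, must be copied into the client params before the
// HTTP client is built, or the setting silently has no effect.
public class ShardClientParamsSketch {
    static Map<String, Object> buildClientParams(int soTimeout, int connTimeout,
                                                 boolean useRetries, boolean allowCompression) {
        Map<String, Object> clientParams = new HashMap<>();
        clientParams.put("socketTimeout", soTimeout);
        clientParams.put("connTimeout", connTimeout);
        clientParams.put("useRetry", useRetries);
        // The line missing from the quoted code: without it the configured
        // allowCompression value never reaches the HttpClient.
        clientParams.put("allowCompression", allowCompression);
        return clientParams;
    }

    public static void main(String[] args) {
        Map<String, Object> params = buildClientParams(600000, 60000, false, true);
        System.out.println(params.get("allowCompression"));
    }
}
```

In the real factory the equivalent fix would set the compression property on the same ModifiableSolrParams instance that is passed to HttpClientUtil.createClient.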
[jira] [Updated] (SOLR-7997) Add new Solr book 'Scaling Big Data with Hadoop and Solr' to resources
[ https://issues.apache.org/jira/browse/SOLR-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Karambelkar updated SOLR-7997:
-
Attachment: solr-7997.patch

Patch for Solr-7997

> Add new Solr book 'Scaling Big Data with Hadoop and Solr' to resources
> --
>
> Key: SOLR-7997
> URL: https://issues.apache.org/jira/browse/SOLR-7997
> Project: Solr
> Issue Type: Task
> Reporter: Zico Fernandes
> Attachments: 3396OS_Scaling Big Data with Hadoop and Solr - Second Edition.jpg, solr-7997.patch
>
> Hrishikesh Vijay Karambelkar is proud to finally announce the book Scaling Big Data with Hadoop and Solr - Second Edition by Packt Publishing. This book will help readers understand, design, build, and optimize their big data search engine with Hadoop and Apache Solr.
> Scaling Big Data with Hadoop and Solr - Second Edition is aimed at developers, designers, and architects who would like to build big data enterprise search solutions for their customers or organizations. It explores the different approaches to making Solr work on big data ecosystems apart from Apache Hadoop.
> A practical guide that covers interesting, real-life cases for big data search along with sample code, it guides readers in improving search performance while working with big data.
> Click to read more about Scaling Big Data with Hadoop and Solr - Second Edition:
> https://www.packtpub.com/big-data-and-business-intelligence/scaling-big-data-hadoop-and-solr-second-edition

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7997) Add new Solr book 'Scaling Big Data with Hadoop and Solr' to resources
[ https://issues.apache.org/jira/browse/SOLR-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954387#comment-14954387 ]

Hrishikesh Karambelkar commented on SOLR-7997:
--

I've attached the files to change the site. Could someone please review and apply the patch?

> Add new Solr book 'Scaling Big Data with Hadoop and Solr' to resources
> --
>
> Key: SOLR-7997
> URL: https://issues.apache.org/jira/browse/SOLR-7997
> Project: Solr
> Issue Type: Task
> Reporter: Zico Fernandes
> Attachments: 3396OS_Scaling Big Data with Hadoop and Solr - Second Edition.jpg, solr-7997.patch
>
> Hrishikesh Vijay Karambelkar is proud to finally announce the book Scaling Big Data with Hadoop and Solr - Second Edition by Packt Publishing. This book will help readers understand, design, build, and optimize their big data search engine with Hadoop and Apache Solr.
> Scaling Big Data with Hadoop and Solr - Second Edition is aimed at developers, designers, and architects who would like to build big data enterprise search solutions for their customers or organizations. It explores the different approaches to making Solr work on big data ecosystems apart from Apache Hadoop.
> A practical guide that covers interesting, real-life cases for big data search along with sample code, it guides readers in improving search performance while working with big data.
> Click to read more about Scaling Big Data with Hadoop and Solr - Second Edition:
> https://www.packtpub.com/big-data-and-business-intelligence/scaling-big-data-hadoop-and-solr-second-edition

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8153) Support upper case and mixed case column identifiers in the SQL interface
[ https://issues.apache.org/jira/browse/SOLR-8153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953808#comment-14953808 ] ASF subversion and git services commented on SOLR-8153: --- Commit 1708259 from [~joel.bernstein] in branch 'dev/trunk' [ https://svn.apache.org/r1708259 ] SOLR-8153: Support upper case and mixed case column identifiers in the SQL interface > Support upper case and mixed case column identifiers in the SQL interface > - > > Key: SOLR-8153 > URL: https://issues.apache.org/jira/browse/SOLR-8153 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Affects Versions: Trunk >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Minor > Attachments: SOLR-8153.patch > > > The version of the Presto parser currently in Solr is lower casing all SQL > identifiers unless they are string literals (single quotes). This appears to > be happening in the QualifiedName class. > The latest version of the Presto parser has changed the QualifiedName class > to maintain the original casing. This will allow Solr to maintain the > original case of SQL identifiers without requiring quoted identifiers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
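To illustrate the behavior described above (the field and collection names here are made up, not taken from the patch): before this change, an unquoted identifier was folded to lower case by the parser, so only a single-quoted string literal preserved case; after it, unquoted identifiers keep their original casing:
{noformat}
-- before the fix: the identifier is parsed as "myfield",
-- which fails against a camel-case Solr field
SELECT myField FROM collection1

-- before the fix: a single-quoted string literal was the workaround
SELECT 'myField' FROM collection1

-- after the fix: myField retains its original casing without quoting
SELECT myField FROM collection1
{noformat}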
[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 479 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/479/ 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([1347F60C1BB39EB2:24C4632ADC2714]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Created] (LUCENE-6837) Add N-best output capability to JapaneseTokenizer
KONNO, Hiroharu created LUCENE-6837: --- Summary: Add N-best output capability to JapaneseTokenizer Key: LUCENE-6837 URL: https://issues.apache.org/jira/browse/LUCENE-6837 Project: Lucene - Core Issue Type: Improvement Components: modules/analysis Affects Versions: 5.3 Reporter: KONNO, Hiroharu Priority: Minor Japanese morphological analyzers often generate mis-segmented tokens. N-best output reduces the impact of mis-segmentation on search results. N-best output is more meaningful than character N-grams, and it increases hit counts too. With N-best output, you can get decompounded tokens (ex: "シニアソフトウェアエンジニア" => {"シニア", "シニアソフトウェアエンジニア", "ソフトウェア", "エンジニア"}) and overlapping tokens (ex: "数学部長谷川" => {"数学", "部", "部長", "長谷川", "谷川"}), depending on the dictionary and N-best parameter settings. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
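As a rough sketch of how such an option might be consumed — the setNBestCost method and its semantics are assumptions here, since no patch is attached yet:
{noformat}
// hypothetical sketch of enabling N-best output on the tokenizer
JapaneseTokenizer tokenizer =
    new JapaneseTokenizer(null, false, JapaneseTokenizer.Mode.NORMAL);
// admit segmentation paths whose Viterbi cost is within 2000 of the best
// path, emitting their tokens at overlapping positions in addition to
// the single best segmentation
tokenizer.setNBestCost(2000);
{noformat}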
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 984 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/984/ 1 tests failed. FAILED: org.apache.lucene.search.join.TestBlockJoin.testMultiChildQueriesOfDiffParentLevels Error Message: this writer hit an unrecoverable error; cannot commit Stack Trace: java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot commit at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2775) at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2961) at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1080) at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1123) at org.apache.lucene.util.IOUtils.close(IOUtils.java:97) at org.apache.lucene.util.IOUtils.close(IOUtils.java:84) at org.apache.lucene.index.RandomIndexWriter.close(RandomIndexWriter.java:396) at org.apache.lucene.search.join.TestBlockJoin.testMultiChildQueriesOfDiffParentLevels(TestBlockJoin.java:1672) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError: Java heap space at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:80) at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:53) at
[jira] [Created] (SOLR-8161) allowCompression parameter not been used
HanDongdong created SOLR-8161:
-
Summary: allowCompression parameter not been used
Key: SOLR-8161
URL: https://issues.apache.org/jira/browse/SOLR-8161
Project: Solr
Issue Type: Bug
Components: clients - java
Affects Versions: 5.0
Environment: CentOS 7
Reporter: HanDongdong
Fix For: 5.0

shardHandlerFactory config in solr.xml:
{noformat}
${socketTimeout:60}
${connTimeout:6}
true
false
{noformat}
actually *useRetries* can be set in HttpClient properly, but the *allowCompression* parameter is not used.
does it mean Solr doesn't support response compression when doing HTTP requests?
here is the source code to parse parameters:
{noformat}
ModifiableSolrParams clientParams = new ModifiableSolrParams();
clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, maxConnectionsPerHost);
clientParams.set(HttpClientUtil.PROP_MAX_CONNECTIONS, maxConnections);
clientParams.set(HttpClientUtil.PROP_SO_TIMEOUT, soTimeout);
clientParams.set(HttpClientUtil.PROP_CONNECTION_TIMEOUT, connectionTimeout);
if (!useRetries) {
  clientParams.set(HttpClientUtil.PROP_USE_RETRY, false);
}
this.defaultClient = HttpClientUtil.createClient(clientParams);

// must come after createClient
if (useRetries) {
  // our default retry handler will never retry on IOException if the request has been sent already,
  // but for these read only requests we can use the standard DefaultHttpRequestRetryHandler rules
  ((DefaultHttpClient) this.defaultClient).setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler());
}
{noformat}
can anyone please explain this to me? We are facing a "2048KB upload size exceeds limit" issue, and we don't want to increase the limit for now.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_60) - Build # 5329 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5329/ Java: 32bit/jdk1.8.0_60 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection Error Message: Delete action failed! Stack Trace: java.lang.AssertionError: Delete action failed! at __randomizedtesting.SeedInfo.seed([9C01B46561571992:8F62860A5038A034]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:169) at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at