[jira] [Commented] (LUCENE-7664) Deprecate GeoPointField & queries

2017-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843885#comment-15843885
 ] 

David Smiley commented on LUCENE-7664:
--

+1. I believe GeoPointField also never quite got distance sorting added.  I 
could sense development halted once LatLonPoint started showing it was going to 
win out.

Hey [~nknize], the GeoPointField was really excellent work -- the ideal geo 
field based on an inverted index.  Maybe someone out there who either has an 
old Lucene index or who wishes to port to something else like LevelDB would 
find it useful.

> Deprecate GeoPointField & queries
> -
>
> Key: LUCENE-7664
> URL: https://issues.apache.org/jira/browse/LUCENE-7664
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5.0
>
>
> The new dimensional points implementations for geo distance, polygon, and shape 
> filtering are substantially faster and create a smaller index than the 
> postings-based {{GeoPointField}}.  They also offer nearest neighbor search, 
> which {{GeoPointField}} doesn't.
> I think we should deprecate {{GeoPointField}} and focus on the points 
> implementations.
> We still have other legacy postings-based geo implementations, but I think we 
> should keep them for now since they have functionality that points doesn't 
> yet have: the ability to index a shape and search for shapes overlapping the 
> indexed shapes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10049) Collection deletion leaves behind the snapshot metadata

2017-01-27 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-10049:
---

 Summary: Collection deletion leaves behind the snapshot metadata
 Key: SOLR-10049
 URL: https://issues.apache.org/jira/browse/SOLR-10049
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 6.3
Reporter: Hrishikesh Gadre


During collection deletion, the snapshot metadata cleanup is not happening 
(even though the actual index files associated with the snapshot are being 
deleted). We should ensure that the snapshot metadata is cleaned up properly 
during collection deletion.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1103 - Still Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1103/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([CFD6B2E0BAB67609:A76987CA6A2C64E5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:304)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+153) - Build # 18860 - Still Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18860/
Java: 32bit/jdk-9-ea+153 -client -XX:+UseG1GC

1 test failed.
FAILED:  
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest

Error Message:
expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([60B714B270B49EEB:1447F5F73490D964]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest(TestCloudRecovery.java:130)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 10797 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-01-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843734#comment-15843734
 ] 

Joel Bernstein commented on SOLR-8593:
--

Yep, I think it's time to put this in master. I haven't yet run the entire test 
suite on this branch though. [~risdenk], how far have you gotten with tests and 
precommit? Any issues popping up?

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843701#comment-15843701
 ] 

Mark Miller commented on SOLR-10032:


This report covers 876 tests. If any tests share the same name, the results for 
one would currently be missed. We will have to start using full package names 
to avoid that.
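The collision described above can be sketched as follows: keying report results by simple class name silently merges two different tests, while keying by fully qualified name keeps them distinct. The class names here are hypothetical, for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the test-name collision: two hypothetical test classes
// share the simple name "TestRecovery" but live in different packages.
public class TestNameCollision {
    public static void main(String[] args) {
        String[] fqns = {
            "org.apache.solr.cloud.TestRecovery",
            "org.apache.solr.update.TestRecovery",
        };

        // Keyed by simple name: the second entry overwrites the first,
        // so one test's results go missing from the report.
        Map<String, String> bySimpleName = new HashMap<>();
        for (String fqn : fqns) {
            String simple = fqn.substring(fqn.lastIndexOf('.') + 1);
            bySimpleName.put(simple, fqn);
        }
        System.out.println(bySimpleName.size()); // 1 -- a collision

        // Keyed by fully qualified name: both tests keep their own entry.
        Map<String, String> byFqn = new HashMap<>();
        for (String fqn : fqns) {
            byFqn.put(fqn, fqn);
        }
        System.out.println(byFqn.size()); // 2 -- no collision
    }
}
```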

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
> iterations, 12 at a time .pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.






[jira] [Updated] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-27 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-10032:
---
Attachment: Lucene-Solr Master Test Beast Results 
01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
iterations, 12 at a time .pdf

Here is the first report. There may still be some kinks to work out. I'll 
summarize the report and add additional commentary later. I'll also send that 
to the dev list. We can file or surface JIRA issues for any test that is not 
solid and prompt fixes or badapple/awaitsfix annotations.

You can see the attached report, or view it here: 
https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
> iterations, 12 at a time .pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.






[jira] [Updated] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-01-27 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-10032:
---
Attachment: (was: Test-Report-Sample.pdf)

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.






[jira] [Created] (LUCENE-7664) Deprecate GeoPointField & queries

2017-01-27 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7664:
--

 Summary: Deprecate GeoPointField & queries
 Key: LUCENE-7664
 URL: https://issues.apache.org/jira/browse/LUCENE-7664
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 6.5.0, master (7.0)


The new dimensional points implementations for geo distance, polygon, and shape 
filtering are substantially faster and create a smaller index than the 
postings-based {{GeoPointField}}.  They also offer nearest neighbor search, 
which {{GeoPointField}} doesn't.

I think we should deprecate {{GeoPointField}} and focus on the points 
implementations.

We still have other legacy postings-based geo implementations, but I think we 
should keep them for now since they have functionality that points doesn't yet 
have: the ability to index a shape and search for shapes overlapping the 
indexed shapes.






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+153) - Build # 18859 - Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18859/
Java: 64bit/jdk-9-ea+153 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([D034F5D8C42E161F:5860CA026AD27BE7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7655) Speed up geo-distance queries that match most documents

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843662#comment-15843662
 ] 

Michael McCandless commented on LUCENE-7655:


+1

> Speed up geo-distance queries that match most documents
> ---
>
> Key: LUCENE-7655
> URL: https://issues.apache.org/jira/browse/LUCENE-7655
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I think the same optimization that was applied in LUCENE-7641 would also work 
> with geo-distance queries?






[jira] [Commented] (LUCENE-7661) Speed up LatLonPointInPolygonQuery

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843648#comment-15843648
 ] 

Michael McCandless commented on LUCENE-7661:


+1, wow :)

> Speed up LatLonPointInPolygonQuery
> --
>
> Key: LUCENE-7661
> URL: https://issues.apache.org/jira/browse/LUCENE-7661
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7661.patch
>
>
> We could apply the same idea as LUCENE-7656 to LatLonPointInPolygonQuery.






[jira] [Commented] (LUCENE-7663) Improve GeoPointDistanceQuery performance

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843639#comment-15843639
 ] 

Michael McCandless commented on LUCENE-7663:


bq. This approach can also be used to prioritize rectangles, for top-k search.

Oh, we also have a nearest neighbor implementation for points: 
{{LatLonPoint.nearest}}.  Seems like these papers could help that too?

> Improve GeoPointDistanceQuery performance
> -
>
> Key: LUCENE-7663
> URL: https://issues.apache.org/jira/browse/LUCENE-7663
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erich Schubert
>Priority: Minor
>
> GeoPoint queries currently use only the bounding box for filtering.
> But the query circle is only roughly 80% of the bounding box, so we could be 
> roughly 20% faster. Furthermore, the current approach requires splitting the 
> box if it crosses the date line.
> > Schubert, E., Zimek, A., & Kriegel, H. P. (2013, August). Geodetic distance 
> > queries on r-trees for indexing geographic data. In International Symposium 
> > on Spatial and Temporal Databases (pp. 146-164).
> The minimum spherical distance of a point to a rectangle is given ("Algorithm 
> 2: Optimized Minimum Distance Point to MBR"). Rectangles whose minimum 
> distance is larger than the query radius can be skipped. The authors used the 
> R-tree, but it will work with any bounding box, so it can be used in 
> CellComparator#relate.
> It's not very complex - a few case distinctions, and then either Haversine 
> distance or cross-track distance. So the cost is comparable to Haversine.
> This could be added as GeoRelationUtils.pointToRectMinimumDistance, for 
> example.
> This approach can also be used to prioritize rectangles, for top-k search.
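As a rough sketch of the filtering idea described above (not the paper's exact "Algorithm 2", and not actual Lucene code): clamp the query point into the rectangle in lat/lon space and take the Haversine distance to the clamped point. That is exact in the plane but only an approximation on the sphere; it is meant only to show how rectangles farther than the query radius could be skipped. All names here are hypothetical.

```java
// Minimal sketch of point-to-rectangle minimum-distance filtering.
// Assumed: a spherical Earth with mean radius R; lat/lon in degrees.
public class GeoMinDistance {
    static final double R = 6371008.8; // mean Earth radius in meters (assumed)

    // Haversine great-circle distance between two lat/lon points, in meters.
    static double haversine(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Approximate minimum distance from a point to a lat/lon rectangle:
    // clamp the point into the box, then Haversine to the clamped point.
    // (The paper's Algorithm 2 handles the spherical edge cases exactly,
    // using cross-track distance where needed; this sketch does not.)
    static double minDistanceToBox(double lat, double lon,
                                   double minLat, double maxLat,
                                   double minLon, double maxLon) {
        double cLat = Math.max(minLat, Math.min(maxLat, lat));
        double cLon = Math.max(minLon, Math.min(maxLon, lon));
        return haversine(lat, lon, cLat, cLon);
    }

    public static void main(String[] args) {
        // A point inside the box has distance 0, so its cell is never skipped.
        System.out.println(minDistanceToBox(1, 1, 0, 2, 0, 2)); // 0.0
        // One degree of longitude at the equator is roughly 111 km.
        System.out.println(haversine(0, 0, 0, 1));
    }
}
```

A query would then skip any cell whose `minDistanceToBox` result exceeds the query radius, instead of filtering on the bounding box alone.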






[jira] [Commented] (LUCENE-7663) Improve GeoPointDistanceQuery performance

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843638#comment-15843638
 ] 

Michael McCandless commented on LUCENE-7663:


This sounds promising!

But, we are moving away from {{GeoPointDistanceQuery}} in favor of the block KD 
tree (dimensional points) implementation, {{LatLonPoint.newDistanceQuery}}: the 
latter is quite a bit faster [as measured in our nightly geo 
benchmarks|http://home.apache.org/~mikemccand/geobench.html#search-distance], 
and it recently got even faster with LUCENE-7656.

Do you think the ideas from these papers would also apply to 
{{LatLonPoint.newDistanceQuery}}?

> Improve GeoPointDistanceQuery performance
> -
>
> Key: LUCENE-7663
> URL: https://issues.apache.org/jira/browse/LUCENE-7663
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erich Schubert
>Priority: Minor
>
> GeoPoint queries currently use only the bounding box for filtering.
> But the query circle is only roughly 80% of the bounding box, so we could be 
> roughly 20% faster. Furthermore, the current approach requires splitting the 
> box if it crosses the date line.
> > Schubert, E., Zimek, A., & Kriegel, H. P. (2013, August). Geodetic distance 
> > queries on r-trees for indexing geographic data. In International Symposium 
> > on Spatial and Temporal Databases (pp. 146-164).
> The minimum spherical distance of a point to a rectangle is given ("Algorithm 
> 2: Optimized Minimum Distance Point to MBR"). Rectangles whose minimum 
> distance is larger than the query radius can be skipped. The authors used the 
> R-tree, but it will work with any bounding box, so it can be used in 
> CellComparator#relate.
> It's not very complex - a few case distinctions, and then either Haversine 
> distance or cross-track distance. So the cost is comparable to Haversine.
> This could be added as GeoRelationUtils.pointToRectMinimumDistance, for 
> example.
> This approach can also be used to prioritize rectangles, for top-k search.






Re: [jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Yago Riveiro
No command to list aliases?

Calling CLUSTERSTATUS to fetch the list of aliases is annoying; aliases 
belong to collections, not to the cluster.

--

/Yago Riveiro

On 27 Jan 2017 23:01 +, Noble Paul (JIRA) wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843618#comment-15843618
>  ]
>
> Noble Paul commented on SOLR-8029:
> --
>
> Refer to this document. This is not fully updated and some of the items 
> marked as "missing" are actually implemented.
> https://docs.google.com/document/d/18n9IL6y82C8gnBred6lzG0GLaT3OsZZsBvJQ2YAt72I/edit?usp=sharing
>
> > Modernize and standardize Solr APIs
> > ---
> >
> > Key: SOLR-8029
> > URL: https://issues.apache.org/jira/browse/SOLR-8029
> > Project: Solr
> > Issue Type: Improvement
> > Affects Versions: 6.0
> > Reporter: Noble Paul
> > Assignee: Noble Paul
> > Labels: API, EaseOfUse
> > Fix For: 6.0
> >
> > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> > SOLR-8029.patch, SOLR-8029.patch
> >
> >
> > Solr APIs have organically evolved and they are sometimes inconsistent with 
> > each other or not in sync with the widely followed conventions of HTTP 
> > protocol. Trying to make incremental changes to make them modern is like 
> > applying band-aid. So, we have done a complete rethink of what the APIs 
> > should be. The most notable aspects of the API are as follows:
> > The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> > APIs will continue to work under the {{/solr}} path as they used to and 
> > they will be eventually deprecated.
> > There are 4 types of requests in the new API
> > * {{/v2//*}} : Hit a collection directly or manage 
> > collections/shards/replicas
> > * {{/v2//*}} : Hit a core directly or manage cores
> > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any 
> > collection or core. e.g: security, overseer ops etc
> > This will be released as part of a major release. Check the link given 
> > below for the full specification. Your comments are welcome
> > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


[jira] [Updated] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2017-01-27 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated SOLR-7759:
---
Attachment: SOLR_7759.patch

Simple patch attached.
Bear in mind that no tests have been written or run (apart from manually 
checking on a local Solr).
Just attaching it to start the discussion on whether it makes sense :)

> DebugComponent's explain should be implemented as a distributed query
> -
>
> Key: SOLR-7759
> URL: https://issues.apache.org/jira/browse/SOLR-7759
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR_7759.patch
>
>
> Currently when we use debugQuery to see the explanation of the matched 
> documents, the query fired to get the statistics for the matched documents is 
> not a distributed query.
> This is a problem when using distributed idf. The actual documents are being 
> scored using the global stats and not per-shard stats, but the explain will 
> show us the score by taking into account the stats from the shard the 
> document belongs to.
> We should try to implement the explain query as a distributed request so that 
> the scores match the actual document scores.
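The drift described above is plain arithmetic. A toy illustration (not Solr code), using Lucene's classic idf = 1 + ln(numDocs / (docFreq + 1)) with made-up global versus per-shard stats:

```java
/**
 * Toy illustration (not Solr code) of why a non-distributed explain drifts
 * from the real score: the document was scored with global stats, but the
 * explain request recomputes IDF from one shard's local stats.
 * Formula is Lucene's classic idf = 1 + ln(numDocs / (docFreq + 1)).
 */
public class ExplainDriftDemo {
  static double idf(long numDocs, long docFreq) {
    return 1 + Math.log((double) numDocs / (docFreq + 1));
  }

  public static void main(String[] args) {
    // global stats: 1,000,000 docs total, term appears in 1,000 of them
    double globalIdf = idf(1_000_000, 1_000);
    // the document's shard: 500,000 docs, term appears in 900 of them
    double localIdf = idf(500_000, 900);
    System.out.println(globalIdf); // ~7.91 -- used to score the document
    System.out.println(localIdf);  // ~7.32 -- what a local explain reports
  }
}
```

The stats are invented for illustration; the point is only that any divergence between global and shard-local docFreq/numDocs makes the explain output disagree with the returned score.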



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843618#comment-15843618
 ] 

Noble Paul commented on SOLR-8029:
--

Refer to this document. This is not fully updated and some of the items marked 
as "missing" are actually implemented. 
https://docs.google.com/document/d/18n9IL6y82C8gnBred6lzG0GLaT3OsZZsBvJQ2YAt72I/edit?usp=sharing

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7965) solrcore.properties not working in SolrCloud as expected

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-7965.
---
Resolution: Won't Fix

It's documented in the Ref Guide that solrcore.properties won't work in 
SolrCloud mode, and also that it may be removed in the future 
(https://cwiki.apache.org/confluence/display/solr/Configuring+solrconfig.xml#Configuringsolrconfig.xml-solrcore.properties),
 so closing this as Won't Fix.

> solrcore.properties not working in SolrCloud as expected
> 
>
> Key: SOLR-7965
> URL: https://issues.apache.org/jira/browse/SOLR-7965
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
> Environment: Linux
>Reporter: Shiva
>
> I have defined a variable in solrconfig.xml as below:
>  regex=".*\.jar"/>
> and referenced the same property in solrcore.properties as:
> customSolr.dir=/export/Applications/solr521/CustomSolr
> I uploaded the configuration to ZooKeeper with these changes, expecting 
> customSolr.dir=/export/... to be in effect. BTW, I have reloaded the 
> collection too.
> Please do the needful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8029:
-
Attachment: (was: SOLR-8029.patch)

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8029:
-
Attachment: SOLR-8029.patch

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7663) Improve GeoPointDistanceQuery performance

2017-01-27 Thread Erich Schubert (JIRA)
Erich Schubert created LUCENE-7663:
--

 Summary: Improve GeoPointDistanceQuery performance
 Key: LUCENE-7663
 URL: https://issues.apache.org/jira/browse/LUCENE-7663
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Erich Schubert
Priority: Minor


GeoPoint queries currently use only the bounding box for filtering.
But the query circle is only roughly 80% of the bounding box, so we could be 
roughly 20% faster. Furthermore, the current approach requires splitting the 
box if it crosses the date line.

> Schubert, E., Zimek, A., & Kriegel, H. P. (2013, August). Geodetic distance 
> queries on r-trees for indexing geographic data. In International Symposium 
> on Spatial and Temporal Databases (pp. 146-164).

The minimum spherical distance of a point to a rectangle is given ("Algorithm 
2: Optimized Minimum Distance Point to MBR"). Rectangles whose minimum distance 
is larger than the query radius can be skipped. The authors used the R-tree, 
but it will work with any bounding box, so it can be used in 
CellComparator#relate.
It's not very complex - a few case distinctions, and then either Haversine 
distance or cross-track distance - so the cost is comparable to Haversine 
distance alone.
This could be added as GeoRelationUtils.pointToRectMinimumDistance, for example.

This approach can also be used to prioritize rectangles, for top-k search.
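The rectangle-skipping test can be sketched as follows. This is a minimal, untested adaptation of the paper's idea; the class and the pointToRectMinimumDistance name mirror the suggestion in this issue but are assumptions, not the eventual Lucene code. It assumes the box does not cross the date line and spans less than 180 degrees of longitude:

```java
/**
 * Sketch of a spherical point-to-rectangle minimum distance, loosely
 * following Algorithm 2 of Schubert et al. (2013). Hypothetical names,
 * not Lucene API. Assumes the box does not cross the date line and spans
 * less than 180 degrees of longitude.
 */
public class PointToRectMinDistance {
  static final double R = 6371008.8; // mean Earth radius in meters

  static double haversin(double lat1, double lon1, double lat2, double lon2) {
    double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
               * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * R * Math.asin(Math.sqrt(a));
  }

  static double pointToRectMinimumDistance(double lat, double lon,
      double minLat, double maxLat, double minLon, double maxLon) {
    if (lon >= minLon && lon <= maxLon) {
      if (lat >= minLat && lat <= maxLat) {
        return 0; // point inside the rectangle
      }
      // nearest point lies due north/south on a parallel edge
      return haversin(lat, lon, lat < minLat ? minLat : maxLat, lon);
    }
    // nearest point lies on the closer meridian edge: drop a spherical
    // perpendicular onto that meridian and clamp its latitude to the edge
    double edgeLon = lon < minLon ? minLon : maxLon;
    double dLon = Math.toRadians(Math.abs(lon - edgeLon));
    double footLat = Math.toDegrees(
        Math.atan2(Math.tan(Math.toRadians(lat)), Math.cos(dLon)));
    double clamped = Math.max(minLat, Math.min(maxLat, footLat));
    return haversin(lat, lon, clamped, edgeLon);
  }

  public static void main(String[] args) {
    // point 10 degrees east of a 20x20-degree box centered on the equator
    System.out.println(pointToRectMinimumDistance(0, 20, -10, 10, -10, 10));
  }
}
```

A cell whose minimum distance exceeds the query radius can then be rejected without visiting its points, which is the ~20% saving over pure bounding-box filtering.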



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.4 - Build # 10 - Failure

2017-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/10/

12 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7FC9412F185C6C35]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexSorting

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7FC9412F185C6C35]:0)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([22A44D2D62EA209E:6AD1399964D90F0B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Commented] (SOLR-10048) Distributed result set paging sometimes yields incorrect results

2017-01-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843557#comment-15843557
 ] 

Hoss Man commented on SOLR-10048:
-

I'm sorry, I was confusingly imprecise before...

bq. ... which means the order is non-deterministic (and practically speaking 
winds up depending on the on disk ordering of the docs).

...what i should have said was:  ... Which means the default sort is by {{score 
desc}} w/o any secondary sort -- but since the query is {{\*:\*}} all docs 
(should) score identically, so the effective sort is non-deterministic ...

I'm not sure what might have changed between 6.3 and 6.4 to cause this test to 
only start failing recently (perhaps something changed in the shard querying 
logic to improve the randomized distribution of sub-requests?) but the bottom 
line is that w/o a deterministic sort, Solr has _never_ guaranteed consistent 
ordering between multiple requests.
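The effect is easy to reproduce outside Solr with a stable sort and all-equal scores. A toy Java illustration (not Solr internals; the natural-order id tiebreaker is just an example of a deterministic secondary sort):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

/**
 * Toy illustration: when every doc scores identically, a sort by score
 * alone preserves whatever order the docs arrived in (List.sort is stable),
 * so two requests that collect results in different orders page differently.
 * Adding any deterministic tiebreaker makes both runs agree.
 */
public class TieBreakerDemo {
  static List<String> sortIds(List<String> arrivalOrder, boolean tieBreak) {
    List<String> ids = new ArrayList<>(arrivalOrder);
    Comparator<String> byScore = (a, b) -> 0; // all docs score equally
    ids.sort(tieBreak
        ? byScore.thenComparing(Comparator.<String>naturalOrder())
        : byScore);
    return ids;
  }

  public static void main(String[] args) {
    List<String> runA = Arrays.asList("doc3", "doc1", "doc2");
    List<String> runB = Arrays.asList("doc2", "doc3", "doc1");
    // score-only sort: each "request" keeps its own arrival order
    System.out.println(sortIds(runA, false)); // [doc3, doc1, doc2]
    System.out.println(sortIds(runB, false)); // [doc2, doc3, doc1]
    // with a tiebreaker, both runs agree
    System.out.println(sortIds(runA, true));  // [doc1, doc2, doc3]
    System.out.println(sortIds(runB, true));  // [doc1, doc2, doc3]
  }
}
```

In Solr terms this is why paging a {{\*:\*}} query needs an explicit secondary sort (e.g. on the uniqueKey field) before consecutive pages can be expected to line up.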

> Distributed result set paging sometimes yields incorrect results
> 
>
> Key: SOLR-10048
> URL: https://issues.apache.org/jira/browse/SOLR-10048
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.4
>Reporter: Markus Jelsma
>Priority: Critical
> Fix For: 6.4.1, master (7.0)
>
> Attachments: DistributedPagedQueryComponentTest.java
>
>
> This bug appeared in 6.4 and I spotted it yesterday when I upgraded my 
> project. It has, amongst others, an extension of QueryComponent; its unit 
> test failed, but never in any predictable way and always in another spot.
> The test is very straightforward: it indexes a bunch of silly documents and 
> then executes a series of getAll() queries. An array of ids is collected and 
> stored for comparison. Then, the same query is executed again, but it pages 
> through the entire result set.
> It then compares ids: the id at position N in the full set must be the same 
> as the id at position NUM_PAGE * PAGE_SIZE + M in the paged set (where M is 
> the position of the result within the page). The comparison sometimes fails.
> I'll attach the test for 6.4 shortly. If it passes, just try it again (or 
> increase maxDocs). It can pass over ten times in a row, but it can also fail 
> ten times in a row.
> You should see this if it fails, but probably with different values for 
> expected and actual. Below was a few minutes ago; now I can't seem to 
> reproduce it anymore.
> {code}
>[junit4] FAILURE 25.1s | 
> DistributedPagedQueryComponentTest.testTheCrazyPager <<<
>[junit4]> Throwable #1: java.lang.AssertionError: ids misaligned 
> expected:<406> but was:<811>
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([97A7F02D1E4ACF75:7493130F03129E6D]:0)
>[junit4]>at 
> org.apache.solr.handler.component.DistributedPagedQueryComponentTest.testTheCrazyPager(DistributedPagedQueryComponentTest.java:83)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9983) TestManagedSchemaThreadSafety.testThreadSafety() failures

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843546#comment-15843546
 ] 

ASF subversion and git services commented on SOLR-9983:
---

Commit 55c1e88d907bf54610f72981f7b569ae45d0 in lucene-solr's branch 
refs/heads/branch_6x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=55c1e88 ]

SOLR-9983: fixing TestManagedSchemaThreadSafety NPE failure.

Also testing session expiration and set sensible Zookeeper connection
timeout.


> TestManagedSchemaThreadSafety.testThreadSafety() failures
> -
>
> Key: SOLR-9983
> URL: https://issues.apache.org/jira/browse/SOLR-9983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9983-connection-loss-retry.patch, SOLR-9983.patch, 
> tests-failures-TestManagedSchemaThreadSafety-724.txt
>
>
> I set up a Jenkins job to hammer all tests on the {{jira/solr-5944}} branch, 
> and at least four times this test failed (none of the seeds reproduce for 
> me): [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/155/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/167/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/106/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/332/].  My email search 
> didn't turn up any failures on ASF or Policeman Jenkins. Here's the output 
> from one of the above runs:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestManagedSchemaThreadSafety -Dtests.method=testThreadSafety 
> -Dtests.seed=3DB2B79301AA806B -Dtests.slow=true -Dtests.locale=lt 
> -Dtests.timezone=Asia/Anadyr -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   4.37s J4  | 
> TestManagedSchemaThreadSafety.testThreadSafety <<<
>[junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([3DB2B79301AA806B:A7F8A3CBD235329D]:0)
>[junit4]>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>[junit4]>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.testThreadSafety(TestManagedSchemaThreadSafety.java:126)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.RuntimeException: 
> org.apache.solr.common.SolrException: Error loading solr config from 
> solrconfig.xml
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:159)
>[junit4]>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>[junit4]>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>[junit4]>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>[junit4]>  ... 1 more
>[junit4]> Caused by: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:187)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:152)
>[junit4]>  ... 6 more
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:99)
>[junit4]>  at 
> org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:361)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:120)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:90)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.(SolrConfig.java:202)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:179)
>[junit4]>  ... 7 more
> {noformat}
> Looks to me like this is a test bug: the test mocks {{ZkController}}, but the 
> mock returns null for (the uninitialized {{cc}} returned by) 
> 

[jira] [Commented] (SOLR-9983) TestManagedSchemaThreadSafety.testThreadSafety() failures

2017-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843538#comment-15843538
 ] 

ASF subversion and git services commented on SOLR-9983:
---

Commit d9741205b5a39a5d0d4f63698adfcabe0a6a5892 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d974120 ]

SOLR-9983: fixing TestManagedSchemaThreadSafety NPE failure.

Also testing session expiration and set sensible Zookeeper connection
timeout.


> TestManagedSchemaThreadSafety.testThreadSafety() failures
> -
>
> Key: SOLR-9983
> URL: https://issues.apache.org/jira/browse/SOLR-9983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9983-connection-loss-retry.patch, SOLR-9983.patch, 
> tests-failures-TestManagedSchemaThreadSafety-724.txt
>
>
> I set up a Jenkins job to hammer all tests on the {{jira/solr-5944}} branch, 
> and at least four times this test failed (none of the seeds reproduce for 
> me): [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/155/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/167/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/106/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/332/].  My email search 
> didn't turn up any failures on ASF or Policeman Jenkins. Here's the output 
> from one of the above runs:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestManagedSchemaThreadSafety -Dtests.method=testThreadSafety 
> -Dtests.seed=3DB2B79301AA806B -Dtests.slow=true -Dtests.locale=lt 
> -Dtests.timezone=Asia/Anadyr -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   4.37s J4  | 
> TestManagedSchemaThreadSafety.testThreadSafety <<<
>[junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([3DB2B79301AA806B:A7F8A3CBD235329D]:0)
>[junit4]>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>[junit4]>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.testThreadSafety(TestManagedSchemaThreadSafety.java:126)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.RuntimeException: 
> org.apache.solr.common.SolrException: Error loading solr config from 
> solrconfig.xml
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:159)
>[junit4]>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>[junit4]>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>[junit4]>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>[junit4]>  ... 1 more
>[junit4]> Caused by: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:187)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:152)
>[junit4]>  ... 6 more
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:99)
>[junit4]>  at 
> org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:361)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:120)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:90)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.(SolrConfig.java:202)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:179)
>[junit4]>  ... 7 more
> {noformat}
> Looks to me like this is a test bug: the test mocks {{ZkController}}, but the 
> mock returns null for (the uninitialized {{cc}} returned by) 
> 

[jira] [Closed] (SOLR-7398) Major imbalance between different shard numDocs in SolrCloud on HDFS

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-7398.
---
Resolution: Duplicate

Closing as duplicate of SOLR-7395. Please reopen if this is an error.

> Major imbalance between different shard numDocs in SolrCloud on HDFS
> 
>
> Key: SOLR-7398
> URL: https://issues.apache.org/jira/browse/SOLR-7398
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, SolrCloud
>Affects Versions: 4.10.3
> Environment: HDP 2.2 / HDP Search
>Reporter: Hari Sekhon
> Attachments: 145_core.png, 146_core.png, 147_core.png, 149_core.png, 
> Cloud UI.png
>
>
> I've observed major numDocs imbalance between shards in a collection, such as 
> 6k vs 193k docs between the two shards.
> See the attached screenshots, which show the shards and replicas as well as 
> the core UI output of each of the shard cores, taken at the same time.
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 639 - Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/639/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info
at 
__randomizedtesting.SeedInfo.seed([C556D21BFF9ABEAE:4D02EDC15166D356]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1174)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1115)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:975)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843526#comment-15843526
 ] 

Noble Paul commented on SOLR-8029:
--

[~arafalov] Good question. 

It's unlikely that it sticks to the original documentation fully.

I shall just put up a small one-pager for folks who wish to review this quickly.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7071) SolrCore init failure handling preempts SolrCloud's failover support.

2017-01-27 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843518#comment-15843518
 ] 

Cassandra Targett commented on SOLR-7071:
-

I believe SOLR-7069 is the same problem.

> SolrCore init failure handling preempts SolrCloud's failover support.
> -
>
> Key: SOLR-7071
> URL: https://issues.apache.org/jira/browse/SOLR-7071
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> If you are using a load balancer or direct querying and one of your replicas 
> can't load a core for some reason (init failure due to index corruption or 
> bad config or whatever), if a query for a collection hits that node, it 
> won't get proxied to another node for good failover - you will get an error 
> returned about the init failure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843499#comment-15843499
 ] 

Alexandre Rafalovitch commented on SOLR-8029:
-

Awesome news.

So, is this now fully implementing the original specs and answering all issue 
questions? Or is there a newer document reflecting the actual implementation?

How do we review this, apart from pure code-reading?

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2017-01-27 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8491:
---
Attachment: SOLR-8491.patch

Patch that I am testing.

> solr.cmd SOLR_SSL_OPTS is overwritten
> -
>
> Key: SOLR-8491
> URL: https://issues.apache.org/jira/browse/SOLR-8491
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2, 6.0
> Environment: Windows
>Reporter: Sam Yi
>Assignee: Kevin Risden
> Attachments: SOLR-8491.patch
>
>
> In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, but then 
> assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to attempt 
> to append to itself.  However, since we're still inside the same block for 
> this 2nd assignment, {{%SOLR_SSL_OPTS%}} resolves to nothing, so everything 
> in the first assignment (the solr.jetty opts) becomes overwritten.
> I was able to work around this by using {code}!SOLR_SSL_OPTS!{code} instead 
> of {{%SOLR_SSL_OPTS%}} in the 2nd assignments (in both the {{IF}} and 
> {{ELSE}} blocks), since delayed expansion is enabled.
> Here's the full block for reference, from commit 
> d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
> {code}IF DEFINED SOLR_SSL_KEY_STORE (
>   set "SOLR_JETTY_CONFIG=--module=https"
>   set SOLR_URL_SCHEME=https
>   set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
>   set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
> -Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
> -Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
> -Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
> -Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
>   IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
>   ) ELSE (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
>   )
> ) ELSE (
>   set SOLR_SSL_OPTS=
> )
> {code}
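The workaround the reporter describes -- swapping {{%SOLR_SSL_OPTS%}} for {{!SOLR_SSL_OPTS!}} in the second assignments -- would look roughly like the fragment below. This is a sketch of the reported fix, not the committed patch; only the keyStore property is shown and the remaining -D flags are unchanged.

```bat
REM Inside a parenthesized block, %VAR% is expanded once when the whole block
REM is parsed, so %SOLR_SSL_OPTS% resolves to the value it had *before* the
REM block ran (here: empty). With delayed expansion enabled, !SOLR_SSL_OPTS!
REM is expanded at execution time and therefore sees the solr.jetty options
REM set earlier in the same block.
  IF DEFINED SOLR_SSL_CLIENT_KEY_STORE (
    set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE%"
  ) ELSE (
    set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE%"
  )
```

Note that {{%SOLR_SSL_CLIENT_KEY_STORE%}} can stay percent-expanded: it is set before the block is entered, so its parse-time value is already correct.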



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2017-01-27 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-8491:
--

Assignee: Kevin Risden

> solr.cmd SOLR_SSL_OPTS is overwritten
> -
>
> Key: SOLR-8491
> URL: https://issues.apache.org/jira/browse/SOLR-8491
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2, 6.0
> Environment: Windows
>Reporter: Sam Yi
>Assignee: Kevin Risden
> Attachments: SOLR-8491.patch
>
>
> In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, but then 
> assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to attempt 
> to append to itself.  However, since we're still inside the same block for 
> this 2nd assignment, {{%SOLR_SSL_OPTS%}} resolves to nothing, so everything 
> in the first assignment (the solr.jetty opts) becomes overwritten.
> I was able to work around this by using {code}!SOLR_SSL_OPTS!{code} instead 
> of {{%SOLR_SSL_OPTS%}} in the 2nd assignments (in both the {{IF}} and 
> {{ELSE}} blocks), since delayed expansion is enabled.
> Here's the full block for reference, from commit 
> d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
> {code}IF DEFINED SOLR_SSL_KEY_STORE (
>   set "SOLR_JETTY_CONFIG=--module=https"
>   set SOLR_URL_SCHEME=https
>   set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
>   set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
> -Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
> -Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
> -Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
> -Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
>   IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
>   ) ELSE (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
>   )
> ) ELSE (
>   set SOLR_SSL_OPTS=
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2017-01-27 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8491:
---
Affects Version/s: 6.4
   6.1
   6.2
   6.3

> solr.cmd SOLR_SSL_OPTS is overwritten
> -
>
> Key: SOLR-8491
> URL: https://issues.apache.org/jira/browse/SOLR-8491
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2, 6.0, 6.1, 6.2, 6.3, 6.4
> Environment: Windows
>Reporter: Sam Yi
>Assignee: Kevin Risden
> Attachments: SOLR-8491.patch
>
>
> In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, but then 
> assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to attempt 
> to append to itself.  However, since we're still inside the same block for 
> this 2nd assignment, {{%SOLR_SSL_OPTS%}} resolves to nothing, so everything 
> in the first assignment (the solr.jetty opts) becomes overwritten.
> I was able to work around this by using {code}!SOLR_SSL_OPTS!{code} instead 
> of {{%SOLR_SSL_OPTS%}} in the 2nd assignments (in both the {{IF}} and 
> {{ELSE}} blocks), since delayed expansion is enabled.
> Here's the full block for reference, from commit 
> d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
> {code}IF DEFINED SOLR_SSL_KEY_STORE (
>   set "SOLR_JETTY_CONFIG=--module=https"
>   set SOLR_URL_SCHEME=https
>   set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
>   set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
> -Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
> -Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
> -Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
> -Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
>   IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
>   ) ELSE (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
>   )
> ) ELSE (
>   set SOLR_SSL_OPTS=
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8029) Modernize and standardize Solr APIs

2017-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8029:
-
Attachment: SOLR-8029.patch

Patch with all tests and precommit passing.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7048) warning:org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException,this warn's problem take place when searching and indexing

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-7048.
---
Resolution: Cannot Reproduce

Closing as Cannot Reproduce since there isn't a lot to go on to try to 
reproduce; plus, some Googling indicates this is an HDFS problem.

> warning:org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException,this
>  warn's problem take place when searching and indexing
> -
>
> Key: SOLR-7048
> URL: https://issues.apache.org/jira/browse/SOLR-7048
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs, SolrCloud
>Affects Versions: 4.8.1
> Environment: hadoop cluster:HDP2.1,solr version:4.8.1
>Reporter: kelo2015
>
> indexing:
> org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
> access control error while attempting to set up short-circuit access to 
> /user/solr/sub_2014_s08/data/index/_1k1.fdtBlock token with 
> block_token_identifier (expiryDate=1422394876052, keyId=-280715669, 
> userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
> blockId=1117922079, access modes=[READ]) is expired.
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
>   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
>   at 
> org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
>   at 
> org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readIntoCacheAndResult(BlockDirectory.java:210)
>   at 
> org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.fetchBlock(BlockDirectory.java:197)
>   at 
> org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readInternal(BlockDirectory.java:181)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
>   at 
> org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
>   at 
> org.apache.lucene.store.BufferedChecksumIndexInput.readBytes(BufferedChecksumIndexInput.java:49)
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:101)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$ChunkIterator.decompress(CompressingStoredFieldsReader.java:501)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:387)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:322)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:100)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> searching :
> org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
> access control error while attempting to set up short-circuit access to 
> /user/solr/data/index/_s5d_Lucene41_0.timBlock token with 
> block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
> userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
> blockId=1086137313, access modes=[READ]) is expired.
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
>   at 
> 

[jira] [Updated] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8045:
-
Issue Type: Wish  (was: Sub-task)
Parent: (was: SOLR-8029)

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as-is and behave exactly the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9983) TestManagedSchemaThreadSafety.testThreadSafety() failures

2017-01-27 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843484#comment-15843484
 ] 

Mikhail Khludnev commented on SOLR-9983:


I haven't cherry-picked it yet.

> TestManagedSchemaThreadSafety.testThreadSafety() failures
> -
>
> Key: SOLR-9983
> URL: https://issues.apache.org/jira/browse/SOLR-9983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: master (7.0)
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9983-connection-loss-retry.patch, SOLR-9983.patch, 
> tests-failures-TestManagedSchemaThreadSafety-724.txt
>
>
> I set up a Jenkins job to hammer all tests on the {{jira/solr-5944}} branch, 
> and at least four times this test failed (none of the seeds reproduce for 
> me): [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/155/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/167/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/106/], 
> [http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/332/].  My email search 
> didn't turn up any failures on ASF or Policeman Jenkins. Here's the output 
> from one of the above runs:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestManagedSchemaThreadSafety -Dtests.method=testThreadSafety 
> -Dtests.seed=3DB2B79301AA806B -Dtests.slow=true -Dtests.locale=lt 
> -Dtests.timezone=Asia/Anadyr -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   4.37s J4  | 
> TestManagedSchemaThreadSafety.testThreadSafety <<<
>[junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([3DB2B79301AA806B:A7F8A3CBD235329D]:0)
>[junit4]>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>[junit4]>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.testThreadSafety(TestManagedSchemaThreadSafety.java:126)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.RuntimeException: 
> org.apache.solr.common.SolrException: Error loading solr config from 
> solrconfig.xml
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:159)
>[junit4]>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>[junit4]>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>[junit4]>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>[junit4]>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>[junit4]>  ... 1 more
>[junit4]> Caused by: org.apache.solr.common.SolrException: Error 
> loading solr config from solrconfig.xml
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:187)
>[junit4]>  at 
> org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:152)
>[junit4]>  ... 6 more
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:99)
>[junit4]>  at 
> org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:361)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:120)
>[junit4]>  at org.apache.solr.core.Config.(Config.java:90)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.(SolrConfig.java:202)
>[junit4]>  at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:179)
>[junit4]>  ... 7 more
> {noformat}
> Looks to me like this is a test bug: the test mocks {{ZkController}}, but the 
> mock returns null for (the uninitialized {{cc}} returned by) 
> {{getCoreContainer()}}, which is called when the ZK session expires in 
> {{ZkSolrResourceLoader.openResource()}}.  The NPE is triggered when 
> {{isShutdown()}} is called on the null core container:
> {code:java|title=ZkSolrResourceLoader.java}
>  97: } catch (KeeperException.SessionExpiredException e) {
>  98:   exception = 

[jira] [Closed] (SOLR-6432) ant example shouldn't create 'bin' directory inside example/solr/

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6432.
---
Resolution: Won't Fix

'ant example' was removed with SOLR-6926.

> ant example shouldn't create 'bin' directory inside example/solr/
> -
>
> Key: SOLR-6432
> URL: https://issues.apache.org/jira/browse/SOLR-6432
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Anshum Gupta
>Priority: Minor
>
> 'ant example' creates an empty directory which might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2017-01-27 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843436#comment-15843436
 ] 

Kevin Risden commented on SOLR-8491:


I'll submit a patch for this in a few minutes. I hope to test this this 
afternoon. It would be great to get into 6.4.1.

> solr.cmd SOLR_SSL_OPTS is overwritten
> -
>
> Key: SOLR-8491
> URL: https://issues.apache.org/jira/browse/SOLR-8491
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2, 6.0
> Environment: Windows
>Reporter: Sam Yi
>
> In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, but then 
> assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to attempt 
> to append to itself.  However, since we're still inside the same block for 
> this 2nd assignment, {{%SOLR_SSL_OPTS%}} resolves to nothing, so everything 
> in the first assignment (the solr.jetty opts) becomes overwritten.
> I was able to work around this by using {code}!SOLR_SSL_OPTS!{code} instead 
> of {{%SOLR_SSL_OPTS%}} in the 2nd assignments (in both the {{IF}} and 
> {{ELSE}} blocks), since delayed expension is enabled.
> Here's the full block for reference, from commit 
> d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
> {code}IF DEFINED SOLR_SSL_KEY_STORE (
>   set "SOLR_JETTY_CONFIG=--module=https"
>   set SOLR_URL_SCHEME=https
>   set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
>   set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
> -Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
> -Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
> -Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
> -Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
>   IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
>   ) ELSE (
> set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
>   )
> ) ELSE (
>   set SOLR_SSL_OPTS=
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-10025) SOLR_SSL_OPTS are ignored in bin\solr.cmd

2017-01-27 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-10025.
---
Resolution: Duplicate

Looks like 6.4 is affected by this and it is a duplicate of SOLR-8491. I can 
address it with a patch in SOLR-8491.

> SOLR_SSL_OPTS are ignored in bin\solr.cmd
> -
>
> Key: SOLR-10025
> URL: https://issues.apache.org/jira/browse/SOLR-10025
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Andy Hind
>
> SSL config fails on Windows.
> Requires fixes for late binding.
> See !SOLR_SSL_OPTS! below 
> {code}
> REM Select HTTP OR HTTPS related configurations
> set SOLR_URL_SCHEME=http
> set "SOLR_JETTY_CONFIG=--module=http"
> set "SOLR_SSL_OPTS= "
> IF DEFINED SOLR_SSL_KEY_STORE (
>   set "SOLR_JETTY_CONFIG=--module=https"
>   set SOLR_URL_SCHEME=https
>   set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
>   set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
> -Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
> -Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
> -Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
> -Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
>   IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
> set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
>   ) ELSE (
> set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
> -Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
> -Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
> -Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
> -Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
>   )
> ) ELSE (
>   set SOLR_SSL_OPTS=
> )
> {code}
> We also use a non-default keystore type and have to disable peer name 
> checking:
> {code}
> -a ". -Djavax.net.ssl.keyStoreType=JCEKS 
> -Djavax.net.ssl.trustStoreType=JCEKS -Dsolr.ssl.checkPeerName=false"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6400) SolrCloud tests are not properly testing session expiration.

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6400.
---
Resolution: Fixed

This looks like it's been fixed for a while. If I'm interpreting the commits 
wrong, please re-open.

> SolrCloud tests are not properly testing session expiration.
> 
>
> Key: SOLR-6400
> URL: https://issues.apache.org/jira/browse/SOLR-6400
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6400.patch
>
>
> We are using a test method from the ZK project to pause the connection for a 
> length of time. A while back, I found that the pause time did not really 
> matter. All that happens is that the connection is closed and the zk client 
> creates a new one. So it just causes a disconnect event, but never reaches expiration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1102 - Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1102/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([7835D1824B325C8C:F061EE58E5CE3174]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-10044) Snap package for Solr

2017-01-27 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843402#comment-15843402
 ] 

Christine Poerschke commented on SOLR-10044:


Linking SOLR-5176 as semi-related i.e. also about packaging.

> Snap package for Solr
> -
>
> Key: SOLR-10044
> URL: https://issues.apache.org/jira/browse/SOLR-10044
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Roberto Mier
>
> Yesterday I completed the code for a snap package for Solr. You can find the 
> source code at https://github.com/rmescandon/solr-snap.
> It includes:
> - the Solr 6.4.0 release running as a daemon
> - a hook that creates a core for Nextant by default when installed
> Obviously this is an initial idea and can be modified or completed with 
> whatever you consider needed or better.
> You can clone the source code and try it:
> - git clone g...@github.com:rmescandon/solr-snap.git
> - snapcraft
> - snap install  --dangerous 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6056) Zookeeper crash JVM stack OOM because of recover strategy

2017-01-27 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-6056.
---
   Resolution: Fixed
Fix Version/s: 5.0

Per Ishan's last comment, it seems part of this issue was committed, and the 
other part was fixed in SOLR-8371.
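
The thread storm described in this issue — a new recovery thread per failing update — follows a generic pattern, and the usual mitigation is to coalesce duplicate recovery requests so that at most one is in flight per core. A minimal sketch of that idea (illustrative Python only, not Solr's actual CoreAdminHandler logic; all names are hypothetical):

```python
import threading

class RecoveryCoalescer:
    """Allow at most one in-flight recovery per core; duplicates are dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._in_flight = set()

    def request_recovery(self, core, do_recover):
        with self._lock:
            if core in self._in_flight:
                return False  # a recovery is already running for this core
            self._in_flight.add(core)
        try:
            do_recover(core)
            return True
        finally:
            with self._lock:
                self._in_flight.discard(core)

coalescer = RecoveryCoalescer()
runs = []

def recover(core):
    # A duplicate request arriving while recovery is in flight is a no-op.
    assert coalescer.request_recovery(core, lambda c: None) is False
    runs.append(core)

assert coalescer.request_recovery("core1", recover) is True
assert runs == ["core1"]
# Once finished, a new recovery for the same core is allowed again.
assert coalescer.request_recovery("core1", lambda c: runs.append(c)) is True
assert runs == ["core1", "core1"]
```

With this shape, a flood of failing updates collapses into at most one recovery (plus one queued status publish) per core, instead of one thread per request.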

> Zookeeper crash JVM stack OOM because of recover strategy 
> --
>
> Key: SOLR-6056
> URL: https://issues.apache.org/jira/browse/SOLR-6056
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6
> Environment: Two linux servers, 65G memory, 16 core cpu
> 20 collections, every collection has one shard two replica 
> one zookeeper
>Reporter: Raintung Li
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: cluster, crash, recover
> Fix For: 5.0
>
> Attachments: patch-6056.txt
>
>
> Errors such as "org.apache.solr.common.SolrException: Error opening new 
> searcher. exceeded limit of maxWarmingSearchers=2, try again later" cause 
> DistributedUpdateProcessor to trigger the core admin recovery process
> (see DistributedUpdateProcessor.java, doFinish()).
> That means every failing update request sends a core admin recovery request.
> The terrible thing is that CoreAdminHandler starts a new thread to publish the 
> recovery status and start recovery. Threads increase very quickly and the 
> stack OOMs; the Overseer can't handle so many status updates, and the 
> ZooKeeper nodes under /overseer/queue (e.g. qn-125553) grew by more than 40 
> thousand in two minutes.
> In the end ZooKeeper crashed.
> Worse, with the queue holding so many nodes, the cluster 
> can't publish the right status because only one Overseer works; I had to 
> start three threads to clear the queue nodes. The cluster didn't work 
> normally for nearly 30 minutes...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-01-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843380#comment-15843380
 ] 

Erick Erickson commented on SOLR-10006:
---

This may be funny: I claimed I "removed random segment files", but probably 
habitually removed the doc file. Sigh.

To be thorough, I suppose I should remove one file at a time...

Erick

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but fails because the core can't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10044) Snap package for Solr

2017-01-27 Thread Michael Hall (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843367#comment-15843367
 ] 

Michael Hall commented on SOLR-10044:
-

The .tgz is only downloaded at build time, so it really only matters to the 
person building the .snap package. Once built, the .snap can be hosted 
anywhere, but ideally it would at least be in the Snap Store hosted by 
Canonical, because that's what is used by default. New snaps can be built that 
use 6.4.1 or 6.5.0 once those are released (or even before, and published to 
the 'edge' and 'beta' release channels in the store).

> Snap package for Solr
> -
>
> Key: SOLR-10044
> URL: https://issues.apache.org/jira/browse/SOLR-10044
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Roberto Mier
>
> Yesterday I completed the code for a snap package for Solr. You can find the 
> source code at https://github.com/rmescandon/solr-snap.
> It includes:
> - the Solr 6.4.0 release running as a daemon
> - a hook that creates a core for Nextant by default when installed
> Obviously this is an initial idea and can be modified or completed with 
> whatever you consider needed or better.
> You can clone the source code and try it:
> - git clone g...@github.com:rmescandon/solr-snap.git
> - snapcraft
> - snap install  --dangerous 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10048) Distributed result set paging sometimes yields incorrect results

2017-01-27 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843327#comment-15843327
 ] 

Markus Jelsma commented on SOLR-10048:
--

As far as I know, the sort is implicitly score desc. Since this test never 
failed from roughly 5.4 onward, it seems the result sets never had any ties on 
score; apparently on 6.4 they do. Of course, explicitly sorting on score 
desc,id asc solves the problem.

Any idea why this is not reproducible on 6.3? I ran it hundreds of times 
without failure. Is it possible 6.4 causes scores to tie in some cases?
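
To make the tie problem concrete, here is a sketch in plain Python (not Solr's merge code; the data and function names are hypothetical): if tied documents can arrive in a different order on each request, score-only paging can drift, while a (score desc, id asc) key is a total order and keeps every page aligned with the full result set:

```python
import random

# Hypothetical data: scores rounded to one decimal place so ties are guaranteed.
docs = [{"id": i, "score": round(random.random(), 1)} for i in range(100)]

def fetch_page(page, size, tiebreak):
    # Each request simulates a fresh shard merge: tied documents may
    # arrive in a different order every time.
    arrived = random.sample(docs, len(docs))
    if tiebreak:
        key = lambda d: (-d["score"], d["id"])   # total order
    else:
        key = lambda d: -d["score"]              # ties keep arrival order
    return sorted(arrived, key=key)[page * size:(page + 1) * size]

full = [d["id"] for d in sorted(docs, key=lambda d: (-d["score"], d["id"]))]
paged = [d["id"] for p in range(10) for d in fetch_page(p, 10, tiebreak=True)]

# With the deterministic tiebreaker every page lines up with the full set.
assert paged == full
```

With tiebreak=False the ranking of tied documents is non-deterministic across requests, so the same page comparison can fail intermittently — consistent with the unpredictable failures this test sees.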

Thanks

> Distributed result set paging sometimes yields incorrect results
> 
>
> Key: SOLR-10048
> URL: https://issues.apache.org/jira/browse/SOLR-10048
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.4
>Reporter: Markus Jelsma
>Priority: Critical
> Fix For: 6.4.1, master (7.0)
>
> Attachments: DistributedPagedQueryComponentTest.java
>
>
> This bug appeared in 6.4; I spotted it yesterday when I upgraded my 
> project. It has, among other things, an extension of QueryComponent whose 
> unit test failed, but never in any predictable way and always in a different 
> spot.
> The test is very straightforward: it indexes a bunch of silly documents and 
> then executes a series of getAll() queries. An array of ids is collected and 
> stored for comparison. Then, the same query is executed again, but paging 
> through the entire result set.
> It then compares ids: the id at position N of the full set must be the same 
> as the id at position M of page NUM_PAGE, where N = NUM_PAGE * PAGE_SIZE + M. 
> The comparison sometimes fails.
> I'll attach the test for 6.4 shortly. If it passes, just try it again (or 
> increase maxDocs). It can pass over ten times in a row, but it can also fail 
> ten times in a row.
> If it fails you should see something like the following, probably with 
> different values for expected and actual. The output below is from a few 
> minutes ago; now I can't seem to reproduce it anymore.
> {code}
>[junit4] FAILURE 25.1s | 
> DistributedPagedQueryComponentTest.testTheCrazyPager <<<
>[junit4]> Throwable #1: java.lang.AssertionError: ids misaligned 
> expected:<406> but was:<811>
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([97A7F02D1E4ACF75:7493130F03129E6D]:0)
>[junit4]>at 
> org.apache.solr.handler.component.DistributedPagedQueryComponentTest.testTheCrazyPager(DistributedPagedQueryComponentTest.java:83)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10044) Snap package for Solr

2017-01-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843326#comment-15843326
 ] 

Shawn Heisey commented on SOLR-10044:
-

What does this do that's different/better than the service installer script 
that comes with Solr?  Have we made some kind of mistake in that script that 
you're trying to correct?

The config seems to limit the download to a single Apache mirror in 
Madrid.  That will probably be less than ideal for anyone outside of Europe, 
and if enough people use it, it will result in that mirror server's Internet 
connection getting overloaded.  Also, the tarball you've referenced will 
disappear from that mirror as soon as 6.4.1 or 6.5.0 is released.


> Snap package for Solr
> -
>
> Key: SOLR-10044
> URL: https://issues.apache.org/jira/browse/SOLR-10044
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Roberto Mier
>
> Yesterday I completed the code for a snap package for Solr. You can find the 
> source code at https://github.com/rmescandon/solr-snap.
> It includes:
> - the Solr 6.4.0 release running as a daemon
> - a hook that creates a core for Nextant by default when installed
> Obviously this is an initial idea and can be modified or completed with 
> whatever you consider needed or better.
> You can clone the source code and try it:
> - git clone g...@github.com:rmescandon/solr-snap.git
> - snapcraft
> - snap install  --dangerous 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-01-27 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843313#comment-15843313
 ] 

Julian Hyde commented on SOLR-8593:
---

If you have any "linking" issues with protobuf, you might check out HIVE-15708, 
which arose because Hive used both avatica-core (which shades protobuf) 
and avatica (which does not).

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9983) TestManagedSchemaThreadSafety.testThreadSafety() failures

2017-01-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843309#comment-15843309
 ] 

Steve Rowe edited comment on SOLR-9983 at 1/27/17 6:44 PM:
---

My Jenkins had a failure: 
[http://jenkins.sarowe.net/job/Lucene-Solr-tests-master/9459/] (doesn't 
reproduce for me):

{noformat}
Checking out Revision 01878380226c5be6bfedc45b8fb6174de4181a7c 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestManagedSchemaThreadSafety -Dtests.method=testThreadSafety 
-Dtests.seed=932CA88E4A647823 -Dtests.slow=true -Dtests.locale=pl 
-Dtests.timezone=Brazil/Acre -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   4.28s J6  | TestManagedSchemaThreadSafety.testThreadSafety 
<<<
   [junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: org.apache.solr.common.SolrException: Error loading 
solr config from solrconfig.xml
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([932CA88E4A647823:966BCD699FBCAD5]:0)
   [junit4]>at 
java.util.concurrent.FutureTask.report(FutureTask.java:122)
   [junit4]>at 
java.util.concurrent.FutureTask.get(FutureTask.java:192)
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.testThreadSafety(TestManagedSchemaThreadSafety.java:126)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.lang.RuntimeException: 
org.apache.solr.common.SolrException: Error loading solr config from 
solrconfig.xml
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:159)
   [junit4]>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]>... 1 more
   [junit4]> Caused by: org.apache.solr.common.SolrException: Error loading 
solr config from solrconfig.xml
   [junit4]>at 
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:187)
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:152)
   [junit4]>... 6 more
   [junit4]> Caused by: java.lang.NullPointerException
   [junit4]>at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:99)
   [junit4]>at 
org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:361)
   [junit4]>at org.apache.solr.core.Config.<init>(Config.java:120)
   [junit4]>at org.apache.solr.core.Config.<init>(Config.java:90)
   [junit4]>at 
org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:202)
   [junit4]>at 
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:179)
   [junit4]>... 7 more
   [junit4]   2> 565162 INFO  
(SUITE-TestManagedSchemaThreadSafety-seed#[932CA88E4A647823]-worker) [] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1:52297 52297
   [junit4]   2> 565318 INFO  (Thread-1704) [] o.a.s.c.ZkTestServer 
connecting to 127.0.0.1:52297 52297
   [junit4]   2> 565318 INFO  
(SUITE-TestManagedSchemaThreadSafety-seed#[932CA88E4A647823]-worker) [] 
o.a.s.SolrTestCaseJ4 ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J6/temp/solr.schema.TestManagedSchemaThreadSafety_932CA88E4A647823-001
   [junit4]   2> NOTE: test params are: codec=CheapBastard, 
sim=RandomSimilarity(queryNorm=true): {}, locale=pl, timezone=Brazil/Acre
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=311373568,total=520093696
   [junit4]   2> NOTE: All tests run in this JVM: 
[CdcrReplicationDistributedZkTest, CSVRequestHandlerTest, 
DocExpirationUpdateProcessorFactoryTest, FieldAnalysisRequestHandlerTest, 
HdfsTlogReplayBufferedWhileIndexingTest, 
DistribDocExpirationUpdateProcessorTest, TestMacros, SuggestComponentTest, 
TestDynamicFieldCollectionResource, TestBadConfig, 
TestSha256AuthenticationProvider, TestSolrCoreSnapshots, TestSolrCLIRunExample, 
WordBreakSolrSpellCheckerTest, JvmMetricsTest, TestHighlightDedupGrouping, 
TestShortCircuitedRequests, TestUseDocValuesAsStored, 
TestFieldCacheWithThreads, TestMiniSolrCloudCluster, TestSolrJ, 
TestExceedMaxTermLength, 

[jira] [Commented] (SOLR-9983) TestManagedSchemaThreadSafety.testThreadSafety() failures

2017-01-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843309#comment-15843309
 ] 

Steve Rowe commented on SOLR-9983:
--

My Jenkins had a failure: 
[http://jenkins.sarowe.net/job/Lucene-Solr-tests-master/9459/]:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestManagedSchemaThreadSafety -Dtests.method=testThreadSafety 
-Dtests.seed=932CA88E4A647823 -Dtests.slow=true -Dtests.locale=pl 
-Dtests.timezone=Brazil/Acre -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   4.28s J6  | TestManagedSchemaThreadSafety.testThreadSafety 
<<<
   [junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: org.apache.solr.common.SolrException: Error loading 
solr config from solrconfig.xml
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([932CA88E4A647823:966BCD699FBCAD5]:0)
   [junit4]>at 
java.util.concurrent.FutureTask.report(FutureTask.java:122)
   [junit4]>at 
java.util.concurrent.FutureTask.get(FutureTask.java:192)
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.testThreadSafety(TestManagedSchemaThreadSafety.java:126)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.lang.RuntimeException: 
org.apache.solr.common.SolrException: Error loading solr config from 
solrconfig.xml
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:159)
   [junit4]>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]>... 1 more
   [junit4]> Caused by: org.apache.solr.common.SolrException: Error loading 
solr config from solrconfig.xml
   [junit4]>at 
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:187)
   [junit4]>at 
org.apache.solr.schema.TestManagedSchemaThreadSafety.lambda$indexSchemaLoader$0(TestManagedSchemaThreadSafety.java:152)
   [junit4]>... 6 more
   [junit4]> Caused by: java.lang.NullPointerException
   [junit4]>at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:99)
   [junit4]>at 
org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:361)
   [junit4]>at org.apache.solr.core.Config.<init>(Config.java:120)
   [junit4]>at org.apache.solr.core.Config.<init>(Config.java:90)
   [junit4]>at 
org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:202)
   [junit4]>at 
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:179)
   [junit4]>... 7 more
   [junit4]   2> 565162 INFO  
(SUITE-TestManagedSchemaThreadSafety-seed#[932CA88E4A647823]-worker) [] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1:52297 52297
   [junit4]   2> 565318 INFO  (Thread-1704) [] o.a.s.c.ZkTestServer 
connecting to 127.0.0.1:52297 52297
   [junit4]   2> 565318 INFO  
(SUITE-TestManagedSchemaThreadSafety-seed#[932CA88E4A647823]-worker) [] 
o.a.s.SolrTestCaseJ4 ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J6/temp/solr.schema.TestManagedSchemaThreadSafety_932CA88E4A647823-001
   [junit4]   2> NOTE: test params are: codec=CheapBastard, 
sim=RandomSimilarity(queryNorm=true): {}, locale=pl, timezone=Brazil/Acre
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=311373568,total=520093696
   [junit4]   2> NOTE: All tests run in this JVM: 
[CdcrReplicationDistributedZkTest, CSVRequestHandlerTest, 
DocExpirationUpdateProcessorFactoryTest, FieldAnalysisRequestHandlerTest, 
HdfsTlogReplayBufferedWhileIndexingTest, 
DistribDocExpirationUpdateProcessorTest, TestMacros, SuggestComponentTest, 
TestDynamicFieldCollectionResource, TestBadConfig, 
TestSha256AuthenticationProvider, TestSolrCoreSnapshots, TestSolrCLIRunExample, 
WordBreakSolrSpellCheckerTest, JvmMetricsTest, TestHighlightDedupGrouping, 
TestShortCircuitedRequests, TestUseDocValuesAsStored, 
TestFieldCacheWithThreads, TestMiniSolrCloudCluster, TestSolrJ, 
TestExceedMaxTermLength, BlockJoinFacetRandomTest, MigrateRouteKeyTest, 
TestComplexPhraseQParserPlugin, CachingDirectoryFactoryTest, TestWriterPerf, 
DistributedSpellCheckComponentTest, 

[jira] [Commented] (LUCENE-7662) Index with missing files should throw CorruptIndexException

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843284#comment-15843284
 ] 

Michael McCandless commented on LUCENE-7662:


+1

> Index with missing files should throw CorruptIndexException
> ---
>
> Key: LUCENE-7662
> URL: https://issues.apache.org/jira/browse/LUCENE-7662
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.4
>Reporter: Mike Drob
>
> Similar to what we did in LUCENE-7592 for EOF, we should catch missing files 
> and rethrow those as CorruptIndexException.
> If a particular codec can handle missing files, it should proactively check 
> for those optional files and not throw anything, so I think we can safely do 
> this at SegmentReader or SegmentCoreReaders level.
> Stack trace copied from SOLR-10006:
> {noformat}
> Caused by: java.nio.file.NoSuchFileException: 
> /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
>   at java.nio.channels.FileChannel.open(FileChannel.java:287)
>   at java.nio.channels.FileChannel.open(FileChannel.java:335)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at 
> org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:81)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:292)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:109)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
>   at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473)
>   at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
>   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79)
>   at 
> org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39)
>   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958)
>   ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10048) Distributed result set paging sometimes yields incorrect results

2017-01-27 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843215#comment-15843215
 ] 

Hoss Man commented on SOLR-10048:
-

I don't see a "sort" specified in the test queries, which means the order is 
non-deterministic (and practically speaking winds up depending on the on-disk 
ordering of the docs). So there is no guarantee that the order of documents 
matching the query will be the same in two identical queries -- let alone if 
you change the paging.

Even in a non-distributed setup, this test could fail if your config allows for 
background merges that re-order segments.
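A stable page order needs a total order, typically a secondary sort on the uniqueKey. The stand-alone sketch below (toy data, plain Java, no Solr dependency; all class and field names are hypothetical) shows why a unique tiebreaker makes paging deterministic even when the primary sort has ties:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy model: docs with equal scores are re-orderable, so paging without a
// tiebreaker can return different slices across requests. Adding the unique
// id as a secondary sort key makes every page deterministic.
public class StablePaging {
    record Doc(String id, float score) {}

    // Deterministic order: score descending, then id ascending as tiebreaker.
    static List<Doc> sorted(List<Doc> docs) {
        List<Doc> copy = new ArrayList<>(docs);
        copy.sort(Comparator.comparingDouble((Doc d) -> -d.score())
                            .thenComparing(Doc::id));
        return copy;
    }

    // Returns the ids on one page of the deterministically sorted results.
    static List<String> page(List<Doc> docs, int start, int rows) {
        List<Doc> s = sorted(docs);
        List<String> ids = new ArrayList<>();
        for (int i = start; i < Math.min(start + rows, s.size()); i++) {
            ids.add(s.get(i).id());
        }
        return ids;
    }

    public static void main(String[] args) {
        // Two "shards" return the same docs in different on-disk orders.
        List<Doc> shardA = List.of(new Doc("b", 1f), new Doc("a", 1f), new Doc("c", 1f));
        List<Doc> shardB = List.of(new Doc("c", 1f), new Doc("a", 1f), new Doc("b", 1f));
        // With the tiebreaker, page 0 is identical regardless of input order.
        System.out.println(page(shardA, 0, 2)); // [a, b]
        System.out.println(page(shardB, 0, 2)); // [a, b]
    }
}
```

Without the `thenComparing` tiebreaker, both pages would simply reflect input order, which is exactly the non-determinism described above.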

> Distributed result set paging sometimes yields incorrect results
> 
>
> Key: SOLR-10048
> URL: https://issues.apache.org/jira/browse/SOLR-10048
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.4
>Reporter: Markus Jelsma
>Priority: Critical
> Fix For: 6.4.1, master (7.0)
>
> Attachments: DistributedPagedQueryComponentTest.java
>
>
> This bug appeared in 6.4 and I spotted it yesterday when I upgraded my 
> project. It has, amongst others, an extension of QueryComponent; its unit test 
> failed, but never in any predictable way and always in another spot.
> The test is very straightforward, it indexes a bunch of silly documents and 
> then executes a series of getAll() queries. An array of ids is collected and 
> stored for comparison. Then, the same query is executed again but it pages 
> through the entire result set.
> It then compares ids: the id at position N must be the same as the id at 
> NUM_PAGE * PAGE_SIZE + M (where M is the position of the result in the paged 
> set). The comparison sometimes fails.
> I'll attach the test for 6.4 shortly. If it passes, just try it again (or 
> increase maxDocs). It can pass over ten times in a row, but it can also fail 
> ten times in a row.
> You should see this if it fails, but probably with different values for 
> expected and actual. Below was from a few minutes ago; now I can't seem to 
> reproduce it anymore.
> {code}
>[junit4] FAILURE 25.1s | 
> DistributedPagedQueryComponentTest.testTheCrazyPager <<<
>[junit4]> Throwable #1: java.lang.AssertionError: ids misaligned 
> expected:<406> but was:<811>
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([97A7F02D1E4ACF75:7493130F03129E6D]:0)
>[junit4]>at 
> org.apache.solr.handler.component.DistributedPagedQueryComponentTest.testTheCrazyPager(DistributedPagedQueryComponentTest.java:83)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
> {code}






Cascading JIRA watch notification email flooding problems on SOLR-8029 and SOLR-4500

2017-01-27 Thread Steve Rowe
After chatting with Infra on hipchat about the email notification flood 
problems on SOLR-8029 and SOLR-4500, I filed a JIRA at their request: 
.

Initial indications are that procmailrc rules that should interdict 
auto-replies before they reach JIRA were never ported to the new hardware when 
the JIRA instance was recently moved.  They’re working on it now.

--
Steve
www.lucidworks.com





[jira] [Created] (LUCENE-7662) Index with missing files should throw CorruptIndexException

2017-01-27 Thread Mike Drob (JIRA)
Mike Drob created LUCENE-7662:
-

 Summary: Index with missing files should throw 
CorruptIndexException
 Key: LUCENE-7662
 URL: https://issues.apache.org/jira/browse/LUCENE-7662
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 6.4
Reporter: Mike Drob


Similar to what we did in LUCENE-7592 for EOF, we should catch missing files 
and rethrow those as CorruptIndexException.

If a particular codec can handle missing files, it should proactively check 
for those optional files and not throw anything, so I think we can safely do 
this at SegmentReader or SegmentCoreReaders level.
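Roughly, the proposed rethrow pattern might look like the sketch below. This is illustrative only: `CorruptIndexException` here is a local stand-in (Lucene's real class lives in org.apache.lucene.index and takes a resource description), and `readIndexFile` is a hypothetical helper:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class MissingFileAsCorruption {
    // Stand-in for org.apache.lucene.index.CorruptIndexException.
    static class CorruptIndexException extends IOException {
        CorruptIndexException(String msg, String resource, Throwable cause) {
            super(msg + " (resource: " + resource + ")", cause);
        }
    }

    // Wraps "file is missing" errors as index corruption, mirroring what
    // LUCENE-7592 did for unexpected EOF: the underlying cause is preserved,
    // but callers see a corruption error instead of a generic missing file.
    static byte[] readIndexFile(Path file) throws IOException {
        try {
            return Files.readAllBytes(file);
        } catch (NoSuchFileException | FileNotFoundException e) {
            throw new CorruptIndexException("missing index file", file.toString(), e);
        }
    }

    public static void main(String[] args) {
        try {
            readIndexFile(Path.of("_1_Lucene50_0.doc"));
        } catch (CorruptIndexException e) {
            System.out.println("corrupt: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("other IO error: " + e.getMessage());
        }
    }
}
```

A codec that treats some files as optional would check for their existence up front rather than rely on this catch, which is why doing the wrapping at the SegmentReader/SegmentCoreReaders level is safe.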

Stack trace copied from SOLR-10006:

{noformat}
Caused by: java.nio.file.NoSuchFileException: 
/Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)
at 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
at 
org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:81)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:292)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:109)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473)
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79)
at 
org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958)
... 12 more
{noformat}






[jira] [Updated] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

2017-01-27 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9764:
---
Attachment: SOLR-9764.patch

tl;dr: the attached patch should make all queries that end up matching all 
documents use the same DocSet instead of caching different sets.

Notes on changes from the previous patch:
- I'm not sure what version this patch was made for, but it won't compile on 
either trunk or 6x.
{code}
[javac] 
/opt/code/lusolr/trunk/solr/core/src/java/org/apache/solr/query/SolrRangeQuery.java:157:
 error: incompatible types: DocSet cannot be converted to BitDocSet
[javac] BitDocSet liveDocs = searcher.getLiveDocs();
{code}
- Some code currently explicitly relies on BitDocSet from liveDocs (hence the 
compilation error above)

- Hopefully the following change is just an optimization, and not meant to 
ensure that the same amount of 0 padding is used for each bitset?
  It's buggy in the general case (size is cardinality, not capacity).
{code}
   protected FixedBitSet getBits() {
-    FixedBitSet bits = new FixedBitSet(64);
+    FixedBitSet bits = new FixedBitSet(size());
{code}

- if MatchAllDocs is used as a Lucene filter, it can cause a lot of unnecessary 
memory use... in fact it would end up creating a new bit set 
  for each use.  This is one instance of a more generic problem... the 
BaseDocSet implementations are often very inefficient and should be
  overridden by any DocSet meant for use in any common case.  Much of the code 
was also written for best performance with the knowledge that
  there were only 2 common sets (small and large)... that code will need to be 
revisited / re-reviewed when adding a 3rd into the mix.

- Given the current problems with MatchAllDocs, I've backed out that part of 
the patch for now.
  As detailed above, this is more a problem of the fragility of the current 
code base than with your class.
  It's probably best handled in a separate issue, and perhaps in a more general 
way that can handle more cases (like most docs matching or segments with 
deleted docs),
  or if the robustness of the DocSet hierarchy can be improved, we could even 
add multiple new implementations (Roaring, MatchMost, MatchAll, etc)

- The setting of liveDocs was not thread-safe (unsafe object publishing)

- added size() to DocSetCollector in favor of the more specific isMatchLiveDocs

- The check for liveDocs was only done in createDocSetGeneric, meaning many 
instances wouldn't be detected (term query, range query on string,
  non-fq parameters like base queries, etc). I created some DocSetUtil methods 
to handle these cases and called them
  from most of the appropriate places.

- just a draft patch, but if people agree, we can add tests for 
sharing/deduplication of liveDocs (which includes the MatchAllDocs case) and 
commit.
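The liveDocs-sharing idea (one canonical set instead of N equal-but-distinct cached copies) can be illustrated without Lucene. The sketch below uses java.util.BitSet in place of FixedBitSet, and the class and method names are hypothetical:

```java
import java.util.BitSet;

// Sketch of deduplicating collected query results against the canonical
// liveDocs set: if a collected set matches all live documents, return the
// shared liveDocs instance instead of caching a new equal-but-distinct bitset.
public class DocSetDedup {
    private final BitSet liveDocs;
    private final int numLiveDocs;

    DocSetDedup(BitSet liveDocs) {
        this.liveDocs = liveDocs;
        this.numLiveDocs = liveDocs.cardinality();
    }

    // Returns the shared liveDocs instance when the collected set covers
    // every live doc. The cheap cardinality check short-circuits the full
    // equality comparison in the common non-matching case.
    BitSet canonicalize(BitSet collected) {
        if (collected.cardinality() == numLiveDocs && collected.equals(liveDocs)) {
            return liveDocs;  // share the canonical set; don't cache a duplicate
        }
        return collected;
    }

    public static void main(String[] args) {
        BitSet live = new BitSet();
        live.set(0, 3);  // docs 0..2 are live
        DocSetDedup dedup = new DocSetDedup(live);
        BitSet matchAll = (BitSet) live.clone();
        // true: the equal copy is replaced by the shared instance
        System.out.println(dedup.canonicalize(matchAll) == live);
    }
}
```

This is only the sharing half of the patch; the thread-safe publication of the liveDocs reference is a separate concern noted above.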



> Design a memory efficient DocSet if a query returns all docs
> 
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
> Attachments: SOLR_9764_no_cloneMe.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, 
> SOLR-9764.patch, SOLR-9764.patch
>
>
> In some use cases, particularly use cases with time series data, using 
> collection alias and partitioning data into multiple small collections using 
> timestamp, a filter query can match all documents in a collection. Currently 
> BitDocSet is used, which contains a large array of long integers with every 
> bit set to 1. After querying, the resulting DocSet saved in the filter cache 
> is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data in the last 14 
> days, each collection with one day of data. A filter query for the last week 
> of data would result in at least six DocSets in the filter cache, which match 
> all documents in six collections respectively.   
> This issue is to design a new DocSet that is memory efficient for such a use 
> case. The new DocSet removes the large array, reducing memory usage and GC 
> pressure without losing the advantage of a large filter cache.
> In particular, for use cases when using time series data, collection alias 
> and partition data into multiple small collections using timestamp, the gain 
> can be large.
> For further optimization, it may be helpful to design a DocSet with 
> run-length encoding. Thanks [~mmokhtar] for the suggestion. 






[jira] [Commented] (SOLR-10011) Refactor PointField & TrieField to share common code

2017-01-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843153#comment-15843153
 ] 

Tomás Fernández Löbbe commented on SOLR-10011:
--

I'll upload a patch for this shortly

> Refactor PointField & TrieField to share common code
> 
>
> Key: SOLR-10011
> URL: https://issues.apache.org/jira/browse/SOLR-10011
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, 
> SOLR-10011.patch
>
>
> We should eliminate PointTypes and TrieTypes enum to have a common enum for 
> both. That would enable us to share a lot of code between the two field types.
> In the process, fix the bug:
> PointFields with indexed=false, docValues=true seem to be using 
> (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and 
> range queries. However, they should instead be using DocValues based range 
> query.






Re: Language Detection Individual Field Mapping Bug

2017-01-27 Thread Tomás Fernández Löbbe
Thanks Will,
This does look like a bug and I also couldn't find a Jira issue for it.
Feel free to create one.

Tomás

On Mon, Jan 23, 2017 at 10:37 PM, Will Martin 
wrote:

> Hello,
>
> While using Solr 6.0.4 I noticed that the org.apache.solr.update.
> processor.LangDetectLanguageIdentifierUpdateProcessor has a bug in it
> where it does not respect the "langid.map.individual" parameter in
> solrconfig.xml. The documentation for langid.map.individual
> 
> specifies:
>
> If you require detecting languages separately for each field, supply
>> langid.map.individual=true. The supplied fields will then be renamed
>> according to detected language on an individual field basis.
>>
>
> However, when this field is set to "true" the fields are still mapped to
> the language code of the entire document. For example: With the following
> snippet from solrconfig.xml
>
> <processor 
> class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
>   <lst name="defaults">
>     <str name="langid.fl">title,text</str>
>     <str name="langid.langField">language_s</str>
>     <bool name="langid.map">true</bool>
>     <bool name="langid.map.individual">true</bool>
>   </lst>
> </processor>
>
> a document that takes the form
>
> {
>   "title": "This is an English title",
>   "text": "Pero el texto de este documento está en español."
> }
>
> will be turned into
>
> {
>   "title_es": "This is an english title",
>   "text_es": "Pero el texto de este documento está en español.",
>   "language_s": ["es"]
> }
>
> rather than
>
> {
>   "title_en": "This is an english title",
>   "text_es": "Pero el texto de este documento está en español.",
>   "language_s": ["es","en"]
> }
>
> during processing.
>
> This bug seems to have been introduced in SOLR-3881
>  when the abstract
> method (LangDetectLanguageIdentifierUpdateProcessor.java:52)
>
> protected List<DetectedLanguage> detectLanguage(String content)
>
> was changed to the signature
>
> protected List<DetectedLanguage> detectLanguage(SolrInputDocument doc)
>
> which does not allow one to recognize individual fields while performing 
> language detection. As it stands, the entire document is analysed per
> individual field (included in the "langid.fl" or "langid.map.individual.fl"
> parameters) and the field is mapped to the language of the entire document.
>
> I searched the Apache Jira for a ticket tracking this bug but did not find
> anything that seemed related. I thought before filing a new ticket I would
> ping this mailing list to see if anyone knows about work relating to this
> issue or if there is already a ticket for it (not directly related to the
> term "langid.map.individual" perhaps). If not I can go ahead and file the
> ticket.
>
>
> Thanks,
>
> -William Martin
>
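For illustration, the documented langid.map.individual=true behavior can be sketched independently of Solr. `detect` below is a toy stand-in for the real language detector, and all class and method names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of langid.map.individual=true semantics: each field's value is
// detected and renamed on its own, instead of every field inheriting the
// language detected for the whole document (the buggy behavior reported).
public class PerFieldLangMap {
    static Map<String, String> mapIndividually(Map<String, String> fields,
                                               Function<String, String> detect) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            // Detect per field value, not per document.
            String lang = detect.apply(e.getValue());
            out.put(e.getKey() + "_" + lang, e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("title", "This is an English title");
        doc.put("text", "Pero el texto de este documento está en español.");
        // Toy detector: Spanish if the text contains "está", else English.
        Function<String, String> detect = s -> s.contains("está") ? "es" : "en";
        // Expected keys: title_en and text_es (per-field), not title_es/text_es.
        System.out.println(mapIndividually(doc, detect));
    }
}
```

The reported bug corresponds to calling `detect` once on the concatenated document and reusing that single result for every field rename.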


[jira] [Commented] (LUCENE-7656) Implement geo box and distance queries using doc values.

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843148#comment-15843148
 ] 

Michael McCandless commented on LUCENE-7656:


bq. For the record, the nightly benchmarks confirm the speedup.

Nice!

> Implement geo box and distance queries using doc values.
> 
>
> Key: LUCENE-7656
> URL: https://issues.apache.org/jira/browse/LUCENE-7656
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7656.patch, LUCENE-7656.patch, LUCENE-7656.patch
>
>
> Having geo box and distance queries available as both point and 
> doc-values-based queries means we could use them with 
> {{IndexOrDocValuesQuery}}.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-01-27 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843124#comment-15843124
 ] 

Kevin Risden commented on SOLR-8593:


[~md...@cloudera.com] [~markrmil...@gmail.com] Any thoughts on updating 
protobuf-java? Looks like it was included for the Hadoop support? The HDFS 
tests pass.

[~joel.bernstein] Thoughts on getting this merged?

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Comment Edited] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2017-01-27 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842774#comment-15842774
 ] 

Alessandro Benedetti edited comment on SOLR-7759 at 1/27/17 4:28 PM:
-

After a bit of investigation (and a manual run), I ended up with these little 
modifications:
*1) First we send the global stats :*
org/apache/solr/handler/component/QueryComponent.java:1295
...
StatsCache statsCache = rb.req.getCore().getStatsCache();
statsCache.sendGlobalStats(rb, sreq);
sreq.purpose = ShardRequest.PURPOSE_GET_FIELDS;
rb.addRequest(this, sreq);

*2) processing the request, we collect the Global Stats and we use them for 
debugging*

org/apache/solr/handler/component/QueryComponent.java:318
...
if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0 || (purpose & 
ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}
req.getContext().put(SolrIndexSearcher.STATS_SOURCE, statsCache.get(req));

Does this make sense?
If you agree, I can take a look at the tests (if this is covered) and then 
contribute the patch!


was (Author: alessandro.benedetti):
After a bit of investigations ( and a manual run) , I ended up with this little 
modifications :
1) First we send the global stats :
org/apache/solr/handler/component/QueryComponent.java:1295
...
StatsCache statsCache = rb.req.getCore().getStatsCache();
statsCache.sendGlobalStats(rb, sreq);
sreq.purpose = ShardRequest.PURPOSE_GET_FIELDS;
rb.addRequest(this, sreq);

2) processing the request, we collect the Global Stats and we use them for 
debugging

org/apache/solr/handler/component/QueryComponent.java:318
...
if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0 || (purpose & 
ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}
req.getContext().put(SolrIndexSearcher.STATS_SOURCE, statsCache.get(req));

Does this make sense ?
If you agree I can take a look to the tests ( if is covered) and then 
contribute the patch !

> DebugComponent's explain should be implemented as a distributed query
> -
>
> Key: SOLR-7759
> URL: https://issues.apache.org/jira/browse/SOLR-7759
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> Currently when we use debugQuery to see the explanation of the matched 
> documents, the query fired to get the statistics for the matched documents is 
> not a distributed query.
> This is a problem when using distributed idf. The actual documents are being 
> scored using the global stats and not per shard stats , but the explain will 
> show us the score by taking into account the stats from the shard where the 
> document belongs to.
> We should try to implement the explain query as a distributed request so that 
> the scores match the actual document scores.






[jira] [Comment Edited] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2017-01-27 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842774#comment-15842774
 ] 

Alessandro Benedetti edited comment on SOLR-7759 at 1/27/17 4:27 PM:
-

After a bit of investigation (and a manual run), I ended up with these little 
modifications:
1) First we send the global stats :
org/apache/solr/handler/component/QueryComponent.java:1295
...
StatsCache statsCache = rb.req.getCore().getStatsCache();
statsCache.sendGlobalStats(rb, sreq);
sreq.purpose = ShardRequest.PURPOSE_GET_FIELDS;
rb.addRequest(this, sreq);

2) processing the request, we collect the Global Stats and we use them for 
debugging

org/apache/solr/handler/component/QueryComponent.java:318
...
if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0 || (purpose & 
ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}
req.getContext().put(SolrIndexSearcher.STATS_SOURCE, statsCache.get(req));

Does this make sense?
If you agree, I can take a look at the tests (if this is covered) and then 
contribute the patch!


was (Author: alessandro.benedetti):
I didn't try it yet, but would this Naive approach work ?

org/apache/solr/handler/component/QueryComponent.java:318

if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0 || (purpose & 
ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}

I will investigate further

P.S. investigating more, of course is not that easy, sorry for the simplistic 
approach

> DebugComponent's explain should be implemented as a distributed query
> -
>
> Key: SOLR-7759
> URL: https://issues.apache.org/jira/browse/SOLR-7759
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> Currently when we use debugQuery to see the explanation of the matched 
> documents, the query fired to get the statistics for the matched documents is 
> not a distributed query.
> This is a problem when using distributed idf. The actual documents are being 
> scored using the global stats and not per shard stats , but the explain will 
> show us the score by taking into account the stats from the shard where the 
> document belongs to.
> We should try to implement the explain query as a distributed request so that 
> the scores match the actual document scores.






[jira] [Commented] (SOLR-10044) Snap package for Solr

2017-01-27 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843005#comment-15843005
 ] 

Kevin Risden commented on SOLR-10044:
-

I wasn't sure what a snap package was and found http://snapcraft.io/. There 
aren't any packages (deb, rpm, etc.) within Solr yet. If you are looking for 
people to test/try it, you probably want to post on the solr-user mailing list. 

> Snap package for Solr
> -
>
> Key: SOLR-10044
> URL: https://issues.apache.org/jira/browse/SOLR-10044
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Roberto Mier
>
> Yesterday I completed code for having a snap package for Solr. You can find 
> source code in https://github.com/rmescandon/solr-snap.
> Includes:
> - Solr 6.4.0 release as a daemon
> - Hook creating a core for nextant by default, when installed
> Obviously this is an initial idea and can be modified or completed with 
> whatever you consider needed or better.
> You can clone source code and try it:
> - git clone g...@github.com:rmescandon/solr-snap.git
> - snapcraft
> - snap install  --dangerous 






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 706 - Still Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/706/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerRolesTest.testOverseerRole

Error Message:
Timed out waiting for overseer state change

Stack Trace:
java.lang.AssertionError: Timed out waiting for overseer state change
at 
__randomizedtesting.SeedInfo.seed([A1DB78548433A3FA:401085C0BF80952B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerRolesTest.waitForNewOverseer(OverseerRolesTest.java:62)
at 
org.apache.solr.cloud.OverseerRolesTest.testOverseerRole(OverseerRolesTest.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12446 lines...]
   [junit4] Suite: org.apache.solr.cloud.OverseerRolesTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size

2017-01-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842914#comment-15842914
 ] 

Adrien Grand commented on LUCENE-7525:
--

+1 [~steve_rowe]

> ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method 
> size
> --
>
> Key: LUCENE-7525
> URL: https://issues.apache.org/jira/browse/LUCENE-7525
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 6.2.1
>Reporter: Karl von Randow
> Attachments: ASCIIFoldingFilter.java, ASCIIFolding.java, 
> LUCENE-7525.patch, LUCENE-7525.patch, TestASCIIFolding.java
>
>
> The {{ASCIIFoldingFilter.foldToASCII}} method has an enormous switch 
> statement and is too large for the HotSpot compiler to compile, causing a 
> performance problem.
> The method is about 13KB compiled, versus the 8KB HotSpot limit, so 
> splitting the method in half works around the problem.
> In my tests, splitting the method in half resulted in a 5x performance 
> increase.
> In the test code below you can see how slow the fold method is, even when it 
> uses the shortcut for characters below 0x80, compared to an inline 
> implementation of the same shortcut.
> So a workaround is to split the method; I'm happy to provide a patch. It's a 
> hack, of course. Perhaps using {{MappingCharFilterFactory}} with an input 
> file, as per SOLR-2013, would be a better replacement for this method in 
> this class?
> {code:java}
> import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
> import org.junit.Assert;
> import org.junit.Test;
>
> public class ASCIIFoldingFilterPerformanceTest {
>
>   private static final int ITERATIONS = 1_000_000;
>
>   @Test
>   public void testFoldShortString() {
>     char[] input = "testing".toCharArray();
>     char[] output = new char[input.length * 4];
>     for (int i = 0; i < ITERATIONS; i++) {
>       ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, input.length);
>     }
>   }
>
>   @Test
>   public void testFoldShortAccentedString() {
>     char[] input = "éúéúøßüäéúéúøßüä".toCharArray();
>     char[] output = new char[input.length * 4];
>     for (int i = 0; i < ITERATIONS; i++) {
>       ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, input.length);
>     }
>   }
>
>   @Test
>   public void testManualFoldTinyString() {
>     char[] input = "t".toCharArray();
>     char[] output = new char[input.length * 4];
>     for (int i = 0; i < ITERATIONS; i++) {
>       int k = 0;
>       for (int j = 0; j < input.length; ++j) {
>         final char c = input[j];
>         if (c < '\u0080') {
>           output[k++] = c;
>         } else {
>           Assert.fail();
>         }
>       }
>     }
>   }
> }
> {code}
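The split workaround described above is mechanical; a minimal sketch of the idea (the split point, the mappings, and the class name below are invented for illustration, not the real ASCIIFoldingFilter tables):

```java
// Hypothetical illustration of the workaround: dispatching between two
// smaller methods keeps each compiled method below HotSpot's huge-method
// threshold (~8000 bytes of bytecode by default), so both halves stay
// JIT-compilable while a single 13KB method would be skipped.
public class SplitFoldSketch {

    public static String foldToASCII(char c) {
        // Each half would hold roughly half of the original switch.
        return c < '\u0530' ? foldLow(c) : foldHigh(c);
    }

    private static String foldLow(char c) {
        switch (c) {
            case '\u00E9': return "e";  // é
            case '\u00DF': return "ss"; // ß
            default: return String.valueOf(c);
        }
    }

    private static String foldHigh(char c) {
        switch (c) {
            case '\u1E24': return "H";  // Ḥ
            default: return String.valueOf(c);
        }
    }

    public static void main(String[] args) {
        System.out.println(foldToASCII('\u00E9')); // prints "e"
    }
}
```

Running with {{-XX:+PrintCompilation}} should show both halves getting compiled, whereas the original single method is reported as too big to compile.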






[jira] [Comment Edited] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2017-01-27 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842774#comment-15842774
 ] 

Alessandro Benedetti edited comment on SOLR-7759 at 1/27/17 2:16 PM:
-

I haven't tried it yet, but would this naive approach work?

org/apache/solr/handler/component/QueryComponent.java:318

if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0
    || (purpose & ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}

I will investigate further.

P.S. After investigating more: of course it is not that easy; sorry for the 
simplistic approach.


was (Author: alessandro.benedetti):
I didn't try it yet, but would this Naive approach work ?

org/apache/solr/handler/component/QueryComponent.java:318

if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0 || (purpose & 
ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}

I will investigate further

> DebugComponent's explain should be implemented as a distributed query
> -
>
> Key: SOLR-7759
> URL: https://issues.apache.org/jira/browse/SOLR-7759
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> Currently when we use debugQuery to see the explanation of the matched 
> documents, the query fired to get the statistics for the matched documents is 
> not a distributed query.
> This is a problem when using distributed idf. The actual documents are being 
> scored using the global stats and not per shard stats , but the explain will 
> show us the score by taking into account the stats from the shard where the 
> document belongs to.
> We should try to implement the explain query as a distributed request so that 
> the scores match the actual document scores.






[jira] [Commented] (LUCENE-7656) Implement geo box and distance queries using doc values.

2017-01-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842905#comment-15842905
 ] 

Adrien Grand commented on LUCENE-7656:
--

For the record, the nightly benchmarks confirm the speedup. 
http://people.apache.org/~mikemccand/geobench.html#search-distance

> Implement geo box and distance queries using doc values.
> 
>
> Key: LUCENE-7656
> URL: https://issues.apache.org/jira/browse/LUCENE-7656
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7656.patch, LUCENE-7656.patch, LUCENE-7656.patch
>
>
> Having geo box and distance queries available as both point and 
> doc-values-based queries means we could use them with 
> {{IndexOrDocValuesQuery}}.






[jira] [Updated] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size

2017-01-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7525:
-
Attachment: LUCENE-7525.patch

Same patch, but with comments.







[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size

2017-01-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842901#comment-15842901
 ] 

Steve Rowe commented on LUCENE-7525:


{quote}
I think we can, for now, replace the large switch statement with a resource 
file. I have two ideas:
# A UTF-8 encoded file with two columns: the first column is a single char, 
the second column is a series of replacements. I don't really like this 
approach, as it is very sensitive to corruption by editors and hard to commit 
correctly.
# A simple file like int => int,int,int // comment; this is easy to parse and 
convert, but the downside is that it's harder to read the codepoints (for 
that we have the comment).
{quote}

I wrote a Perl script to create {{mapping-FoldToASCII.txt}}, which is usable 
with {{MappingCharFilter}}, from the {{ASCIIFoldingFilter}} code; the script 
is actually embedded in that file, which is included in several of Solr's 
example configsets, e.g. under 
{{solr/server/solr/configsets/sample_techproducts_configs/conf/}}.  Maybe this 
file could be used directly?  It's human-friendly, so it would allow easy user 
customization.
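The second proposed format quoted above, int => int,int,int // comment, is straightforward to parse; a minimal sketch under that assumed format (the file format, class name, and method name here are all hypothetical, not existing Lucene code):

```java
import java.util.AbstractMap;
import java.util.Map;

// Minimal parser sketch for the proposed "int => int,int,int // comment"
// mapping-file format. All names and the format itself are hypothetical.
public class FoldTableParser {

    // Parses one line, e.g. "223 => 115,115 // folds to ss".
    // Returns a (source codepoint, replacement codepoints) pair,
    // or null for blank or comment-only lines.
    public static Map.Entry<Integer, int[]> parseLine(String line) {
        int slash = line.indexOf("//");
        if (slash >= 0) {
            line = line.substring(0, slash); // drop the trailing comment
        }
        line = line.trim();
        if (line.isEmpty()) {
            return null;
        }
        String[] sides = line.split("=>");
        int source = Integer.parseInt(sides[0].trim());
        String[] parts = sides[1].trim().split(",");
        int[] replacement = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            replacement[i] = Integer.parseInt(parts[i].trim());
        }
        return new AbstractMap.SimpleEntry<>(source, replacement);
    }
}
```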







[jira] [Updated] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size

2017-01-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7525:
-
Attachment: LUCENE-7525.patch

I tried to work on a minimal patch that addresses the performance issue. It 
reuses the existing slow method to build a conversion map and then uses the 
conversion map at runtime. It seems to run an order of magnitude faster on my 
machine. I only see it as a short-term solution; I think Uwe's plan is better 
for the long term.
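The approach in the patch can be pictured roughly like this (a sketch only: foldChar() is a tiny stand-in for the oversized switch, and the real patch operates on char[] buffers rather than Strings):

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of "build a conversion map once from the slow method, then
// look it up at runtime". foldChar() stands in for the large switch in
// ASCIIFoldingFilter; only the map lookup runs per character.
public class FoldMapSketch {

    // Built once at class load, so the slow path runs only one time.
    private static final Map<Character, String> FOLD_MAP = buildMap();

    private static String foldChar(char c) {
        switch (c) { // illustrative subset of the real mappings
            case '\u00E9': return "e";  // é
            case '\u00FC': return "u";  // ü
            case '\u00DF': return "ss"; // ß
            default: return null;       // no folding for this char
        }
    }

    private static Map<Character, String> buildMap() {
        Map<Character, String> map = new HashMap<>();
        for (char c = '\u0080'; c < '\uFFFF'; c++) {
            String folded = foldChar(c);
            if (folded != null) {
                map.put(c, folded);
            }
        }
        return map;
    }

    public static String fold(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c < '\u0080') {
                out.append(c); // ASCII fast path, as in the original filter
            } else {
                String folded = FOLD_MAP.get(c);
                out.append(folded != null ? folded : c);
            }
        }
        return out.toString();
    }
}
```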







[jira] [Commented] (LUCENE-7543) Make changes-to-html target an offline operation

2017-01-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842886#comment-15842886
 ] 

Steve Rowe commented on LUCENE-7543:


No problem Mano, thanks again for doing the bulk of the work on this issue.

> Make changes-to-html target an offline operation
> 
>
> Key: LUCENE-7543
> URL: https://issues.apache.org/jira/browse/LUCENE-7543
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 
> 6.3.1, 6.5, 6.4.1
>
> Attachments: LUCENE-7543-drop-XML-Simple.patch, LUCENE-7543.patch, 
> LUCENE-7543.patch, LUCENE-7543.patch
>
>
> Currently changes-to-html pulls release dates from JIRA, and so fails when 
> JIRA is inaccessible (e.g. from behind a firewall).
> SOLR-9711 advocates adding a build sysprop to ignore JIRA connection 
> failures, but I'd rather make the operation always offline.
> In an offline discussion, [~hossman] advocated moving Lucene's and Solr's 
> {{doap.rdf}} files, which contain all of the release dates that the 
> changes-to-html now pulls from JIRA, from the CMS Subversion repository 
> (downloadable from the website at http://lucene.apache.org/core/doap.rdf and 
> http://lucene.apache.org/solr/doap.rdf) to the Lucene/Solr git repository. If 
> we did that, then the process could be entirely offline if release dates were 
> taken from the local {{doap.rdf}} files instead of downloaded from JIRA.






[jira] [Commented] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842885#comment-15842885
 ] 

Michael McCandless commented on LUCENE-6959:


bq. I can add that back.

Thanks [~martijn.v.groningen]!

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE_6959.patch, LUCENE_6959.patch, 
> LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.






Re: [jira] [Commented] (SOLR-4500) How can we integrate LDAP authentiocation with the Solr instance

2017-01-27 Thread David Smiley
Thanks for resolving this mess Steve!  This was so annoying.

On Fri, Jan 27, 2017 at 8:26 AM Steve Rowe  wrote:

> When Otis resolved this old issue, a watcher was sent notification, and
> the watcher’s postmaster auto-replied via email that the message couldn’t
> be delivered.  This auto-reply triggered yet another email to the watcher,
> and a vicious cycle began.  (This same situation happened on SOLR-8029 ten
> days ago.)
>
> The strategy I took on SOLR-8029 was to first delete the watcher that
> triggered the cascade, then delete the auto-posts.  I tried that here, but
> the number of posts in flight grew to a couple hundred, and there is no way
> I know of to bulk delete JIRA comments, so rather than spend literally
> hours clicking on trash cans, I decided to delete the whole issue.
>
> I would have emailed earlier about this, but right after I deleted the
> JIRA my internet went down, so I went to bed.
>
> —-
> Steve
> www.lucidworks.com
>
> > On Jan 27, 2017, at 12:53 AM, postmas...@gbs.pro (JIRA) 
> wrote:
> >
> >
> >[
> https://issues.apache.org/jira/browse/SOLR-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842065#comment-15842065
> ]
> >
> > postmas...@gbs.pro commented on SOLR-4500:
> > --
> >
> > Delivery has failed to these recipients or distribution lists:
> >
> > srividhy...@sella.it
> > Your message wasn't delivered because of security policies. Microsoft
> Exchange will not try to redeliver this message for you. Please provide the
> following diagnostic text to your system administrator.
> >
> >
> >
> >> How can we integrate LDAP authentiocation with the Solr instance
> >> 
> >>
> >>Key: SOLR-4500
> >>URL: https://issues.apache.org/jira/browse/SOLR-4500
> >>Project: Solr
> >> Issue Type: Task
> >>   Affects Versions: 4.1
> >>   Reporter: Srividhya
> >>
> >
> >
> >
> >
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842872#comment-15842872
 ] 

David Smiley commented on LUCENE-7465:
--

bq. (Adrien) I like the separate factory idea better, it makes it easier to 
evolve those two impls separately, eg. in the case that we decide to deprecate 
PatternTokenizer or to move it to sandbox.

I think the factory isn't going to stand in the way of either tokenizer 
evolving.  A problem with separate factories is that 
{{PatternTokenizerFactory}} is already an excellent name, and it carries no 
hint of how the tokenizer is implemented.  In general I don't like polluting 
the namespace with different implementations of effectively the same thing; 
the first impl to show up grabs the best name.  A single factory provides an 
excellent opportunity to bridge multiple implementations.

Alas, my arguments aren't swaying anyone, so go ahead.

> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).






Re: [jira] [Commented] (SOLR-4500) How can we integrate LDAP authentiocation with the Solr instance

2017-01-27 Thread Steve Rowe
When Otis resolved this old issue, a watcher was sent notification, and the 
watcher’s postmaster auto-replied via email that the message couldn’t be 
delivered.  This auto-reply triggered yet another email to the watcher, and a 
vicious cycle began.  (This same situation happened on SOLR-8029 ten days ago.)

The strategy I took on SOLR-8029 was to first delete the watcher that triggered 
the cascade, then delete the auto-posts.  I tried that here, but the number of 
posts in flight grew to a couple hundred, and there is no way I know of to bulk 
delete JIRA comments, so rather than spend literally hours clicking on trash 
cans, I decided to delete the whole issue.

I would have emailed earlier about this, but right after I deleted the JIRA my 
internet went down, so I went to bed.

—-
Steve
www.lucidworks.com

> On Jan 27, 2017, at 12:53 AM, postmas...@gbs.pro (JIRA)  
> wrote:
> 
> 
>[ 
> https://issues.apache.org/jira/browse/SOLR-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842065#comment-15842065
>  ] 
> 
> postmas...@gbs.pro commented on SOLR-4500:
> --
> 
> Delivery has failed to these recipients or distribution lists:
> 
> srividhy...@sella.it
> Your message wasn't delivered because of security policies. Microsoft 
> Exchange will not try to redeliver this message for you. Please provide the 
> following diagnostic text to your system administrator.
> 
> 
> 
>> How can we integrate LDAP authentiocation with the Solr instance
>> 
>> 
>>Key: SOLR-4500
>>URL: https://issues.apache.org/jira/browse/SOLR-4500
>>Project: Solr
>> Issue Type: Task
>>   Affects Versions: 4.1
>>   Reporter: Srividhya
>> 
> 
> 
> 
> 
> 





[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index

2017-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842863#comment-15842863
 ] 

David Smiley commented on SOLR-10038:
-

See my recent conference presentation, which has some details: 
https://www.youtube.com/watch?v=nzAH5QEl9hQ   The code is at 
https://github.com/cga-harvard/hhypermap-bop/tree/master/enrich including setup 
scripts.  Of course the details here are often particular to one's 
use-case/project, but this should provide a good example to follow.  The 
point-in-polygon searches during data enrichment are working super-fast after 
all the optimizations, like a millisecond or so.

> Spatial Intersect Very Slow For Large Polygon and Large Index
> -
>
> Key: SOLR-10038
> URL: https://issues.apache.org/jira/browse/SOLR-10038
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4
> Environment: Linux Ubuntu + Solr 6.4.0
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatialsearch
>
> Hi all, I have indexed all of the GeoNames points (lat/long) with JTS 
> enabled, and I am trying to return all points (geonameids) within a certain 
> polygon (e.g. the Netherlands country polygon). This query takes 3 minutes 
> to return only 10,000 points. I am using only Solr intersect: no facets, no 
> extra filtering.
> Is there any configuration that could speed up such a query to less than 
> 300 ms?






[jira] [Commented] (SOLR-10045) Polygon Error: TopologyException: side location conflict

2017-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842857#comment-15842857
 ] 

David Smiley commented on SOLR-10045:
-

The "repairRule" applies universally to polygons, at both index and query 
time.  The Spatial4j-related parameters are all universal to both.
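As a concrete illustration, the repair setting is declared on the field type in the schema, which is why it governs both indexing and querying. A hypothetical sketch follows; the attribute names mirror Spatial4j's JtsSpatialContextFactory, but treat the exact values as an assumption to verify against your Solr/Spatial4j version:

```xml
<!-- Hypothetical schema.xml sketch: validationRule lives on the field type,
     so the chosen repair strategy applies to indexed shapes and query
     shapes alike. -->
<fieldType name="location_rpt"
           class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
           validationRule="repairBuffer0"
           autoIndex="true"
           geo="true"
           distErrPct="0.025"
           maxDistErr="0.001"
           distanceUnits="kilometers"/>
```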

> Polygon Error: TopologyException: side location conflict 
> -
>
> Key: SOLR-10045
> URL: https://issues.apache.org/jira/browse/SOLR-10045
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4
> Environment: ubuntu
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatial
>
> Hi all, Solr gives an error when the polygon below is provided.
> The corresponding query in PostGIS does not return an error.
> I guess the polygon is correct but JTS is complaining about it. Maybe some 
> parameter to quickly fix it should be turned on at query time. 
> To test this issue, please use the polygon in a Solr intersect query.
> The full stack trace is below. 
> POSTGIS QUERY:
> select  geonameid from geoname where 
> ST_Contains(ST_GeomFromText('SRID=4326;MULTIPOLYGON (((32.30749900 
> 35., 32.30322300 35.00627900, 32.30219300 35.00780500, 32.29575000 
> 35.02125200, 32.27494400 35.04652800, 32.27330400 35.05730400, 32.27308300 
> 35.05883400, 32.27622200 35.08499900, 32.27739000 35.08961100, 32.27875100 
> 35.09480700, 32.28125000 35.09838900, 32.28766600 35.09852600, 32.29630700 
> 35.09372300, 32.30083500 35.09119400, 32.32822000 35.06766500, 32.34716800 
> 35.05444300, 32.36441800 35.04505500, 32.38013800 35.04085900, 32.39761000 
> 35.03883400, 32.40680700 35.03886000, 32.42580400 35.04222100, 32.44783400 
> 35.04938900, 32.4700 35.06508300, 32.48119400 35.07188800, 32.49261100 
> 35.08280600, 32.49597200 35.08602900, 32.50686300 35.09994500, 32.52422300 
> 35.13444500, 32.53080400 35.14286000, 32.54022200 35.15008200, 32.5495 
> 35.15719600, 32.55241800 35.16286100, 32.55305500 35.17233300, 32.55625200 
> 35.17433200, 32.56219500 35.17327900, 32.57241800 35.16808300, 32.57619500 
> 35.16869400, 32.57925000 35.16916700, 32.58097100 35.16944500, 32.58375200 
> 35.17050200, 32.60077700 35.17694500, 32.60519400 35.17686100, 32.61086300 
> 35.17677700, 32.61544400 35.17886000, 32.61644400 35.18019500, 32.62302800 
> 35.18891500, 32.62888700 35.18922000, 32.63694400 35.18644300, 32.65591800 
> 35.19411100, 32.66797300 35.19358400, 32.67680700 35.19175000, 32.69352700 
> 35.18502800, 32.71147200 35.18458200, 32.71505700 35.18366600, 32.73430600 
> 35.17863800, 32.7600 35.17780700, 32.74930600 35.17274900, 32.76791800 
> 35.16338700, 32.78491600 35.16033200, 32.80019400 35.15077600, 32.81186300 
> 35.14816700, 32.82375000 35.14255500, 32.83327900 35.14213900, 32.84858300 
> 35.14313900, 32.85455700 35.14527900, 32.87341700 35.15475100, 32.89905500 
> 35.16991800, 32.91033200 35.17661300, 32.91494400 35.18214000, 32.93172100 
> 35.2200, 32.93602800 35.24338900, 32.94211200 35.26694500, 32.94599900 
> 35.29358300, 32.94603000 35.31564000, 32.94352700 35.33736000, 32.94014000 
> 35.34524900, 32.93197300 35.35311100, 32.92986300 35.35897100, 32.93011100 
> 35.36550100, 32.93052700 35.37602600, 32.92911100 35.38744400, 32.92488900 
> 35.39897200, 32.92602900 35.40247300, 32.93452800 35.40127900, 32.95708500 
> 35.39183400, 32.99041700 35.37372200, 33.01625100 35.36166800, 33.02055700 
> 35.36202600, 33.03461100 35.36325100, 33.04089000 35.36194600, 33.05794500 
> 35.35475200, 33.07447100 35.35097100, 33.08872200 35.35141800, 33.09919400 
> 35.35377900, 33.11125200 35.35758200, 33.11927800 35.36280400, 33.12569400 
> 35.36311000, 33.16613800 35.35286000, 33.17822300 35.34977700, 33.22266800 
> 35.35488900, 33.23064000 35.34908300, 33.24366800 35.34738900, 33.24924900 
> 35.34352900, 33.25522200 35.34219400, 33.26564000 35.34544400, 33.27013800 
> 35.34541700, 33.28274900 35.34141500, 33.28802900 35.34191500, 33.29505500 
> 35.34544400, 33.31063800 35.34141500, 33.31758500 35.33961100, 33.34722100 
> 35.33663900, 33.36752700 35.33069600, 33.37966500 35.33308400, 33.40505600 
> 35.33058200, 33.46064000 35.33266800, 33.46213900 35.33230600, 33.46888700 
> 35.33075000, 33.48777800 35.33274800, 33.49741700 35.33525100, 33.51283300 
> 35.33503000, 33.53344300 35.34080500, 33.57466500 35.34425000, 33.61161000 
> 35.35486200, 33.61258300 35.35514100, 33.61883200 35.35355400, 33.62891800 
> 35.35355400, 33.64697300 35.35597200, 33.68272400 35.36580700, 33.70177800 
> 35.37444300, 33.71402700 35.38416700, 33.75102600 35.39963900, 33.77294500 
> 35.40877900, 33.78458400 35.41088900, 33.80305500 35.41122100, 33.82955600 
> 35.40866900, 33.83972200 35.41186100, 33.85691800 35.41027800, 

[jira] [Comment Edited] (SOLR-10045) Polygon Error: TopologyException: side location conflict

2017-01-27 Thread samur araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842587#comment-15842587
 ] 

samur araujo edited comment on SOLR-10045 at 1/27/17 1:06 PM:
--

Is the repair applied at query time? Is the repair applied to the polygon 
sent in the query?

If I understand correctly, the repair is applied only at index time, to polygons 
that are being indexed, right?

The problem I mentioned above happens with polygons sent as a query parameter.


was (Author: samuraraujo-geophy):
Is the repair applied at query time? Is the repair applied to the polygon 
sent in the query?

If I understand correctly, the repair is applied only at index time, to polygons 
that are being indexed, right?



> Polygon Error: TopologyException: side location conflict 
> -
>
> Key: SOLR-10045
> URL: https://issues.apache.org/jira/browse/SOLR-10045
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4
> Environment: ubuntu
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatial
>
> Hi all, Solr gives an error when the polygon below is provided.
> The corresponding query in PostGIS does not return an error.
> I guess the polygon is correct but JTS is complaining about it. Maybe some 
> parameter to quickly fix it should be turned on at query time. 
> To test this issue, please use the polygon in a Solr intersect query.
> The full stack trace is below. 
> POSTGIS QUERY:
> select  geonameid from geoname where 
> ST_Contains(ST_GeomFromText('SRID=4326;MULTIPOLYGON (((32.30749900 
> 35., 32.30322300 35.00627900, 32.30219300 35.00780500, 32.29575000 
> 35.02125200, 32.27494400 35.04652800, 32.27330400 35.05730400, 32.27308300 
> 35.05883400, 32.27622200 35.08499900, 32.27739000 35.08961100, 32.27875100 
> 35.09480700, 32.28125000 35.09838900, 32.28766600 35.09852600, 32.29630700 
> 35.09372300, 32.30083500 35.09119400, 32.32822000 35.06766500, 32.34716800 
> 35.05444300, 32.36441800 35.04505500, 32.38013800 35.04085900, 32.39761000 
> 35.03883400, 32.40680700 35.03886000, 32.42580400 35.04222100, 32.44783400 
> 35.04938900, 32.4700 35.06508300, 32.48119400 35.07188800, 32.49261100 
> 35.08280600, 32.49597200 35.08602900, 32.50686300 35.09994500, 32.52422300 
> 35.13444500, 32.53080400 35.14286000, 32.54022200 35.15008200, 32.5495 
> 35.15719600, 32.55241800 35.16286100, 32.55305500 35.17233300, 32.55625200 
> 35.17433200, 32.56219500 35.17327900, 32.57241800 35.16808300, 32.57619500 
> 35.16869400, 32.57925000 35.16916700, 32.58097100 35.16944500, 32.58375200 
> 35.17050200, 32.60077700 35.17694500, 32.60519400 35.17686100, 32.61086300 
> 35.17677700, 32.61544400 35.17886000, 32.61644400 35.18019500, 32.62302800 
> 35.18891500, 32.62888700 35.18922000, 32.63694400 35.18644300, 32.65591800 
> 35.19411100, 32.66797300 35.19358400, 32.67680700 35.19175000, 32.69352700 
> 35.18502800, 32.71147200 35.18458200, 32.71505700 35.18366600, 32.73430600 
> 35.17863800, 32.7600 35.17780700, 32.74930600 35.17274900, 32.76791800 
> 35.16338700, 32.78491600 35.16033200, 32.80019400 35.15077600, 32.81186300 
> 35.14816700, 32.82375000 35.14255500, 32.83327900 35.14213900, 32.84858300 
> 35.14313900, 32.85455700 35.14527900, 32.87341700 35.15475100, 32.89905500 
> 35.16991800, 32.91033200 35.17661300, 32.91494400 35.18214000, 32.93172100 
> 35.2200, 32.93602800 35.24338900, 32.94211200 35.26694500, 32.94599900 
> 35.29358300, 32.94603000 35.31564000, 32.94352700 35.33736000, 32.94014000 
> 35.34524900, 32.93197300 35.35311100, 32.92986300 35.35897100, 32.93011100 
> 35.36550100, 32.93052700 35.37602600, 32.92911100 35.38744400, 32.92488900 
> 35.39897200, 32.92602900 35.40247300, 32.93452800 35.40127900, 32.95708500 
> 35.39183400, 32.99041700 35.37372200, 33.01625100 35.36166800, 33.02055700 
> 35.36202600, 33.03461100 35.36325100, 33.04089000 35.36194600, 33.05794500 
> 35.35475200, 33.07447100 35.35097100, 33.08872200 35.35141800, 33.09919400 
> 35.35377900, 33.11125200 35.35758200, 33.11927800 35.36280400, 33.12569400 
> 35.36311000, 33.16613800 35.35286000, 33.17822300 35.34977700, 33.22266800 
> 35.35488900, 33.23064000 35.34908300, 33.24366800 35.34738900, 33.24924900 
> 35.34352900, 33.25522200 35.34219400, 33.26564000 35.34544400, 33.27013800 
> 35.34541700, 33.28274900 35.34141500, 33.28802900 35.34191500, 33.29505500 
> 35.34544400, 33.31063800 35.34141500, 33.31758500 35.33961100, 33.34722100 
> 35.33663900, 33.36752700 35.33069600, 33.37966500 35.33308400, 33.40505600 
> 35.33058200, 33.46064000 35.33266800, 33.46213900 35.33230600, 33.46888700 
> 35.33075000, 33.48777800 35.33274800, 33.49741700 35.33525100, 33.51283300 
> 

[jira] [Comment Edited] (SOLR-10045) Polygon Error: TopologyException: side location conflict

2017-01-27 Thread samur araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842587#comment-15842587
 ] 

samur araujo edited comment on SOLR-10045 at 1/27/17 1:06 PM:
--

Is the repair applied at query time? Is the repair applied to the polygon 
sent in the query?

If I understand correctly, the repair is applied only at index time, to polygons 
that are being indexed, right?




was (Author: samuraraujo-geophy):
Is the repair applied at query time? Is the repair applied to the polygon 
sent in the query?

If I understand correctly, the repair is applied only at index time, to polygons 
that are being indexed, right?



> Polygon Error: TopologyException: side location conflict 
> -
>
> Key: SOLR-10045
> URL: https://issues.apache.org/jira/browse/SOLR-10045
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4
> Environment: ubuntu
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatial
>
> Hi all, Solr gives an error when the polygon below is provided.
> The corresponding query in PostGIS does not return an error.
> I guess the polygon is correct but JTS is complaining about it. Maybe some 
> parameter to quickly fix it should be turned on at query time. 
> To test this issue, please use the polygon in a Solr intersect query.
> The full stack trace is below. 
> POSTGIS QUERY:
> select  geonameid from geoname where 
> ST_Contains(ST_GeomFromText('SRID=4326;MULTIPOLYGON (((32.30749900 
> 35., 32.30322300 35.00627900, 32.30219300 35.00780500, 32.29575000 
> 35.02125200, 32.27494400 35.04652800, 32.27330400 35.05730400, 32.27308300 
> 35.05883400, 32.27622200 35.08499900, 32.27739000 35.08961100, 32.27875100 
> 35.09480700, 32.28125000 35.09838900, 32.28766600 35.09852600, 32.29630700 
> 35.09372300, 32.30083500 35.09119400, 32.32822000 35.06766500, 32.34716800 
> 35.05444300, 32.36441800 35.04505500, 32.38013800 35.04085900, 32.39761000 
> 35.03883400, 32.40680700 35.03886000, 32.42580400 35.04222100, 32.44783400 
> 35.04938900, 32.4700 35.06508300, 32.48119400 35.07188800, 32.49261100 
> 35.08280600, 32.49597200 35.08602900, 32.50686300 35.09994500, 32.52422300 
> 35.13444500, 32.53080400 35.14286000, 32.54022200 35.15008200, 32.5495 
> 35.15719600, 32.55241800 35.16286100, 32.55305500 35.17233300, 32.55625200 
> 35.17433200, 32.56219500 35.17327900, 32.57241800 35.16808300, 32.57619500 
> 35.16869400, 32.57925000 35.16916700, 32.58097100 35.16944500, 32.58375200 
> 35.17050200, 32.60077700 35.17694500, 32.60519400 35.17686100, 32.61086300 
> 35.17677700, 32.61544400 35.17886000, 32.61644400 35.18019500, 32.62302800 
> 35.18891500, 32.62888700 35.18922000, 32.63694400 35.18644300, 32.65591800 
> 35.19411100, 32.66797300 35.19358400, 32.67680700 35.19175000, 32.69352700 
> 35.18502800, 32.71147200 35.18458200, 32.71505700 35.18366600, 32.73430600 
> 35.17863800, 32.7600 35.17780700, 32.74930600 35.17274900, 32.76791800 
> 35.16338700, 32.78491600 35.16033200, 32.80019400 35.15077600, 32.81186300 
> 35.14816700, 32.82375000 35.14255500, 32.83327900 35.14213900, 32.84858300 
> 35.14313900, 32.85455700 35.14527900, 32.87341700 35.15475100, 32.89905500 
> 35.16991800, 32.91033200 35.17661300, 32.91494400 35.18214000, 32.93172100 
> 35.2200, 32.93602800 35.24338900, 32.94211200 35.26694500, 32.94599900 
> 35.29358300, 32.94603000 35.31564000, 32.94352700 35.33736000, 32.94014000 
> 35.34524900, 32.93197300 35.35311100, 32.92986300 35.35897100, 32.93011100 
> 35.36550100, 32.93052700 35.37602600, 32.92911100 35.38744400, 32.92488900 
> 35.39897200, 32.92602900 35.40247300, 32.93452800 35.40127900, 32.95708500 
> 35.39183400, 32.99041700 35.37372200, 33.01625100 35.36166800, 33.02055700 
> 35.36202600, 33.03461100 35.36325100, 33.04089000 35.36194600, 33.05794500 
> 35.35475200, 33.07447100 35.35097100, 33.08872200 35.35141800, 33.09919400 
> 35.35377900, 33.11125200 35.35758200, 33.11927800 35.36280400, 33.12569400 
> 35.36311000, 33.16613800 35.35286000, 33.17822300 35.34977700, 33.22266800 
> 35.35488900, 33.23064000 35.34908300, 33.24366800 35.34738900, 33.24924900 
> 35.34352900, 33.25522200 35.34219400, 33.26564000 35.34544400, 33.27013800 
> 35.34541700, 33.28274900 35.34141500, 33.28802900 35.34191500, 33.29505500 
> 35.34544400, 33.31063800 35.34141500, 33.31758500 35.33961100, 33.34722100 
> 35.33663900, 33.36752700 35.33069600, 33.37966500 35.33308400, 33.40505600 
> 35.33058200, 33.46064000 35.33266800, 33.46213900 35.33230600, 33.46888700 
> 35.33075000, 33.48777800 35.33274800, 33.49741700 35.33525100, 33.51283300 
> 35.33503000, 33.53344300 35.34080500, 33.57466500 35.34425000, 33.61161000 
> 35.35486200, 

[jira] [Commented] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-27 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842806#comment-15842806
 ] 

Martijn van Groningen commented on LUCENE-6959:
---

bq. I can take a crack at putting back some of the child hit checking there, if 
you all haven't started on that yet?

I can add that back.

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE_6959.patch, LUCENE_6959.patch, 
> LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6372 - Unstable!

2017-01-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6372/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([92E5CD90A2216E35:1AB1F24A0CDD03CD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2017-01-27 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842774#comment-15842774
 ] 

Alessandro Benedetti commented on SOLR-7759:


I haven't tried it yet, but would this naive approach work?

org/apache/solr/handler/component/QueryComponent.java:318

if ((purpose & ShardRequest.PURPOSE_SET_TERM_STATS) != 0
    || (purpose & ShardRequest.PURPOSE_GET_DEBUG) != 0) {
  // retrieve from request and update local cache
  statsCache.receiveGlobalStats(req);
}

I will investigate further.

> DebugComponent's explain should be implemented as a distributed query
> -
>
> Key: SOLR-7759
> URL: https://issues.apache.org/jira/browse/SOLR-7759
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>
> Currently when we use debugQuery to see the explanation of the matched 
> documents, the query fired to get the statistics for the matched documents is 
> not a distributed query.
> This is a problem when using distributed idf. The actual documents are being 
> scored using the global stats and not per shard stats , but the explain will 
> show us the score by taking into account the stats from the shard where the 
> document belongs to.
> We should try to implement the explain query as a distributed request so that 
> the scores match the actual document scores.






[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-01-27 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842694#comment-15842694
 ] 

Dawid Weiss commented on LUCENE-7465:
-

bq. I think this is interesting, but let's explore it on a future issue?

Absolutely!



> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).






[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842611#comment-15842611
 ] 

Michael McCandless commented on LUCENE-7465:


Whoa, this issue almost dropped past the event horizon on my TODO list!  I'll 
revive the patch and push soon ...

bq. I think it'd be more interesting to actually write a (simple!) matcher on 
top of a non-determinized Automaton

I think this is interesting, but let's explore it on a future issue?

> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).






[jira] [Commented] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-27 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842606#comment-15842606
 ] 

Michael McCandless commented on LUCENE-6959:


New patch looks great, except for the {{TestBlockJoin}} tests ... I can take a 
crack at putting back some of the child hit checking there, if you all haven't 
started on that yet?

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE_6959.patch, LUCENE_6959.patch, 
> LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.






[jira] [Commented] (SOLR-10045) Polygon Error: TopologyException: side location conflict

2017-01-27 Thread samur araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842587#comment-15842587
 ] 

samur araujo commented on SOLR-10045:
-

Is the repair applied at query time? Is the repair applied to the polygon 
sent in the query?

If I understand correctly, the repair is applied only at index time, to polygons 
that are being indexed, right?



> Polygon Error: TopologyException: side location conflict 
> -
>
> Key: SOLR-10045
> URL: https://issues.apache.org/jira/browse/SOLR-10045
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.4
> Environment: ubuntu
>Reporter: samur araujo
>Assignee: David Smiley
>  Labels: spatial
>
> Hi all, Solr gives an error when the polygon below is provided.
> The corresponding query in PostGIS does not return an error.
> I guess the polygon is correct but JTS is complaining about it. Maybe some 
> parameter to quickly fix it should be turned on at query time. 
> To test this issue, please use the polygon in a Solr intersect query.
> The full stack trace is below. 
> POSTGIS QUERY:
> select  geonameid from geoname where 
> ST_Contains(ST_GeomFromText('SRID=4326;MULTIPOLYGON (((32.30749900 
> 35., 32.30322300 35.00627900, 32.30219300 35.00780500, 32.29575000 
> 35.02125200, 32.27494400 35.04652800, 32.27330400 35.05730400, 32.27308300 
> 35.05883400, 32.27622200 35.08499900, 32.27739000 35.08961100, 32.27875100 
> 35.09480700, 32.28125000 35.09838900, 32.28766600 35.09852600, 32.29630700 
> 35.09372300, 32.30083500 35.09119400, 32.32822000 35.06766500, 32.34716800 
> 35.05444300, 32.36441800 35.04505500, 32.38013800 35.04085900, 32.39761000 
> 35.03883400, 32.40680700 35.03886000, 32.42580400 35.04222100, 32.44783400 
> 35.04938900, 32.4700 35.06508300, 32.48119400 35.07188800, 32.49261100 
> 35.08280600, 32.49597200 35.08602900, 32.50686300 35.09994500, 32.52422300 
> 35.13444500, 32.53080400 35.14286000, 32.54022200 35.15008200, 32.5495 
> 35.15719600, 32.55241800 35.16286100, 32.55305500 35.17233300, 32.55625200 
> 35.17433200, 32.56219500 35.17327900, 32.57241800 35.16808300, 32.57619500 
> 35.16869400, 32.57925000 35.16916700, 32.58097100 35.16944500, 32.58375200 
> 35.17050200, 32.60077700 35.17694500, 32.60519400 35.17686100, 32.61086300 
> 35.17677700, 32.61544400 35.17886000, 32.61644400 35.18019500, 32.62302800 
> 35.18891500, 32.62888700 35.18922000, 32.63694400 35.18644300, 32.65591800 
> 35.19411100, 32.66797300 35.19358400, 32.67680700 35.19175000, 32.69352700 
> 35.18502800, 32.71147200 35.18458200, 32.71505700 35.18366600, 32.73430600 
> 35.17863800, 32.7600 35.17780700, 32.74930600 35.17274900, 32.76791800 
> 35.16338700, 32.78491600 35.16033200, 32.80019400 35.15077600, 32.81186300 
> 35.14816700, 32.82375000 35.14255500, 32.83327900 35.14213900, 32.84858300 
> 35.14313900, 32.85455700 35.14527900, 32.87341700 35.15475100, 32.89905500 
> 35.16991800, 32.91033200 35.17661300, 32.91494400 35.18214000, 32.93172100 
> 35.2200, 32.93602800 35.24338900, 32.94211200 35.26694500, 32.94599900 
> 35.29358300, 32.94603000 35.31564000, 32.94352700 35.33736000, 32.94014000 
> 35.34524900, 32.93197300 35.35311100, 32.92986300 35.35897100, 32.93011100 
> 35.36550100, 32.93052700 35.37602600, 32.92911100 35.38744400, 32.92488900 
> 35.39897200, 32.92602900 35.40247300, 32.93452800 35.40127900, 32.95708500 
> 35.39183400, 32.99041700 35.37372200, 33.01625100 35.36166800, 33.02055700 
> 35.36202600, 33.03461100 35.36325100, 33.04089000 35.36194600, 33.05794500 
> 35.35475200, 33.07447100 35.35097100, 33.08872200 35.35141800, 33.09919400 
> 35.35377900, 33.11125200 35.35758200, 33.11927800 35.36280400, 33.12569400 
> 35.36311000, 33.16613800 35.35286000, 33.17822300 35.34977700, 33.22266800 
> 35.35488900, 33.23064000 35.34908300, 33.24366800 35.34738900, 33.24924900 
> 35.34352900, 33.25522200 35.34219400, 33.26564000 35.34544400, 33.27013800 
> 35.34541700, 33.28274900 35.34141500, 33.28802900 35.34191500, 33.29505500 
> 35.34544400, 33.31063800 35.34141500, 33.31758500 35.33961100, 33.34722100 
> 35.33663900, 33.36752700 35.33069600, 33.37966500 35.33308400, 33.40505600 
> 35.33058200, 33.46064000 35.33266800, 33.46213900 35.33230600, 33.46888700 
> 35.33075000, 33.48777800 35.33274800, 33.49741700 35.33525100, 33.51283300 
> 35.33503000, 33.53344300 35.34080500, 33.57466500 35.34425000, 33.61161000 
> 35.35486200, 33.61258300 35.35514100, 33.61883200 35.35355400, 33.62891800 
> 35.35355400, 33.64697300 35.35597200, 33.68272400 35.36580700, 33.70177800 
> 35.37444300, 33.71402700 35.38416700, 33.75102600 35.39963900, 33.77294500 
> 35.40877900, 33.78458400 35.41088900, 33.80305500 35.41122100, 33.82955600 

[jira] [Commented] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-01-27 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842586#comment-15842586
 ] 

Christine Poerschke commented on SOLR-10037:


bq. ... we could also reopen SOLR-9584 and fold this fix into ...

That sounds good to me. Thanks for figuring this out and fixing it.

> (non-original) Solr Admin UI > query tab > unexpected url above results
> ---
>
> Key: SOLR-10037
> URL: https://issues.apache.org/jira/browse/SOLR-10037
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.5, master (7.0)
>Reporter: Christine Poerschke
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10037.patch, SOLR-10037.patch
>
>
> To reproduce, in a browser run a search from the query tab and then notice 
> the url shown above the results
> {code}
> # actual:   http://localhost:8983techproducts/select?indent=on&q=*:*&wt=json
> # expected: 
> http://localhost:8983/solr/techproducts/select?q=*%3A*&wt=json&indent=true
> {code}
> (We had noticed this when using the (master branch) Admin UI during the 
> [London Lucene Hackday for Full 
> Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
>  on Friday, I just tried to reproduce both on master (reproducible with 
> non-original version only) and on branch_6_4 (not reproducible) and search 
> for an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9972) SpellCheckComponent to return collations and suggestions as a JSON object rather than a list

2017-01-27 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9972.
---
Resolution: Fixed

[~dragonyui] - thanks for creating this issue.

[~jdyer] - thanks for code reviewing.

> SpellCheckComponent to return collations and suggestions as a JSON object 
> rather than a list
> 
>
> Key: SOLR-9972
> URL: https://issues.apache.org/jira/browse/SOLR-9972
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Response Writers
>Reporter: Ricky Oktavianus Lazuardy
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.5
>
> Attachments: SOLR-9972-hunch-no-test.patch, SOLR-9972-impact, 
> SOLR-9972-impact.out, SOLR-9972.patch, 
> SOLR-9972-with-test-after-SOLR-9975.patch
>
>
> original title: JSON-Specific Parameters arrntv causing an error for the 
> spellcheck component
> I tried the new array-of-name-type-value ("arrntv") format from a Solr 6.5 
> Jenkins build, but the JSON returned was broken in the spellcheck response 
> when word break was enabled.
> for example :
> {code:javascript}
>  {"name":"collation",{
> "type":"str","value":"collationQuery":"indomie kuing",
> "hits":81,
> "misspellingsAndCorrections":
> [
>   {"name":"indomee","type":"str","value":"indomie"},
>   {"name":"kuih","type":"str","value":"kuing"}
> ]}
>  }
> {code}
> As you can see, "collationQuery":"indomie kuing" was treated as a value, 
> causing the JSON to be malformed.
> I think the correct JSON would be:
> {code:javascript}
> {"name":"collation",
> "type":"object",
> "value":{
> "collationQuery":"indomie kuing",
> "hits":81,
> "misspellingsAndCorrections":
> [
>   {"name":"indomee","type":"str","value":"indomie"},
>   {"name":"kuih","type":"str","value":"kuing"}
> ]}
>  }
> {code}
> Sorry for the grammar; English is not my first language. I know that the 
> "object" type is not supported by the current arrntv options.
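The shape the reporter suggests, where a nested named list becomes an entry of type "object" and its children sit inside the value object, can be sketched roughly as follows (a hypothetical illustration; ArrNtvSketch is not Solr's response-writer code, and it does only minimal string handling with no escaping):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ArrNtvSketch {
    // Encode one entry as {"name":..., "type":..., "value":...}.
    // A nested map gets type "object" and its keys are kept inside "value",
    // so fields like collationQuery cannot leak into the entry itself.
    @SuppressWarnings("unchecked")
    static String entry(String name, Object value) {
        if (value instanceof Map) {
            StringBuilder inner = new StringBuilder();
            boolean first = true;
            for (Map.Entry<String, Object> e : ((Map<String, Object>) value).entrySet()) {
                if (!first) inner.append(",");
                first = false;
                inner.append("\"").append(e.getKey()).append("\":")
                     .append(scalar(e.getValue()));
            }
            return "{\"name\":\"" + name + "\",\"type\":\"object\",\"value\":{"
                    + inner + "}}";
        }
        return "{\"name\":\"" + name + "\",\"type\":\"str\",\"value\":"
                + scalar(value) + "}";
    }

    // Strings are quoted; numbers and other scalars are emitted as-is.
    static String scalar(Object v) {
        return (v instanceof String) ? "\"" + v + "\"" : String.valueOf(v);
    }

    public static void main(String[] args) {
        Map<String, Object> collation = new LinkedHashMap<>();
        collation.put("collationQuery", "indomie kuing");
        collation.put("hits", 81);
        System.out.println(entry("collation", collation));
    }
}
```

Running main prints the entry in the suggested form, with collationQuery and hits nested under value rather than spilling into the entry and breaking the JSON.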






[jira] [Updated] (SOLR-10048) Distributed result set paging sometimes yields incorrect results

2017-01-27 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-10048:
-
Attachment: DistributedPagedQueryComponentTest.java

> Distributed result set paging sometimes yields incorrect results
> 
>
> Key: SOLR-10048
> URL: https://issues.apache.org/jira/browse/SOLR-10048
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.4
>Reporter: Markus Jelsma
>Priority: Critical
> Fix For: 6.4.1, master (7.0)
>
> Attachments: DistributedPagedQueryComponentTest.java
>
>
> This bug appeared in 6.4 and I spotted it yesterday when I upgraded my 
> project. It has, amongst others, an extension of QueryComponent; its unit test 
> failed, but never in any predictable way and always in another spot.
> The test is very straightforward, it indexes a bunch of silly documents and 
> then executes a series of getAll() queries. An array of ids is collected and 
> stored for comparison. Then, the same query is executed again but it pages 
> through the entire result set.
> It then compares ids: the id at position M of the paged set must be the same 
> as the id at position NUM_PAGE * PAGE_SIZE + M in the full result set. The 
> comparison sometimes fails.
> I'll attach the test for 6.4 shortly. If it passes, just try it again (or 
> increase maxDocs). It can pass over ten times in a row, but it can also fail 
> ten times in a row.
> You should see this if it fails, though probably with different values for 
> expected and actual. The output below is from a few minutes ago; now I can't 
> seem to reproduce it anymore.
> {code}
>[junit4] FAILURE 25.1s | 
> DistributedPagedQueryComponentTest.testTheCrazyPager <<<
>[junit4]> Throwable #1: java.lang.AssertionError: ids misaligned 
> expected:<406> but was:<811>
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([97A7F02D1E4ACF75:7493130F03129E6D]:0)
>[junit4]>at 
> org.apache.solr.handler.component.DistributedPagedQueryComponentTest.testTheCrazyPager(DistributedPagedQueryComponentTest.java:83)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Created] (SOLR-10048) Distributed result set paging sometimes yields incorrect results

2017-01-27 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-10048:


 Summary: Distributed result set paging sometimes yields incorrect 
results
 Key: SOLR-10048
 URL: https://issues.apache.org/jira/browse/SOLR-10048
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SearchComponents - other
Affects Versions: 6.4
Reporter: Markus Jelsma
Priority: Critical
 Fix For: 6.4.1, master (7.0)


This bug appeared in 6.4 and I spotted it yesterday when I upgraded my 
project. It has, amongst others, an extension of QueryComponent; its unit test 
failed, but never in any predictable way and always in another spot.

The test is very straightforward, it indexes a bunch of silly documents and 
then executes a series of getAll() queries. An array of ids is collected and 
stored for comparison. Then, the same query is executed again but it pages 
through the entire result set.
It then compares ids: the id at position M of the paged set must be the same 
as the id at position NUM_PAGE * PAGE_SIZE + M in the full result set. The 
comparison sometimes fails.

I'll attach the test for 6.4 shortly. If it passes, just try it again (or 
increase maxDocs). It can pass over ten times in a row, but it can also fail 
ten times in a row.

You should see this if it fails, though probably with different values for 
expected and actual. The output below is from a few minutes ago; now I can't 
seem to reproduce 
it anymore.

{code}
   [junit4] FAILURE 25.1s | 
DistributedPagedQueryComponentTest.testTheCrazyPager <<<
   [junit4]> Throwable #1: java.lang.AssertionError: ids misaligned 
expected:<406> but was:<811>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([97A7F02D1E4ACF75:7493130F03129E6D]:0)
   [junit4]>at 
org.apache.solr.handler.component.DistributedPagedQueryComponentTest.testTheCrazyPager(DistributedPagedQueryComponentTest.java:83)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{code}
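The alignment check the test performs can be sketched as a standalone illustration (hypothetical code, not the attached DistributedPagedQueryComponentTest; PAGE_SIZE and the in-memory pager are assumptions made for the sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of the id-alignment check: fetch all ids once, then page through
// and verify that the id at position m of page p matches the id at
// position p * PAGE_SIZE + m of the full result set.
public class PagingCheck {
    static final int PAGE_SIZE = 10;

    // fetchPage takes (start, rows) and returns the ids of that page.
    static boolean idsAligned(List<String> allIds,
                              BiFunction<Integer, Integer, List<String>> fetchPage) {
        int numPages = (allIds.size() + PAGE_SIZE - 1) / PAGE_SIZE;
        for (int page = 0; page < numPages; page++) {
            List<String> ids = fetchPage.apply(page * PAGE_SIZE, PAGE_SIZE);
            for (int m = 0; m < ids.size(); m++) {
                if (!allIds.get(page * PAGE_SIZE + m).equals(ids.get(m))) {
                    return false; // ids misaligned, as in the reported failure
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> all = new ArrayList<>();
        for (int i = 0; i < 95; i++) all.add("doc-" + i);
        // A trivially correct pager that just slices the full list.
        boolean ok = idsAligned(all, (start, rows) ->
                all.subList(start, Math.min(start + rows, all.size())));
        System.out.println(ok ? "aligned" : "misaligned");
    }
}
```

Against a live Solr, fetchPage would re-issue the same query with the start and rows parameters set; the report is that this kind of check, which passes trivially in memory, intermittently fails on 6.4.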







[jira] [Updated] (LUCENE-6959) Remove ToParentBlockJoinCollector

2017-01-27 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-6959:
--
Attachment: LUCENE_6959.patch

I've updated the patch. Thanks for reviewing!

bq. should it take the childQuery into account for equals/hashcode?

Oops, I forgot to add that back when removing `origChildQuery`.

bq. it looks buggy to me that we do not convert parentDocId to 
parentDocId-context.docBase in the scorer?

Good catch. I didn't notice this initially, but after running the test 
provided in the patch 100 times it did fail, because `parentDocId` 
wasn't converted.
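For context on the docBase remark: doc IDs inside a Lucene segment (a LeafReaderContext) are segment-local, while IDs reported at the top level are global, so a global parent ID must have the leaf's docBase subtracted before it is used inside a per-segment scorer. A minimal sketch with made-up values (DocBaseSketch is not Lucene code):

```java
// Global doc ID = segment docBase + segment-local doc ID, so converting
// a top-level ID back to a segment-local one is a subtraction.
public class DocBaseSketch {
    static int toSegmentLocal(int globalDocId, int docBase) {
        return globalDocId - docBase;
    }

    public static void main(String[] args) {
        int docBase = 1000;      // global ID of this segment's first doc
        int globalParent = 1042; // parent hit as seen at the top level
        System.out.println(toSegmentLocal(globalParent, docBase));
    }
}
```

Skipping that subtraction makes the scorer address the wrong document within the segment, which is exactly the kind of bug a randomized test only catches after many runs.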

bq. you use ConstantScoreWeight but then return a Scorer that actually scores, 
you should extend Weight directly instead.

Good point, I've changed that.

bq. Let's remove the "we"?

Done.

> Remove ToParentBlockJoinCollector
> -
>
> Key: LUCENE-6959
> URL: https://issues.apache.org/jira/browse/LUCENE-6959
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE_6959.patch, LUCENE_6959.patch, LUCENE_6959.patch, 
> LUCENE_6959.patch, LUCENE-6959.patch
>
>
> This collector uses the getWeight() and getChildren() methods from the passed 
> in Scorer, which are not always available (eg. disjunctions expose fake 
> scorers) hence the need for a dedicated IndexSearcher 
> (ToParentBlockJoinIndexSearcher). Given that this is the only collector in 
> this case, I would like to remove it.






[jira] [Updated] (LUCENE-7661) Speed up LatLonPointInPolygonQuery

2017-01-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7661:
-
Attachment: LUCENE-7661.patch

Here is a patch. I am seeing the following speedups with 
IndexAndSearchOpenStreetMaps:
 - poly 5: +0%
 - poly 50: +8%
 - polyMedium: +49%
 - polyRussia: +13%

It seems to help with complex polygons.

> Speed up LatLonPointInPolygonQuery
> --
>
> Key: LUCENE-7661
> URL: https://issues.apache.org/jira/browse/LUCENE-7661
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7661.patch
>
>
> We could apply the same idea as LUCENE-7656 to LatLonPointInPolygonQuery.






[jira] [Created] (LUCENE-7661) Speed up LatLonPointInPolygonQuery

2017-01-27 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7661:


 Summary: Speed up LatLonPointInPolygonQuery
 Key: LUCENE-7661
 URL: https://issues.apache.org/jira/browse/LUCENE-7661
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


We could apply the same idea as LUCENE-7656 to LatLonPointInPolygonQuery.






[jira] [Commented] (LUCENE-7543) Make changes-to-html target an offline operation

2017-01-27 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842432#comment-15842432
 ] 

Mano Kovacs commented on LUCENE-7543:
-

[~steve_rowe], as you also wrote, I made it lowercase to match the doap files. 
I was trying to verify if it was used anywhere else, but I missed the links. 
Sorry for the trouble and thank you for the fix.

> Make changes-to-html target an offline operation
> 
>
> Key: LUCENE-7543
> URL: https://issues.apache.org/jira/browse/LUCENE-7543
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 
> 6.3.1, 6.5, 6.4.1
>
> Attachments: LUCENE-7543-drop-XML-Simple.patch, LUCENE-7543.patch, 
> LUCENE-7543.patch, LUCENE-7543.patch
>
>
> Currently changes-to-html pulls release dates from JIRA, and so fails when 
> JIRA is inaccessible (e.g. from behind a firewall).
> SOLR-9711 advocates adding a build sysprop to ignore JIRA connection 
> failures, but I'd rather make the operation always offline.
> In an offline discussion, [~hossman] advocated moving Lucene's and Solr's 
> {{doap.rdf}} files, which contain all of the release dates that the 
> changes-to-html now pulls from JIRA, from the CMS Subversion repository 
> (downloadable from the website at http://lucene.apache.org/core/doap.rdf and 
> http://lucene.apache.org/solr/doap.rdf) to the Lucene/Solr git repository. If 
> we did that, then the process could be entirely offline if release dates were 
> taken from the local {{doap.rdf}} files instead of downloaded from JIRA.


