[jira] [Updated] (LUCENE-6459) [suggest] Query Interface for suggest API

2015-05-26 Thread Areek Zillur (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Areek Zillur updated LUCENE-6459:
-
Description: 
This patch factors out the common indexing/search API used by the recently 
introduced [NRTSuggester|https://issues.apache.org/jira/browse/LUCENE-6339]. 
The motivation is to provide a query interface for FST-based fields 
(*SuggestField* and *ContextSuggestField*) 
to enable suggestion scoring and more powerful automaton queries. 

Previously, only prefix ‘queries’ with index-time weights were supported; we 
can now also support:

* Prefix queries expressed as regular expressions: get suggestions that match 
multiple prefixes
** *Example:* _star\[wa\|tr\]_ matches _starwars_ and _startrek_
* Fuzzy prefix queries supporting scoring: get typo-tolerant suggestions scored 
by how close they are to the query prefix
** *Example:* querying for _seper_ will score _separate_ higher than 
_superstitious_
* Context queries: get suggestions boosted and/or filtered based on their 
indexed contexts (metadata)
** *Boost example:* get typo-tolerant suggestions on song names with prefix 
_like a roling_, boosting songs with 
genre _rock_ and _indie_
** *Filter example:* get suggestions on all file names starting with _finan_ 
only for _user1_ and _user2_

h3. Suggest API

{code}
SuggestIndexSearcher searcher = new SuggestIndexSearcher(reader);
CompletionQuery query = ...
TopSuggestDocs suggest = searcher.suggest(query, num);
{code}
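
A hedged sketch of consuming the result (the per-hit field names follow the suggest module's {{SuggestScoreDoc}} and are assumptions relative to this patch):
{code}
// iterate the top suggestions returned by SuggestIndexSearcher#suggest
for (ScoreDoc scoreDoc : suggest.scoreDocs) {
  SuggestScoreDoc hit = (SuggestScoreDoc) scoreDoc;
  System.out.println(hit.key + " doc=" + hit.doc + " score=" + hit.score);
}
{code}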

h3. CompletionQuery

*CompletionQuery* is used to query *SuggestField* and *ContextSuggestField*. A 
*CompletionQuery* produces a *CompletionWeight*, 
which allows *CompletionQuery* implementations to pass in an automaton that 
will be intersected with an FST, and allows boosting and 
metadata extraction from the intersected partial paths. A *CompletionWeight* 
produces a *CompletionScorer*. A *CompletionScorer* 
executes a top-N search against the FST with the provided automaton, scoring 
and filtering all matched paths. 

h4. PrefixCompletionQuery
Return documents with values that match the prefix of an analyzed term text. 
Documents are sorted according to their suggest field weight. 
{code}
PrefixCompletionQuery(Analyzer analyzer, Term term)
{code}
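
For example, with the {{searcher}} from the Suggest API snippet above (the field name, analyzer instance and prefix are illustrative):
{code}
// suggestions whose analyzed values start with "sta", sorted by suggest field weight
CompletionQuery query = new PrefixCompletionQuery(analyzer, new Term("suggest_title", "sta"));
TopSuggestDocs suggest = searcher.suggest(query, 10);
{code}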

h4. RegexCompletionQuery
Return documents with values that match the prefix of a regular expression. 
Documents are sorted according to their suggest field weight.
{code}
RegexCompletionQuery(Term term)
{code}
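
For example, using the regex from the description above (field name is illustrative):
{code}
// matches suggestions for both the "starwa..." and "startr..." prefixes, e.g. starwars and startrek
CompletionQuery query = new RegexCompletionQuery(new Term("suggest_title", "star[wa|tr]"));
TopSuggestDocs suggest = searcher.suggest(query, 10);
{code}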

h4. FuzzyCompletionQuery
Return documents with values that have prefixes within a specified edit distance 
of an analyzed term text.
Documents are ‘boosted’ by the number of matching prefix letters of the 
suggestion with respect to the original term text.

{code}
FuzzyCompletionQuery(Analyzer analyzer, Term term)
{code}

h5. Scoring
{{suggestion_weight * boost}}
where {{suggestion_weight}} and {{boost}} are both integers. 
{{boost = # of prefix characters matched}}
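
*Worked example* (assuming _separate_ and _superstitious_ were both indexed with weight 10): the query prefix _seper_ shares 3 leading characters with _separate_ ({{sep}}) but only 1 with _superstitious_ ({{s}}), so they score {{10 * 3 = 30}} and {{10 * 1 = 10}} respectively, which is why _separate_ ranks higher.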

h4. ContextQuery
Return documents that match a {{CompletionQuery}} filtered and/or boosted by 
provided context(s). 
{code}
ContextQuery(CompletionQuery query)
contextQuery.addContext(CharSequence context, int boost, boolean exact)
{code}

*NOTE:* {{ContextQuery}} should be used with {{ContextSuggestField}} to query 
suggestions boosted and/or filtered by contexts.
Running {{ContextQuery}} against a {{SuggestField}} will error out.
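
A hedged sketch of the boost example from the introduction (field name, contexts and boost values are illustrative):
{code}
// typo-tolerant song-name suggestions, boosting the "rock" and "indie" contexts
CompletionQuery inner = new FuzzyCompletionQuery(analyzer, new Term("suggest_song", "like a roling"));
ContextQuery query = new ContextQuery(inner);
query.addContext("rock", 3, true);   // exact context match, boost 3
query.addContext("indie", 2, true);
TopSuggestDocs suggest = searcher.suggest(query, 10);
{code}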


h5. Scoring
{{suggestion_weight * context_boost}}
where {{suggestion_weight}} and {{context_boost}} are both integers.

When used with {{FuzzyCompletionQuery}},
{{suggestion_weight * (context_boost + fuzzy_boost)}}


h3. Context Suggest Field
To use {{ContextQuery}}, use {{ContextSuggestField}} instead of 
{{SuggestField}}. Any {{CompletionQuery}} can be used with 
{{ContextSuggestField}}; the default behaviour is to return suggestions from 
*all* contexts. The {{Context}} for every completion hit 
can be accessed through {{SuggestScoreDoc#context}}.
{code}
ContextSuggestField(String name, Collection<CharSequence> contexts, String value, int weight)
{code}
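
A hedged indexing sketch following the constructor above (field name, contexts, value and weight are illustrative; the surrounding {{IndexWriter}} setup is assumed):
{code}
// a file-name suggestion visible to the "user1" and "user2" contexts
Document doc = new Document();
doc.add(new ContextSuggestField("suggest_filename", Arrays.<CharSequence>asList("user1", "user2"),
    "financial_report_2015.pdf", 7));
writer.addDocument(doc);
{code}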

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560186#comment-14560186
 ] 

ASF subversion and git services commented on SOLR-6273:
---

Commit 1681893 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1681893 ]

SOLR-6273: re-ignoring failed tests

 Cross Data Center Replication
 -

 Key: SOLR-6273
 URL: https://issues.apache.org/jira/browse/SOLR-6273
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Erick Erickson
 Attachments: SOLR-6273-trunk-testfix1.patch, SOLR-6273-trunk.patch, 
 SOLR-6273-trunk.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, 
 SOLR-6273.patch


 This is the master issue for Cross Data Center Replication (CDCR)
 described at a high level here: 
 http://heliosearch.org/solr-cross-data-center-replication/






Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Koji Sekiguchi

Welcome Tim!

Koji

On 2015/05/27 0:10, Steve Rowe wrote:

I'm pleased to announce that Timothy Potter has accepted the PMC’s invitation 
to join.

Welcome Tim!

Steve











[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4857 - Failure!

2015-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4857/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest

Error Message:
expected:<st[art]ed> but was:<st[opp]ed>

Stack Trace:
org.junit.ComparisonFailure: expected:<st[art]ed> but was:<st[opp]ed>
at 
__randomizedtesting.SeedInfo.seed([BD40B68CFC19CD:A7F9F812E1470A74]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTestTargetCollectionNotAvailable(CdcrReplicationDistributedZkTest.java:114)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTest(CdcrReplicationDistributedZkTest.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[JENKINS] Lucene-Solr-NightlyTests-5.2 - Build # 4 - Failure

2015-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.2/4/

5 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2882, name=collection4, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2882, name=collection4, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:47758: collection already exists: 
awholynewstresscollection_collection4_1
at __randomizedtesting.SeedInfo.seed([7CFB8B19B54BFF91]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1621)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1642)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:877)


REGRESSION:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([7CFB8B19B54BFF91:F4AFB4C31BB79269]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SyncSliceTest.waitTillAllNodesActive(SyncSliceTest.java:256)
at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:184)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 17 - Failure

2015-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/17/

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<st[art]ed> but was:<st[opp]ed>

Stack Trace:
org.junit.ComparisonFailure: expected:<st[art]ed> but was:<st[opp]ed>
at 
__randomizedtesting.SeedInfo.seed([480F43E26A71FAD7:EF4BFB4607CAE96E]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTestLifeCycleActions(CdcrRequestHandlerTest.java:55)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-7594) TestSolr4Spatial2.testRptWithGeometryField failure

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560367#comment-14560367
 ] 

ASF subversion and git services commented on SOLR-7594:
---

Commit 1681902 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681902 ]

SOLR-7594: Fix test bug on RptWithGeometryField's cache state
The bug was that I can't compare the segment count; I should compare cache keys

 TestSolr4Spatial2.testRptWithGeometryField failure
 --

 Key: SOLR-7594
 URL: https://issues.apache.org/jira/browse/SOLR-7594
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: Trunk, 5.2
Reporter: Steve Rowe
Assignee: David Smiley

 The seed fails for me on branch_5x and trunk:
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
 -Dtests.method=testRptWithGeometryField -Dtests.seed=3073201A99DE8699 
 -Dtests.slow=true -Dtests.locale=be_BY -Dtests.timezone=America/Maceio 
 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.53s | TestSolr4Spatial2.testRptWithGeometryField 
[junit4] Throwable #1: org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([3073201A99DE8699:166498ECA48FDFFB]:0)
[junit4]  at 
 org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:140)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}






[jira] [Created] (LUCENE-6502) Spatial RectIntersectionTestHelper should only require one of each relation type to complete

2015-05-26 Thread David Smiley (JIRA)
David Smiley created LUCENE-6502:


 Summary: Spatial RectIntersectionTestHelper should only require 
one of each relation type to complete
 Key: LUCENE-6502
 URL: https://issues.apache.org/jira/browse/LUCENE-6502
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley


The RectIntersectionTestHelper requires a minimum number of occurrences of each 
relation type before it passes, and there is a minimum number of attempts. 
But this can be a bit much, and too often it causes a spurious test failure 
that isn't really a bug. Instead, it should simply try to find at least one of 
every case within a minimum number of tries.

This would solve this bug today: 
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12825/








[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12837 - Failure!

2015-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12837/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<st[opp]ed> but was:<st[art]ed>

Stack Trace:
org.junit.ComparisonFailure: expected:<st[opp]ed> but was:<st[art]ed>
at 
__randomizedtesting.SeedInfo.seed([E21C27F2FD672CF4:45589F5690DC3F4D]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTestLifeCycleActions(CdcrRequestHandlerTest.java:69)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560392#comment-14560392
 ] 

ASF subversion and git services commented on SOLR-6273:
---

Commit 1681904 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1681904 ]

SOLR-6273: disable more failing tests now that we have logs

 Cross Data Center Replication
 -

 Key: SOLR-6273
 URL: https://issues.apache.org/jira/browse/SOLR-6273
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Erick Erickson
 Attachments: SOLR-6273-trunk-testfix1.patch, SOLR-6273-trunk.patch, 
 SOLR-6273-trunk.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, 
 SOLR-6273.patch


 This is the master issue for Cross Data Center Replication (CDCR)
 described at a high level here: 
 http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Closed] (SOLR-7377) SOLR Streaming Expressions

2015-05-26 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-7377.

Resolution: Fixed

 SOLR Streaming Expressions
 --

 Key: SOLR-7377
 URL: https://issues.apache.org/jira/browse/SOLR-7377
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Dennis Gove
Priority: Minor
 Fix For: Trunk, 5.2

 Attachments: SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch, 
 SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch, 
 SOLR-7377.patch, SOLR-7377.patch, SOLR-7377.patch


 It would be beneficial to add an expression-based interface to the Streaming API 
 described in SOLR-7082. Right now that API requires streaming requests to 
 come in from clients as serialized bytecode of the streaming classes. The 
 suggestion here is to support string expressions which describe the streaming 
 operations the client wishes to perform. 
 {code:java}
 search(collection1, q="*:*", fl="id,fieldA,fieldB", sort="fieldA asc")
 {code}
 With this syntax in mind, one can now express arbitrarily complex stream 
 queries with a single string.
 {code:java}
 // merge two distinct searches together on common fields
 merge(
   search(collection1, q="id:(0 3 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   search(collection2, q="id:(1 2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s asc"),
   on="a_f asc, a_s asc")
 // find top 20 unique records of a search
 top(
   n=20,
   unique(
     search(collection1, q="*:*", fl="id,a_s,a_i,a_f", sort="a_f desc"),
     over="a_f desc"),
   sort="a_f desc")
 {code}
 The syntax would support:
 1. Configurable expression names (e.g. via solrconfig.xml one can map unique 
 to a class implementing a Unique stream class). This allows users to build 
 their own streams and use them as they wish.
 2. Named parameters (of both simple and expression types)
 3. Unnamed, type-matched parameters (to support requiring N streams as 
 arguments to another stream)
 4. Positional parameters
 The main goal here is to make streaming as accessible as possible and define 
 a syntax for running complex queries across large distributed systems.






[jira] [Commented] (SOLR-7594) TestSolr4Spatial2.testRptWithGeometryField failure

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560366#comment-14560366
 ] 

ASF subversion and git services commented on SOLR-7594:
---

Commit 1681901 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1681901 ]

SOLR-7594: Fix test bug on RptWithGeometryField's cache state
The bug was that I can't compare the segment count; I should compare cache keys

 TestSolr4Spatial2.testRptWithGeometryField failure
 --

 Key: SOLR-7594
 URL: https://issues.apache.org/jira/browse/SOLR-7594
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: Trunk, 5.2
Reporter: Steve Rowe
Assignee: David Smiley

 The seed fails for me on branch_5x and trunk:
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
 -Dtests.method=testRptWithGeometryField -Dtests.seed=3073201A99DE8699 
 -Dtests.slow=true -Dtests.locale=be_BY -Dtests.timezone=America/Maceio 
 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.53s | TestSolr4Spatial2.testRptWithGeometryField 
[junit4] Throwable #1: org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([3073201A99DE8699:166498ECA48FDFFB]:0)
[junit4]  at 
 org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:140)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}






[jira] [Resolved] (SOLR-7594) TestSolr4Spatial2.testRptWithGeometryField failure

2015-05-26 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-7594.

   Resolution: Fixed
Fix Version/s: 5.2

This was a bug in the test that occurred in certain conditions; I fixed it.

 TestSolr4Spatial2.testRptWithGeometryField failure
 --

 Key: SOLR-7594
 URL: https://issues.apache.org/jira/browse/SOLR-7594
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: Trunk, 5.2
Reporter: Steve Rowe
Assignee: David Smiley
 Fix For: 5.2


 The seed fails for me on branch_5x and trunk:
 {noformat}
[junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
 -Dtests.method=testRptWithGeometryField -Dtests.seed=3073201A99DE8699 
 -Dtests.slow=true -Dtests.locale=be_BY -Dtests.timezone=America/Maceio 
 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[junit4] FAILURE 0.53s | TestSolr4Spatial2.testRptWithGeometryField 
[junit4] Throwable #1: org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([3073201A99DE8699:166498ECA48FDFFB]:0)
[junit4]  at 
 org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:140)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}






[jira] [Commented] (SOLR-7555) Display total space and available space in Admin

2015-05-26 Thread Marius Grama (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560398#comment-14560398
 ] 

Marius Grama commented on SOLR-7555:


[~epugh] maybe this information could be of help when trying to get the file 
system statistics in HDFS:

- https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FsStatus.html
- 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#getStatus%28org.apache.hadoop.fs.Path%29
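
For example, a hedged sketch of reading those statistics via the {{FsStatus}} API linked above (illustrative only, not the attached patch; the configuration and output handling are assumptions):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class HdfsSpaceExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FsStatus status = fs.getStatus();  // whole-filesystem stats; getStatus(Path) also works
    System.out.println("capacity=" + status.getCapacity()
        + " used=" + status.getUsed()
        + " remaining=" + status.getRemaining());
  }
}
{code}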

 Display total space and available space in Admin
 

 Key: SOLR-7555
 URL: https://issues.apache.org/jira/browse/SOLR-7555
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 5.1
Reporter: Eric Pugh
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.2

 Attachments: SOLR-7555-display_disk_space.patch, SOLR-7555.patch


 Frequently I have access to the Solr Admin console, but not the underlying 
 server, and I'm curious how much space remains available.   This little patch 
 exposes total Volume size as well as the usable space remaining:
 !https://monosnap.com/file/VqlReekCFwpK6utI3lP18fbPqrGI4b.png!
 I'm not sure if this is the best place to put this, as every shard will share 
 the same data, so maybe it should be on the top level Dashboard?  Also not 
 sure what to call the fields! 






[jira] [Created] (LUCENE-6503) QueryWrapperFilter discards the IndexReaderContext when delegating to the wrapped query

2015-05-26 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-6503:
---

 Summary: QueryWrapperFilter discards the IndexReaderContext when 
delegating to the wrapped query
 Key: LUCENE-6503
 URL: https://issues.apache.org/jira/browse/LUCENE-6503
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.10.4
Reporter: Trejkaz


Suppose I have a working {{Filter}} which depends on the context within the 
composite reader, e.g., one which has a global BitSet of the docs which match 
but needs to know the docBase and maxDoc for the individual reader in order to 
return the correct set to the caller.

This is wrapped into a {{ConstantScoreQuery}} in order to become part of a 
{{BooleanQuery}} tree.

At some other layer, the entire query tree is wrapped back into a 
{{QueryWrapperFilter}} by some other code which wants to cache the results as a 
Filter.

QueryWrapperFilter has code like this:

{code}
  @Override
  public DocIdSet getDocIdSet(final AtomicReaderContext context, final Bits acceptDocs) throws IOException {
    // get a private context that is used to rewrite, createWeight and score eventually
    final AtomicReaderContext privateContext = context.reader().getContext();
    final Weight weight = new IndexSearcher(privateContext).createNormalizedWeight(query);
    return new DocIdSet() {
      @Override
      public DocIdSetIterator iterator() throws IOException {
        return weight.scorer(privateContext, acceptDocs);
      }
      @Override
      public boolean isCacheable() { return false; }
    };
  }
{code}

The call to {{reader().getContext()}} returns an {{AtomicReaderContext}} whose 
parent is not correctly set.

This is then passed to {{Weight#scorer}} which eventually arrives at 
{{ConstantScoreQuery#scorer}}, which calls {{Filter#getDocIdSet}}.

So our innermost {{Filter}} receives an {{AtomicReaderContext}} whose top-level 
{{IndexReader}} is not the actual top-level reader. This was detected in our 
code because we use a special subclass of DirectoryReader for our top-level 
reader and thus the filter failed. (Had it not failed, it would have silently 
returned the wrong results.)
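
To make the symptom concrete, here is a minimal sketch of what the wrapped query and inner filter see when handed the private context (illustrative only; it assumes the Lucene 4.10.x API and the method name is hypothetical):

{code}
static void showLostParent(AtomicReaderContext context) {
  // what QueryWrapperFilter builds internally:
  AtomicReaderContext privateContext = context.reader().getContext();
  // the freshly created context has no parent, so it claims to be top-level
  // even though its reader is really a leaf inside a composite reader:
  assert privateContext.isTopLevel;
  // and its docBase is 0, so offsets relative to the real top-level reader are lost:
  assert privateContext.docBase == 0;
}
{code}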

The fix I have applied locally is to change the call to:

{code}
return weight.scorer(context, acceptDocs);
{code}

This does appear to be working, but I'm not really sure if it's OK to build the 
IndexSearcher using one context while passing another context to the scorer.







[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560411#comment-14560411
 ] 

ASF subversion and git services commented on LUCENE-6487:
-

Commit 1681907 from [~dsmiley] in branch 'dev/branches/lucene6487'
[ https://svn.apache.org/r1681907 ]

LUCENE-6487: Geo3D with WGS84 in-progress with David's mods
(PlanetModel, and refactor of Geo3dShapeRectRelationTestCase)

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
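
 For reference (constants taken from the WGS84 spec, not from this patch): a = b ≈ 6378137.0 m and c ≈ 6356752.314 m, so a point (x, y, z) lies on the WGS84 ellipsoid exactly when x^2/a^2 + y^2/a^2 + z^2/c^2 = 1.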






[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560413#comment-14560413
 ] 

David Smiley commented on LUCENE-6487:
--

bq. Suggestion: (1) Create a branch., ...

Great suggestion -- I did those steps.

It's a shame the patch didn't deal well with the rename of GeoBaseBBox.

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2349 - Still Failing!

2015-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2349/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<st[art]ed> but was:<st[opp]ed>

Stack Trace:
org.junit.ComparisonFailure: expected:<st[art]ed> but was:<st[opp]ed>
at 
__randomizedtesting.SeedInfo.seed([77A6918BEBBEE1C0:D0E2292F8605F279]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:255)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTestLifeCycleActions(CdcrRequestHandlerTest.java:55)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12825 - Failure!

2015-05-26 Thread david.w.smi...@gmail.com
This is not a real bug; it’s the test asserting it finds a minimum number of
relation types over many random shapes… but that can sometimes be asking
too much.  I filed an issue to improve this, which I’ll get to shortly:
LUCENE-6502 - Spatial RectIntersectionTestHelper should only require one of
each relation type to complete
https://issues.apache.org/jira/browse/LUCENE-6502

On Tue, May 26, 2015 at 3:37 AM Policeman Jenkins Server 
jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12825/
 Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseParallelGC

 1 tests failed.
 FAILED:
 org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect

 Error Message:
 Did not find enough contains/within/intersection/disjoint/bounds cases in
 a reasonable number of random attempts. CWIDbD:
 3235(22),20(22),8927(22),1129(22),9315(22)  Laps exceeded 22626

 Stack Trace:
 java.lang.AssertionError: Did not find enough
 contains/within/intersection/disjoint/bounds cases in a reasonable number
 of random attempts. CWIDbD: 3235(22),20(22),8927(22),1129(22),9315(22)
 Laps exceeded 22626
 at
 __randomizedtesting.SeedInfo.seed([20CB0E5500D415D2:46EA66D24E76B8C]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at
 org.apache.lucene.spatial.spatial4j.RectIntersectionTestHelper.testRelateWithRectangle(RectIntersectionTestHelper.java:96)
 at
 org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect(Geo3dShapeRectRelationTest.java:145)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:401)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:651)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:138)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:568)




 Build Log:
 [...truncated 8160 lines...]
[junit4] Suite:
 org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest
[junit4]   1 Laps: 23218 CWIDbD: 11893,51,5896,1732,3646
[junit4]   1 Laps: 1560 CWIDbD: 140,3,632,304,481
[junit4] FAILURE 1.46s J0 | Geo3dShapeRectRelationTest.testGeoBBoxRect
 
[junit4] Throwable #1: java.lang.AssertionError: Did not find
 enough contains/within/intersection/disjoint/bounds cases in a reasonable
 number of random attempts. CWIDbD:
 3235(22),20(22),8927(22),1129(22),9315(22)  Laps exceeded 22626
[junit4]at
 __randomizedtesting.SeedInfo.seed([20CB0E5500D415D2:46EA66D24E76B8C]:0)
[junit4]at
 org.apache.lucene.spatial.spatial4j.RectIntersectionTestHelper.testRelateWithRectangle(RectIntersectionTestHelper.java:96)
[junit4]at
 org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect(Geo3dShapeRectRelationTest.java:145)
[junit4]   1 Laps: 16334 CWIDbD: 4216,8,7024,2976,2110
[junit4] Completed [26/28] on J0 in 4.68s, 6 tests, 1 failure 
 FAILURES!

 [...truncated 17 lines...]
 BUILD FAILED
 /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:526: The
 following error 

[jira] [Commented] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2015-05-26 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560461#comment-14560461
 ] 

Anshum Gupta commented on SOLR-7183:


LGTM. I'll run the tests and commit.

 SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
 ---

 Key: SOLR-7183
 URL: https://issues.apache.org/jira/browse/SOLR-7183
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Gregory Chanan
 Attachments: SOLR-7183.patch


 SaslZkACLProviderTest has this blacklist of locales...
 {code}
   // These Locales don't generate dates that are compatible with Hadoop MiniKdc.
   protected final static List<String> brokenLocales =
       Arrays.asList(
           "th_TH_TH_#u-nu-thai",
           "ja_JP_JP_#u-ca-japanese",
           "hi_IN");
 {code}
 ...but this list is incomplete -- notably because it only focuses on one 
 specific Thai variant, and then does a string Locale.toString() comparison. 
 So at a minimum {{-Dtests.locale=th_TH}} also fails - I suspect there are 
 other variants that will fail as well.
 * if there is a bug in Hadoop MiniKdc then that bug should be filed in 
 jira, and there should be a Solr jira that refers to it -- the Solr jira URL 
 needs to be included here in the test case so developers in the future can 
 understand the context and have some idea of if/when the third-party lib bug 
 is fixed
 * if we need to work around some Locales because of this bug, then Locale 
 comparisons need to be based on whatever aspects of the Locale are actually 
 problematic (a sketch of such a comparison follows the example commits below)
 see for example SOLR-6387 & this commit: 
 https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676&r2=1618675&pathrev=1618676
 Or SOLR-6991 + TIKA-1526 & this commit: 
 https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708&r2=1653707&pathrev=1653708
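
 A hedged sketch of what a comparison based on the problematic aspect (here, the language) rather than {{Locale.toString()}} matching could look like; the method name and the exact set of languages are illustrative, not the committed fix:
 {code}
 private static boolean isBrokenLocale(Locale locale) {
   // blacklist by language instead of matching specific Locale.toString() variants
   String lang = locale.getLanguage();
   return "th".equals(lang) || "ja".equals(lang) || "hi".equals(lang);
 }
 {code}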






[jira] [Updated] (SOLR-7468) Kerberos authentication module

2015-05-26 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7468:
---
Attachment: SOLR-7468-alt-test.patch

It seems that, sometimes, hadoop-auth is unable to apply the DEFAULT name rule to 
principals. A possible reason is that it cannot determine MiniKdc's default 
realm; this causes 500 errors in the tests.

I've explicitly added a name rule which should suffice for the tests. I'm also 
re-enabling the original test (which [~anshumg] disabled in the previous patch), 
so it can be exercised with this name-rule fix.
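
For context, a hedged illustration of an explicit auth-to-local name rule of the kind described (not the attached patch; EXAMPLE.COM is MiniKdc's default realm, and the constant name is hypothetical):
{code}
// the first rule strips the realm from principals in EXAMPLE.COM; DEFAULT handles the rest
static final String NAME_RULES = "RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//\nDEFAULT";
{code}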

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.






Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build #12782 - Failure!

2015-05-26 Thread Noble Paul
Do you mean in trunk ?

On Mon, May 25, 2015 at 9:53 PM, Erick Erickson erickerick...@gmail.com wrote:
 The problem went away for me. I took sledgehammer approach and deleted
 the entire tree locally and checked out the tree fresh.

 On Mon, May 25, 2015 at 4:37 AM, Noble Paul noble.p...@gmail.com wrote:
 I've done a clean checkout and I still see these errors:

 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>

 <title>Error 500 Server Error</title>

 </head>

 <body><h2>HTTP ERROR 500</h2>

 <p>Problem accessing /solr/.system_shard1_replica2/update. Reason:

 <pre>    Server Error</pre></p><h3>Caused by:</h3><pre>java.lang.NoSuchFieldError: totalTermCount

 at 
 org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:277)

 at 
 org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3032)

 at 
 org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3018)

 at 
 org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2707)

 at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2852)

 at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2819)

 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:586)

 On Sat, May 23, 2015 at 11:52 AM, Erick Erickson
 erickerick...@gmail.com wrote:
 thanks! I'll give it a whirl. It's particularly weird b/c my Mac Pro runs
 everything just fine while my laptop fails all the time.

 On Fri, May 22, 2015 at 10:10 PM, Ishan Chattopadhyaya
 ichattopadhy...@gmail.com wrote:
 Not sure if it is related or helpful, but while debugging tests for
 SOLR-7468 yesterday, I encountered this
 java.lang.NoSuchFieldError:
 totalTermCount

 a few times; I had to force a clean at the root of the project and it worked. I
 remember Anshum had to do that clean more than once to make it work,
 and he remarked "don't ask why".

 Sent from my Windows Phone
 
 From: Erick Erickson
 Sent: ‎5/‎23/‎2015 6:15 AM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build
 #12782 - Failure!

 OK, this is somewhat weird. I still have the original tree that I
 checked in from, which was up to date before I committed the code, and
 the tests run fine from there. But a current trunk fails every time.
 Now, the machine it works on is my Mac Pro, and the failures are on my
 MacBook, so there may be something going on there.

 I've got to leave for a while. I'll copy the tree that works on the
 Pro, update the copy, and see if this test fails when I get back. If
 it fails, I can diff the trees to see what changed and see if I can
 make any sense out of this.

 I can always @Ignore this test to cut down on the noise, probably do
 that tonight if I don't have any revelations.

 I see this stack trace which makes no sense to me whatsoever (see the
 lines with lots of * in front). I looked at where the code
 originates (BufferedUpdatesStream[277]) and it looks like this:

 if (coalescedUpdates != null && coalescedUpdates.totalTermCount != 0) {

 And it's telling me there's no such field? Wha

 Which is freaking me out since I don't see how this would trigger the
 exception. Is this a red herring? And, of course, this doesn't fail in
 IntelliJ but it does fail every time from the shell. Shhh.

 Of course if this were something fundamental to Lucene, it seems like
 this would be failing all over the place so I assume it's something to
 do with CDCR... But what do I know?

1:56434/source_collection_shard2_replica1/commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}
 status=0 QTime=8
 *   [junit4]   2 143699 T370 n:127.0.0.1:56443_
 c:source_collection s:shard1 r:core_node3
 x:source_collection_shard1_replica1 C122 oasc.SolrException.log ERROR
 null:java.lang.RuntimeException: java.lang.NoSuchFieldError:
 totalTermCount
[junit4]   2 at
 org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:579)
[junit4]   2 at
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:451)
[junit4]   2 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
[junit4]   2 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
[junit4]   2 at
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4]   2 at
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
[junit4]   2 at
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4]   2 at
 org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
[junit4]   2 at
 org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
[junit4]   2 at
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4]   2 at
 

[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-26 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558735#comment-14558735
 ] 

Anshum Gupta commented on SOLR-7468:


The test suite + the independent test passed for me with the fix. Since I figured 
it may have been passing for me due to my settings in /etc/krb5.conf, 
I've moved that file aside and am re-running the tests without it.

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12644 - Failure!

2015-05-26 Thread Dawid Weiss
 Right, but I've had about 10 successful runs even since my last checkin.

This does not mean the code is correct, only that you were lucky :)
And the fact it still failed in spite of your efforts is not something
to be ashamed of -- it's a sign you did a lot and there's *still*
something wrong.

The thing with randomized testing and test harness is that it's
supposed to make your life easier -- to uncover things you wouldn't
think about (or wouldn't have a chance to test, as is the case with
filesystem emulation layers). Resigning from all this infrastructure
and writing tests in plain JUnit runner would be dodging the problem,
not solving it. Sure, it's not easy. And sure, it's a pain in the
arse. But it's also gratifying to know you nailed the problem once you
find it.

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558834#comment-14558834
 ] 

Dawid Weiss commented on LUCENE-6499:
-

Typo in regeistered.  Wrt. CyclicBarrier vs. CountDownLatch - the first one 
keeps an even starting line for all the threads involved; with a CountDownLatch 
you could have the countDown() thread proceed long before any other threads 
reach await(). In practice I don't think this makes any difference.
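
To make that difference concrete, a small stand-alone sketch (illustrative only, not the code in the patch):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class StartingLineSketch {
  public static void main(String[] args) throws Exception {
    // CyclicBarrier(2): neither thread proceeds until *both* have arrived,
    // so main and child leave from an even starting line.
    final CyclicBarrier barrier = new CyclicBarrier(2);
    Thread child = new Thread(() -> {
      try {
        barrier.await();        // blocks until main also reaches await()
        // ... child's side of the race ...
      } catch (Exception ignored) {}
    });
    child.start();
    barrier.await();            // blocks until the child arrives too
    // ... main's side of the race ...
    child.join();

    // CountDownLatch(1): the child only signals readiness and keeps going,
    // so it may run far ahead before main returns from await().
    final CountDownLatch ready = new CountDownLatch(1);
    Thread child2 = new Thread(() -> {
      ready.countDown();        // signal, then continue immediately
      // ... child's side of the race ...
    });
    child2.start();
    ready.await();              // main waits only for the signal
    // ... main's side of the race ...
    child2.join();
  }
}
{code}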

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFs has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while concurrently deleted which 
 should be prevented by the WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close which fails since we fail to read the key from the filesystem since 
 it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 693 - Still Failing

2015-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/693/

No tests ran.

Build Log:
[...truncated 9836 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest
 3A6821C4380CD3A8-001/init-core-data-001
   [junit4]   2 166078 T841 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2 166078 T841 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /
   [junit4]   2 166102 T841 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2 166103 T842 oasc.ZkTestServer$2$1.setClientPort client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 166103 T842 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 166203 T841 oasc.ZkTestServer.run start zk server on port:50627
   [junit4]   2 166203 T841 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 166219 T841 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 166222 T849 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@79188e64 
name:ZooKeeperConnection Watcher:127.0.0.1:50627 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 166222 T841 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2 166223 T841 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 166223 T841 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 166232 T841 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 166233 T841 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 166244 T852 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@67536413 
name:ZooKeeperConnection Watcher:127.0.0.1:50627/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 166244 T841 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2 166245 T841 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 166245 T841 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 166248 T841 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 166250 T841 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 166252 T841 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 166253 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 166259 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 166262 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2 166262 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2 166278 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 166278 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2 166284 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2 166285 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2 166286 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2 166286 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/protwords.txt
   [junit4]   2 166288 T841 oasc.AbstractZkTestCase.putConfig put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2 166288 T841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/currency.xml
   [junit4]   2 166289 T841 oasc.AbstractZkTestCase.putConfig put 

Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12644 - Failure!

2015-05-26 Thread Dawid Weiss
Ah, ok. Yes, I didn't track the context that much, I know Mark's been
trying to straighten out those tests but I don't follow that closely
-- too much going on in my own field.

Dawid

On Tue, May 26, 2015 at 10:36 AM, Anshum Gupta ans...@anshumgupta.net wrote:
 I think you misunderstood me there. I'm not talking about not using the test
 framework at all, but parts of it. e.g. how the test using
 MiniSolrCloudCluster follows a different approach as compared to other
 SolrCloud tests. I forgot to update here but I've finally figured out why it
 never failed for me (I had a default realm set in my /etc/krb5.conf file on
 my machine).
 So yes, I'm just trying to find a way to test this part in the correct
 manner, and it may just involve an approach that is different from what most
 tests currently use. I hope that makes sense.

 On Tue, May 26, 2015 at 12:07 AM, Dawid Weiss dawid.we...@cs.put.poznan.pl
 wrote:

  Right, but I've had about 10 successful runs even since my last checkin.

 This does not mean the code is correct, only that you were lucky :)
 And the fact it still failed in spite of your efforts is not something
 to be ashamed of -- it's a sign you did a lot and there's *still*
 something wrong.

 The thing with randomized testing and test harness is that it's
 supposed to make your life easier -- to uncover things you wouldn't
 think about (or wouldn't have a chance to test, as is the case with
 filesystem emulation layers). Resigning from all this infrastructure
 and writing tests in plain JUnit runner would be dodging the problem,
 not solving it. Sure, it's not easy. And sure, it's a pain in the
 arse. But it's also gratifying to know you nailed the problem once you
 find it.

 Dawid

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 Anshum Gupta

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7591) Bug with heatmaps

2015-05-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Håvard Wahl Kongsgård closed SOLR-7591.
---
Resolution: Not A Problem

 Bug with heatmaps
 -

 Key: SOLR-7591
 URL: https://issues.apache.org/jira/browse/SOLR-7591
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: 5.1
Reporter: Håvard Wahl Kongsgård
Assignee: David Smiley
  Labels: heatmap

 Hi, I have been experimenting with the new heatmap facet in solr 5. When I 
 use grid level 6 I notice a bug.
 with level 7, I get for example
 rows: 6
 cols: 26
 XmaX: 10.7514953613 Xmin 10.7157897949
 YmaX: 59.9317932129 Ymin 59.9235534668
 Cell size
 0.00137329101562 x 0.00137329101562
 with level 6
 rows: 11
 cols: 26
 X maX: 10.8435058594 Xmin 10.5578613281
 Y maX: 59.9468994141 Ymin 59.8864746094
 Cell size
 0.010986328125 x 0.0054931640625
 notice the cell size 
 It could be that my code is faulty, but this works for all other grid levels



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7591) Bug with heatmaps

2015-05-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558899#comment-14558899
 ] 

Håvard Wahl Kongsgård commented on SOLR-7591:
-

Aha, thanks. That was the problem; maybe add something in the documentation 
about this issue :)

 Bug with heatmaps
 -

 Key: SOLR-7591
 URL: https://issues.apache.org/jira/browse/SOLR-7591
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: 5.1
Reporter: Håvard Wahl Kongsgård
Assignee: David Smiley
  Labels: heatmap

 Hi, I have been experimenting with the new heatmap facet in solr 5. When I 
 use grid level 6 I notice a bug.
 with level 7, I get for example
 rows: 6
 cols: 26
 XmaX: 10.7514953613 Xmin 10.7157897949
 YmaX: 59.9317932129 Ymin 59.9235534668
 Cell size
 0.00137329101562 x 0.00137329101562
 with level 6
 rows: 11
 cols: 26
 X maX: 10.8435058594 Xmin 10.5578613281
 Y maX: 59.9468994141 Ymin 59.8864746094
 Cell size
 0.010986328125 x 0.0054931640625
 notice the cell size 
 It could be that my code is faulty, but this works for all other grid levels



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 12825 - Failure!

2015-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12825/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect

Error Message:
Did not find enough contains/within/intersection/disjoint/bounds cases in a 
reasonable number of random attempts. CWIDbD: 
3235(22),20(22),8927(22),1129(22),9315(22)  Laps exceeded 22626

Stack Trace:
java.lang.AssertionError: Did not find enough 
contains/within/intersection/disjoint/bounds cases in a reasonable number of 
random attempts. CWIDbD: 3235(22),20(22),8927(22),1129(22),9315(22)  Laps 
exceeded 22626
at 
__randomizedtesting.SeedInfo.seed([20CB0E5500D415D2:46EA66D24E76B8C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.spatial.spatial4j.RectIntersectionTestHelper.testRelateWithRectangle(RectIntersectionTestHelper.java:96)
at 
org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect(Geo3dShapeRectRelationTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:401)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:651)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:138)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$1.run(RandomizedRunner.java:568)




Build Log:
[...truncated 8160 lines...]
   [junit4] Suite: 
org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest
   [junit4]   1 Laps: 23218 CWIDbD: 11893,51,5896,1732,3646
   [junit4]   1 Laps: 1560 CWIDbD: 140,3,632,304,481
   [junit4] FAILURE 1.46s J0 | Geo3dShapeRectRelationTest.testGeoBBoxRect 
   [junit4] Throwable #1: java.lang.AssertionError: Did not find enough 
contains/within/intersection/disjoint/bounds cases in a reasonable number of 
random attempts. CWIDbD: 3235(22),20(22),8927(22),1129(22),9315(22)  Laps 
exceeded 22626
   [junit4]at 
__randomizedtesting.SeedInfo.seed([20CB0E5500D415D2:46EA66D24E76B8C]:0)
   [junit4]at 
org.apache.lucene.spatial.spatial4j.RectIntersectionTestHelper.testRelateWithRectangle(RectIntersectionTestHelper.java:96)
   [junit4]at 
org.apache.lucene.spatial.spatial4j.Geo3dShapeRectRelationTest.testGeoBBoxRect(Geo3dShapeRectRelationTest.java:145)
   [junit4]   1 Laps: 16334 CWIDbD: 4216,8,7024,2976,2110
   [junit4] Completed [26/28] on J0 in 4.68s, 6 tests, 1 failure  FAILURES!

[...truncated 17 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:526: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:474: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:61: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:466: The 
following error occurred while 

[jira] [Updated] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6487:

Attachment: LUCENE-6487.patch

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
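
As a tiny illustration of that formula (a sketch, not geo3d code): a point (x, y, z) can be classified against an ellipsoid with semi-axes a, b, c by evaluating the left-hand side; for WGS84 the two equatorial semi-axes are equal and the polar one is slightly smaller.

{code:java}
public class EllipsoidSketch {
  /** Returns < 1 inside, 1 on the surface, > 1 outside (up to rounding). */
  static double evaluate(double x, double y, double z,
                         double a, double b, double c) {
    return (x * x) / (a * a) + (y * y) / (b * b) + (z * z) / (c * c);
  }
}
{code}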



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558879#comment-14558879
 ] 

Karl Wright commented on LUCENE-6487:
-

Applying your patch against trunk yields:

{code}
[mkdir] Created dir: 
C:\wip\lucene\lucene-6487\lucene\build\spatial\classes\java
[javac] Compiling 106 source files to 
C:\wip\lucene\lucene-6487\lucene\build\spatial\classes\java
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoDegenerateHorizontalLine.java:27:
 error: cannot find symbol
[javac] public class GeoDegenerateHorizontalLine extends GeoBaseBBox {
[javac]  ^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoDegenerateLatitudeZone.java:26:
 error: cannot find symbol
[javac] public class GeoDegenerateLatitudeZone extends GeoBaseBBox {
[javac]^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoDegenerateLongitudeSlice.java:25:
 error: cannot find symbol
[javac] public class GeoDegenerateLongitudeSlice extends GeoBaseBBox {
[javac]  ^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoDegenerateVerticalLine.java:25:
 error: cannot find symbol
[javac] public class GeoDegenerateVerticalLine extends GeoBaseBBox {
[javac]^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoLatitudeZone.java:25:
 error: cannot find symbol
[javac] public class GeoLatitudeZone extends GeoBaseBBox {
[javac]  ^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoLongitudeSlice.java:27:
 error: cannot find symbol
[javac] public class GeoLongitudeSlice extends GeoBaseBBox {
[javac]^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoNorthLatitudeZone.java:25:
 error: cannot find symbol
[javac] public class GeoNorthLatitudeZone extends GeoBaseBBox {
[javac]   ^
[javac]   symbol: class GeoBaseBBox
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\java\org\apache\lucene\spatial\spatial4j\geo3d\GeoNorthRectangle.java:28:
 error: cannot find symbol
[javac] public class GeoNorthRectangle extends GeoBaseBBox {
...
{code}

Clearly we are out of sync again.

I'm attaching a new patch with my changes, but they do not include your changes 
for that reason.



 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558892#comment-14558892
 ] 

Uwe Schindler commented on LUCENE-6500:
---

It should be, but IndexReaders call decRef() on close(), and that complains if 
it gets < 0.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558853#comment-14558853
 ] 

Uwe Schindler commented on LUCENE-6500:
---

Oh oh, yes I completely rewrote this class in 4.0... I'll take a look.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12644 - Failure!

2015-05-26 Thread Anshum Gupta
I think you misunderstood me there. I'm not talking about not using the
test framework at all, but parts of it. e.g. how the test
using MiniSolrCloudCluster follows a different approach as compared to
other SolrCloud tests. I forgot to update here but I've finally figured out why
it never failed for me (I had a default realm set in my /etc/krb5.conf file
on my machine).
So yes, I'm just trying to find a way to test this part in the correct
manner, and it may just involve an approach that is different from what
most tests currently use. I hope that makes sense.

On Tue, May 26, 2015 at 12:07 AM, Dawid Weiss dawid.we...@cs.put.poznan.pl
wrote:

  Right, but I've had about 10 successful runs even since my last checkin.

 This does not mean the code is correct, only that you were lucky :)
 And the fact it still failed in spite of your efforts is not something
 to be ashamed of -- it's a sign you did a lot and there's *still*
 something wrong.

 The thing with randomized testing and test harness is that it's
 supposed to make your life easier -- to uncover things you wouldn't
 think about (or wouldn't have a chance to test, as is the case with
 filesystem emulation layers). Resigning from all this infrastructure
 and writing tests in plain JUnit runner would be dodging the problem,
 not solving it. Sure, it's not easy. And sure, it's a pain in the
 arse. But it's also gratifying to know you nailed the problem once you
 find it.

 Dawid

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
Anshum Gupta


[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558874#comment-14558874
 ] 

Uwe Schindler commented on LUCENE-6500:
---

The anonymous subclassing was added to explicitly not call close listeners on 
the synthetic readers. The same code is there twice (for composites and 
for atomics).

Closing all subreaders is done in ParallelCompositeReader's doClose(), where it 
iterates over all the real children that are passed in the ctor.

I think the new problem is that (as you say) the close listeners may be 
registered on the synthetic leaves... I have to think about how to handle this 
better; with your patch it may now happen that a reader is closed multiple 
times. That is not a problem in your test, but could be in reality.

In addition there is now (after your patch) an asymmetry between atomic leaves 
and composite sub readers... Doesn't the same problem apply to atomic readers?

Unfortunately my Eclipse is on strike like Deutsche Bahn and says "GC overhead 
limit". I am unable to dig...

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558887#comment-14558887
 ] 

Ryan Ernst commented on LUCENE-6500:


bq. Which is not a problem in your test, but could be in reality

Isn't the java contract for {{Closeable}} that {{close()}} is idempotent?


 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558911#comment-14558911
 ] 

Uwe Schindler commented on LUCENE-6500:
---

Here is the problem, and why the patch is problematic:
The current logic is fine, but fails - as Adrien says - when you have a 
ParallelMultiReader on a MultiReader of other CompositeReaders.
His fix works fine, but if you have closeSubReaders=false on the parallel one 
and also on the child MultiReaders, it will fail with too many decRefs.
I am not sure what the best fix looks like. Is this issue urgent? I have to 
think a night about it :-) I would prefer to have a better solution altogether, 
this refcounting is horrible...

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558788#comment-14558788
 ] 

Uwe Schindler commented on LUCENE-6499:
---

bq. I am not super happy since we now sync on top of an IO operation but I 
don't think there is a much easier solution unless we wanna use like an array 
of locks and the hash of the path to lock on two stages

WindowsFS is for tests only, so I think this limitation is fine. With the 
synchronization the atomic operations are really atomic (for the Lucene tests) 
and that is all that counts. I am sure you and Robert can now be hired by M$ 
to implement their filesystems! :-)

Just one question about the test: you are using a CyclicBarrier(2) here. Is 
this just your own preference or is there a specific reason? Most tests like 
this use a CountDownLatch(1) that is decremented by the child thread once it's 
ready, so the main thread can start to fire some load at it. It's not a 
critique - I have never used CyclicBarrier, so it is just my own interest...

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFs has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while concurrently deleted which 
 should be prevented by the WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close which fails since we fail to read the key from the filesystem since 
 it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6500:
-
Attachment: LUCENE-6500.patch

Here is a patch: it does not prevent closing sub readers anymore in the 
composite case, so that closed listeners will be called on the underlying 
parallel leaf readers which are exposed through {{leaves()}} and may be used to 
register closed listeners.

[~thetaphi] svn blame indicates that you should be quite familiar with this 
class, would you mind having a look? Thanks!
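
For context, the registration pattern this is meant to keep working (a sketch mirroring the test in the patch; the wrapper method is added here only to make it self-contained):

{code:java}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.ParallelCompositeReader;

public class ClosedListenerSketch {
  static void registerAndClose(ParallelCompositeReader pr) throws IOException {
    for (LeafReaderContext cxt : pr.leaves()) {
      cxt.reader().addReaderClosedListener(new IndexReader.ReaderClosedListener() {
        @Override
        public void onClose(IndexReader reader) {
          // before the fix, this callback could be skipped entirely when pr
          // wrapped a MultiReader over DirectoryReaders
        }
      });
    }
    pr.close(); // with the patch, the listener fires for every leaf
  }
}
{code}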

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558896#comment-14558896
 ] 

Uwe Schindler commented on LUCENE-6500:
---

Sorry, you are right. It's idempotent! But there is still some problem. I have 
to debug through the code to understand why it was written like that. I just 
remember, it was tricky...
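
A hedged sketch of the two behaviours being discussed (plain API usage, nothing from the patch): close() is guarded internally, but an extra decRef() past zero is rejected.

{code:java}
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.Directory;

public class DecRefSketch {
  static void demo(Directory dir) throws Exception {  // dir must hold an index
    DirectoryReader reader = DirectoryReader.open(dir);
    reader.close();      // refCount 1 -> 0, reader is closed
    reader.close();      // no-op: close() is idempotent per the Closeable contract
    try {
      reader.decRef();   // would push the refCount below 0
    } catch (AlreadyClosedException expected) {
      // this is the "complains" part mentioned earlier in the thread
    }
  }
}
{code}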

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558906#comment-14558906
 ] 

Adrien Grand commented on LUCENE-6500:
--

bq. In addition there is now (after your patch) an assymetry between atomic 
leaves and composite sub readers... Does't the same problem apply to atomic 
readers?

Since prepareSubReaders wraps leaves recursively with ParallelCompositeReader 
in the composite case, I was thinking it should be fine: at least the lowest 
level, which wraps the leaf readers, would wrap with a ParallelLeafReader that 
prevents closing of sub readers?

For the record, no existing tests failed with the change I made.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558820#comment-14558820
 ] 

Uwe Schindler commented on LUCENE-6499:
---

In any case +1, looks fine to me.

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFs has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while concurrently deleted which 
 should be prevented by the WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close which fails since we fail to read the key from the filesystem since 
 it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6500:


 Summary: ParallelCompositeReader does not always call closed 
listeners
 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


ParallelCompositeReader fails to call closed listeners when the reader which 
is provided at construction time does not wrap leaf readers directly, such as a 
multi reader over directory readers.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558911#comment-14558911
 ] 

Uwe Schindler edited comment on LUCENE-6500 at 5/26/15 9:23 AM:


Here is the problem, and why the patch is problematic:
The current logic is fine, but fails - as Adrien says - when you have a 
ParallelMultiReader on a MultiReader of other CompositeReaders.
His fix works fine in the test because the MultiReader has closeSubReaders=true 
- so it does no refcounting. If you have closeSubReaders=false on the parallel 
one and also on the child MultiReader, it will fail with too many decRefs 
(haven't tried).
I am not sure what the best fix looks like. Is this issue urgent? I have to 
think a night about it :-) I would prefer to have a better solution altogether, 
this refcounting is horrible...


was (Author: thetaphi):
Here is the problem, why the patch is problematic:
The current logic is fine, but fails - as Adrien says - when you have a 
ParallelMultiReader on a MultiReader of another CompositeReaders.
His fix works fine, if you have closeSubReader=false on the parallel on and 
also on the child MultiReaders. in that case it will fail with too many decRefs.
I am not sure how the best fix looks like. It this issue urgent? I have to 
think a night about it :-) I would prefer to have a better solution alltgether, 
this refcounting is horrible...

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 14 - Still Failing

2015-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/14/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9368, 
name=parallelCoreAdminExecutor-4389-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9368, 
name=parallelCoreAdminExecutor-4389-thread-15, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]
at 
__randomizedtesting.SeedInfo.seed([43F7A82DEDFEC004:CBA397F74302ADFC]:0)
Caused by: java.lang.AssertionError: Too many closes on SolrCore
at __randomizedtesting.SeedInfo.seed([43F7A82DEDFEC004]:0)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1150)
at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:652)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:611)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:628)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:213)
at 
org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1249)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:49735/vead/p]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:49735/vead/p]
at 
__randomizedtesting.SeedInfo.seed([43F7A82DEDFEC004:E8EE7EEAA79F0033]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:152)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr(TestSolrCloudWithKerberos.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 

[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558956#comment-14558956
 ] 

Karl Wright commented on LUCENE-6487:
-

Suggestion:
(1) Create a branch.  It's the only way to really do extensive changes with svn.
(2) Commit my changes.
(3) Make your changes and commit those.
(4) We can iterate as needed.


 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558911#comment-14558911
 ] 

Uwe Schindler edited comment on LUCENE-6500 at 5/26/15 9:23 AM:


Here is the problem, and why the patch is problematic:
The current logic is fine, but fails - as Adrien says - when you have a 
ParallelMultiReader on a MultiReader of other CompositeReaders.
His fix works fine in the test because the test's MultiReader has 
closeSubReaders=true - so it does no refcounting. If you have 
closeSubReaders=false on the parallel one and also on the child MultiReader, it 
will fail with too many decRefs (haven't tried).
I am not sure what the best fix looks like. Is this issue urgent? I have to 
think a night about it :-) I would prefer to have a better solution altogether, 
this refcounting is horrible...


was (Author: thetaphi):
Here is the problem, why the patch is problematic:
The current logic is fine, but fails - as Adrien says - when you have a 
ParallelMultiReader on a MultiReader of another CompositeReaders.
His fix works fine in the test because the MultiReader has closeSubReaders=true 
- so it does no refcounting. If you have closeSubReader=false on the parallel 
on and also on the child MultiReader. in that case it will fail with too many 
decRefs (haven't tried).
I am not sure how the best fix looks like. It this issue urgent? I have to 
think a night about it :-) I would prefer to have a better solution alltgether, 
this refcounting is horrible...

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558957#comment-14558957
 ] 

Dawid Weiss commented on LUCENE-6499:
-

That's why I said it doesn't make any practical difference ;)

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFs has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while concurrently deleted which 
 should be prevented by the WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close which fails since we fail to read the key from the filesystem since 
 it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558957#comment-14558957
 ] 

Dawid Weiss edited comment on LUCENE-6499 at 5/26/15 10:29 AM:
---

That's why I said it didn't make any practical difference ;)


was (Author: dweiss):
That's why I said it doesn't make any practical difference ;)

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFS has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while it is concurrently deleted, 
 which should be prevented by the WindowsFS with an IOException / access denied. 
 The problem is that we try to remove the leaked file handle from the internal 
 map on close, which fails because we cannot read the key from the filesystem 
 once it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558915#comment-14558915
 ] 

Uwe Schindler commented on LUCENE-6500:
---

bq. For the record, no existing tests failed with the change I made.

You removed this test:

{code:java}
-assertEquals(3, pr.leaves().size());
-
-for(LeafReaderContext cxt : pr.leaves()) {
-  cxt.reader().addReaderClosedListener(new ReaderClosedListener() {
-  @Override
-  public void onClose(IndexReader reader) {
-listenerClosedCount[0]++;
-  }
-});
-}
-pr.close();
-assertEquals(3, listenerClosedCount[0]);
{code}

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558926#comment-14558926
 ] 

Uwe Schindler commented on LUCENE-6500:
---

Another solution to fix this would be to also add all those deeper nested 
synthetic subreaders to the completeReaderSet (see the last line of the ctor). In that 
case they can keep doClose() empty (to not affect the refcount). I will try 
this out. I will also add a test for the case I mentioned before.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558930#comment-14558930
 ] 

Uwe Schindler commented on LUCENE-6500:
---

A second solution would be to give up on completely mirroring the whole nested 
structure. We could simply create a ParallelCompositeReader with a 
ParallelLeafReader for each leaf, leaving out all inner composites. This 
would simplify the whole thing considerably. There is (in my opinion) no reason 
not to flatten the structure. The alignment of readers would still match, as sketched below.
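
A rough sketch of what this flattened construction could look like (illustrative 
only, not the actual patch; it assumes both composites expose the same number of 
aligned leaves and uses the existing ParallelLeafReader and MultiReader classes):

{code:java}
// Illustrative sketch only, not the actual patch: pair up the leaves of two
// aligned composite readers directly, skipping the nested composite structure.
import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.CompositeReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.ParallelLeafReader;

class FlattenedParallelSketch {
  static IndexReader flatten(CompositeReader left, CompositeReader right) throws IOException {
    List<LeafReaderContext> a = left.leaves();   // assumed to line up 1:1 with...
    List<LeafReaderContext> b = right.leaves();  // ...the leaves of the other reader
    LeafReader[] paired = new LeafReader[a.size()];
    for (int i = 0; i < a.size(); i++) {
      paired[i] = new ParallelLeafReader(a.get(i).reader(), b.get(i).reader());
    }
    // One flat composite over the paired leaves; inner composites are never wrapped.
    return new MultiReader(paired);
  }
}
{code}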

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558940#comment-14558940
 ] 

Uwe Schindler commented on LUCENE-6499:
---

bq. The first one keeps an even starting line for all the threads involved; with 
countdownlatch you could have the countDown() thread proceed long before any 
other threads reach await.

I agree: if you have multiple threads starting, you don't know which one 
comes first. But the main thread here is already running; the assumption for 
the shotgun approach used in other tests is that spawning a thread takes some 
time, while an already running thread just runs and so always reaches the 
barrier point earlier. I am fine with both solutions :-)

In any case, neither the cyclic barrier nor the latch guarantees that all 
threads really start at the same time :-) It is just like horses waiting for 
the starting gun to be fired. If one of the horses breaks its leg while 
starting or wants to drink something...
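
For illustration, here is a minimal standalone sketch of the two start-line 
patterns being discussed (not code from the patch; the worker bodies are 
placeholders):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class StartLineDemo {
  public static void main(String[] args) throws Exception {
    // CyclicBarrier: every participant (including the main thread) waits until
    // all parties have arrived, then all of them are released together.
    final CyclicBarrier barrier = new CyclicBarrier(2);
    Thread worker = new Thread(() -> {
      try {
        barrier.await();           // worker waits for main...
        // ... do the racy work here
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    });
    worker.start();
    barrier.await();               // ...and main waits for the worker
    worker.join();

    // CountDownLatch: the thread that calls countDown() does not wait itself,
    // so it may proceed long before the awaiting threads are released.
    final CountDownLatch startGun = new CountDownLatch(1);
    Thread worker2 = new Thread(() -> {
      try {
        startGun.await();          // worker blocks until the gun fires
        // ... do the racy work here
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    worker2.start();
    startGun.countDown();          // main fires the gun and keeps running
    worker2.join();
  }
}
{code}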

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFS has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while it is concurrently deleted, 
 which should be prevented by the WindowsFS with an IOException / access denied. 
 The problem is that we try to remove the leaked file handle from the internal 
 map on close, which fails because we cannot read the key from the filesystem 
 once it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558956#comment-14558956
 ] 

Karl Wright edited comment on LUCENE-6487 at 5/26/15 11:02 AM:
---

Suggestion:
(1) Create a branch.  It's the only way to really do extensive changes with svn.
(2) Commit my changes.
(3) Make your changes and commit those.
(4) We can iterate as needed.

Also, FWIW, the only three files I touched were:
Plane.java
Vector.java
GeoPoint.java




was (Author: kwri...@metacarta.com):
Suggestion:
(1) Create a branch.  It's the only way to really do extensive changes with svn.
(2) Commit my changes.
(3) Make your changes and commit those.
(4) We can iterate as needed.


 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558975#comment-14558975
 ] 

Karl Wright commented on LUCENE-6487:
-

Also, when I copy the GeoBaseBBox.java class into place, and delete the old 
GeoBBoxBase class, I still get:

{code}
[javac] 
C:\wip\lucene\lucene-6487\lucene\spatial\src\test\org\apache\lucene\spatial\spatial4j\Geo3dShapeRectRelationTest.java:34:
 error: class Geo3dShapeSphereModelRectRelationTest is public, should be 
declared in a file named Geo3dShapeSphereModelRectRelationTest.java
[javac] public class Geo3dShapeSphereModelRectRelationTest extends 
Geo3dShapeRectRelationTestCase {
[javac]^
[javac] 1 error
{code}

I'm assuming you intended to copy the former class into the latter but somehow 
overwrote the former?  I'll try that and see if I can get it to build...

 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6487) Add WGS84 capability to geo3d support

2015-05-26 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6487:

Attachment: LUCENE-6487.patch

Resolved outstanding build issues from David's patch and added what was 
requested.


 Add WGS84 capability to geo3d support
 -

 Key: LUCENE-6487
 URL: https://issues.apache.org/jira/browse/LUCENE-6487
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: Karl Wright
 Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, 
 LUCENE-6487.patch


 WGS84 compatibility has been requested for geo3d.  This involves working with 
 an ellipsoid rather than a unit sphere.  The general formula for an ellipsoid 
 is:
 x^2/a^2 + y^2/b^2 + z^2/c^2 = 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7593) admin-extra-top not visible after selecting other menu items

2015-05-26 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-7593:
---

 Summary: admin-extra-top not visible after selecting other menu 
items
 Key: SOLR-7593
 URL: https://issues.apache.org/jira/browse/SOLR-7593
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 5.1
Reporter: Markus Jelsma
Priority: Minor
 Fix For: 5.2


Reproduce:

# bin/solr start
# bin/solr create_core -c test -d server/solr/configsets/basic_configs
# echo blablabla > server/solr/test/conf/admin-extra.menu-top.html

Then open http://localhost:8983/solr/#/test and click on any item in the core's 
menu other than Overview; the admin-extra content has now disappeared. This is also 
true for menu-bottom.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7593) admin-extra-top not visible after selecting other menu items

2015-05-26 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559056#comment-14559056
 ] 

Upayavira commented on SOLR-7593:
-

Can I ask, what are you using adminExtra for? Solr 5.2 will include an 
AngularJS version of the admin UI, which doesn't yet have support for admin 
extra, partly because I don't understand how/why people would use it. The hope 
is that from 5.3 it will be the default.

If I can understand what your use-case is, I can make sure that the feature is 
supported in the AngularJS version of the UI.

 admin-extra-top not visible after selecting other menu items
 

 Key: SOLR-7593
 URL: https://issues.apache.org/jira/browse/SOLR-7593
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 5.1
Reporter: Markus Jelsma
Priority: Minor
 Fix For: 5.2


 Reproduce:
 # bin/solr start
 # bin/solr create_core -c test -d server/solr/configsets/basic_configs
 # echo blablabla > server/solr/test/conf/admin-extra.menu-top.html
 Then open http://localhost:8983/solr/#/test and click on any item in the 
 core's menu other than Overview; the admin-extra content has now disappeared. This is 
 also true for menu-bottom.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b12) - Build # 12828 - Failure!

2015-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12828/
Java: 64bit/jdk1.8.0_60-ea-b12 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls

Error Message:
Shard split did not complete. Last recorded state: running 
expected:[completed] but was:[running]

Stack Trace:
org.junit.ComparisonFailure: Shard split did not complete. Last recorded state: 
running expected:[completed] but was:[running]
at 
__randomizedtesting.SeedInfo.seed([B495C39FA53AE57E:ECF14FFEA3504DAA]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7468) Kerberos authentication module

2015-05-26 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7468:
---
Attachment: SOLR-7468-alt-test.patch

Updating the patch. Now client and server use different principals 
(s...@example.com and HTTP/127.0@example.com).

I'm running this patch at my jenkins:
1. Jenkins: http://162.244.24.210:8080/job/Anshum-Solr-7468/
Source: https://github.com/anshumg/lucene-solr/tree/SOLR-7468
2. Another version of this patch with external KDC instead of minikdc:
Jenkins: http://162.244.24.210:8080/job/Anshum-Solr-7468-With-External-KDC/
Source: https://github.com/anshumg/lucene-solr/tree/SOLR-7468-with-external-kdc

So far both look to be passing at the moment, but I'll give it a few more runs 
before confirming. If (2) passes consistently, we can infer that there is some 
problem with minikdc that is causing the failures.

[~gchanan], do you have any experience testing the Kerberos integration of 
Cloudera's Solr with MiniKDC vs. an external KDC? Do you see any problem with the 
SOLR-7468 plugin/test code? Looking forward to your valuable input.

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12644 - Failure!

2015-05-26 Thread Ishan Chattopadhyaya
 The thing with randomized testing and test harness is that it's
 supposed to make your life easier -- to uncover things you wouldn't
 think about

I am not sure randomized testing is of any help here. Isolated runs of this
test always pass for every seed. During full suite runs, the test
sometimes passes and sometimes fails. Hence, I've not been able to set my
debugger on the test and reproduce. I've added another patch at SOLR-7468,
which Anshum and I are testing right now. Hopefully that fixes it.

 This does not mean the code is correct, only that you were lucky :)
I'm beginning to lose faith in hadoop-minikdc and hence we're also testing
the same thing using an external KDC, to make sure there's no code issue.



On Tue, May 26, 2015 at 2:15 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl
wrote:

 Ah, ok. Yes, I didn't track the context that much, I know Mark's been
 trying to straighten out those tests but I don't follow that closely
 -- too much going on in my own field.

 Dawid

 On Tue, May 26, 2015 at 10:36 AM, Anshum Gupta ans...@anshumgupta.net
 wrote:
  I think you misunderstood me there. I'm not talking about not using the
 test
  framework at all, but parts of it. e.g. how the test using
  MiniSolrCloudCluster follows a different approach as compared to other
  SolrCloud tests. I forgot to update here but I've finally figured out why it
  never failed for me (I had a default realm set in my /etc/krb5.conf file
 on
  my machine).
  So yes, I'm just trying to find a way to test this part in the correct
  manner, and it may just involve an approach that is different from what
 most
  tests currently use. I hope that makes sense.
 
  On Tue, May 26, 2015 at 12:07 AM, Dawid Weiss 
 dawid.we...@cs.put.poznan.pl
  wrote:
 
   Right, but I've had about 10 successful runs even since my last
 checkin.
 
  This does not mean the code is correct, only that you were lucky :)
  And the fact it still failed in spite of your efforts is not something
  to be ashamed of -- it's a sign you did a lot and there's *still*
  something wrong.
 
  The thing with randomized testing and test harness is that it's
  supposed to make your life easier -- to uncover things you wouldn't
  think about (or wouldn't have a chance to test, as is the case with
  filesystem emulation layers). Resigning from all this infrastructure
  and writing tests in plain JUnit runner would be dodging the problem,
  not solving it. Sure, it's not easy. And sure, it's a pain in the
  arse. But it's also gratifying to know you nailed the problem once you
  find it.
 
  Dawid
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 
  --
  Anshum Gupta

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Issue Comment Deleted] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6500:
--
Comment: was deleted

(was: bq. For the record, no existing tests failed with the change I made.

You removed this test:

{code:java}
-assertEquals(3, pr.leaves().size());
-
-for(LeafReaderContext cxt : pr.leaves()) {
-  cxt.reader().addReaderClosedListener(new ReaderClosedListener() {
-  @Override
-  public void onClose(IndexReader reader) {
-listenerClosedCount[0]++;
-  }
-});
-}
-pr.close();
-assertEquals(3, listenerClosedCount[0]);
{code})

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559022#comment-14559022
 ] 

Adrien Grand commented on LUCENE-6500:
--

bq. Is this issue urgent?

No, I don't think it's urgent. The reason I opened it is just that we hit a 
test failure in Elasticsearch which was due to core closed listeners not being 
called, and the root cause was this issue. But we don't use 
ParallelCompositeReader, it was introduced by LuceneTestCase.newSearcher.

bq. You removed this test:

I think it's still here, I just refactored the test to make it easier to test 
all combinations of `closeSubReaders` and whether sub readers are leaf or 
composite readers. You can try to revert changes in src/test and 
TestParallelCompositeReader will still pass.

bq. Another solution to fix this would be to also add all those deeper nested 
synthetic subreaders to the completeReaderSet (see the last line of the ctor). In that 
case they can keep doClose() empty (to not affect the refcount). I will try 
this out.

Oh I see, so the bug is that we are not adding all created synthetic readers to 
this set currently. This sounds like a good fix to me.

I like that the second option would, as you mentioned, simplify the whole 
thing, but I'm also afraid it would be a more significant change. So maybe we 
can first apply your first idea to fix the bug, and later think about whether it 
would make sense to flatten the whole IndexReader structure to simplify this 
class.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559025#comment-14559025
 ] 

Uwe Schindler commented on LUCENE-6500:
---

Hi, I implemented the last approach. It makes the code much simpler. The 
ParallelCompositeReader now has the same leaves, but the structure is 
flattened, so the inner composite readers are never wrapped. We just 
build a new reader with the same leaves as the original, but a flattened 
structure. I will now just add a test that it also works with MultiReaders that 
are marked as not closing their sub-readers (which are incRef'ed and decRef'ed on close).

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6500) ParallelCompositeReader does not always call closed listeners

2015-05-26 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6500:
--
Attachment: LUCENE-6500-flatten.patch

Here is my patch. The code that builds the reader got much simpler (as a nice side 
effect). This is also much easier to understand.

[~jpountz]: what do you think?

We can also open a new issue that takes care of the flattening, and have this one just 
note that it is superseded by the new one.

 ParallelCompositeReader does not always call closed listeners
 -

 Key: LUCENE-6500
 URL: https://issues.apache.org/jira/browse/LUCENE-6500
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6500-flatten.patch, LUCENE-6500.patch


 ParallelCompositeReader fails to call closed listeners when the reader which 
 is provided at construction time does not wrap leaf readers directly, such as 
 a multi reader over directory readers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_45) - Build # 12644 - Failure!

2015-05-26 Thread Mark Miller
FWIW, this test almost always fails on my local Jenkins
machine: org.apache.solr.cloud.TestSolrCloudWithKerberos.testKerberizedSolr
(Failed 16 times in the last 21 runs. Stability: 23 %)

I've also seen it fail 2 or 3 times on my primary dev machine in the last
couple of days.

- Mark

On Tue, May 26, 2015 at 9:05 AM Ishan Chattopadhyaya 
ichattopadhy...@gmail.com wrote:

  The thing with randomized testing and test harness is that it's
  supposed to make your life easier -- to uncover things you wouldn't
  think about

 I am not sure randomized testing is of any help here. Isolated runs of
 this test always pass for every seed. During full suite runs, the test
 sometimes passes and sometimes fails. Hence, I've not been able to set my
 debugger on the test and reproduce. I've added another patch at SOLR-7468,
 which Anshum and I are testing right now. Hopefully that fixes it.


  This does not mean the code is correct, only that you were lucky :)
 I'm beginning to lose faith in hadoop-minikdc and hence we're also testing
 the same thing using an external KDC, to make sure there's no code issue.



 On Tue, May 26, 2015 at 2:15 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl
  wrote:

 Ah, ok. Yes, I didn't track the context that much, I know Mark's been
 trying to straighten out those tests but I don't follow that closely
 -- too much going on in my own field.

 Dawid

 On Tue, May 26, 2015 at 10:36 AM, Anshum Gupta ans...@anshumgupta.net
 wrote:
  I think you misunderstood me there. I'm not talking about not using the
 test
  framework at all, but parts of it. e.g. how the test using
  MiniSolrCloudCluster follows a different approach as compared to other
  SolrCloud tests. I forgot to update here but I've finally figured out why it
  never failed for me (I had a default realm set in my /etc/krb5.conf
 file on
  my machine).
  So yes, I'm just trying to find a way to test this part in the correct
  manner, and it may just involve an approach that is different from what
 most
  tests currently use. I hope that makes sense.
 
  On Tue, May 26, 2015 at 12:07 AM, Dawid Weiss 
 dawid.we...@cs.put.poznan.pl
  wrote:
 
   Right, but I've had about 10 successful runs even since my last
 checkin.
 
  This does not mean the code is correct, only that you were lucky :)
  And the fact it still failed in spite of your efforts is not something
  to be ashamed of -- it's a sign you did a lot and there's *still*
  something wrong.
 
  The thing with randomized testing and test harness is that it's
  supposed to make your life easier -- to uncover things you wouldn't
  think about (or wouldn't have a chance to test, as is the case with
  filesystem emulation layers). Resigning from all this infrastructure
  and writing tests in plain JUnit runner would be dodging the problem,
  not solving it. Sure, it's not easy. And sure, it's a pain in the
  arse. But it's also gratifying to know you nailed the problem once you
  find it.
 
  Dawid
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 
  --
  Anshum Gupta

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Created] (SOLR-7597) TestRandomFaceting.testRandomFaceting failure

2015-05-26 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-7597:


 Summary: TestRandomFaceting.testRandomFaceting failure
 Key: SOLR-7597
 URL: https://issues.apache.org/jira/browse/SOLR-7597
 Project: Solr
  Issue Type: Bug
  Components: faceting
Affects Versions: Trunk, 5.2
Reporter: Steve Rowe


Reproduces on trunk and branch_5x:

{noformat}
ant test  -Dtestcase=TestRandomFaceting -Dtests.method=testRandomFaceting 
-Dtests.seed=2513EC725E45D7D5 -Dtests.slow=true -Dtests.locale=es_CU 
-Dtests.timezone=ROK -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
{noformat}

{noformat}
   [junit4]   2 12030 T11 C18 oasc.SolrException.log ERROR 
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
Cannot parse 'id:(OPIM JTRN JUIQ BVUJ JZFO HEIX UXEU IGVD GCIY ZYEV GDJV YTWX 
LMEF IWFQ JLKK CUUN UNRP ZCUP CVDM GHJD WAHV TCDX ZVDY NAJM NRIX JDCL XZWR MVKC 
YXIB VCFT AEDD XYRW TZRV LYKR ODDS LSUF VTNE FPOF ALPR AAOF VAOW LUQW OVNK LKWJ 
MOQE HXPC IHXD GRXC SKBD EFFJ OEJC LPJH MORL KETU CMBS ALPE QSMT ROCO ZSFL ELUP 
MLTC RNXM XIZZ XIBV ZYIN GHRZ MWJK UCOL REOP OYPZ FTGY NJLY IYPP GHVN PRCW KUCW 
LENR ABVU GQWC BFDB QWYK WNGI ZPWD BDNC GYJL GOZA JRSC CKEM TJVV KEMQ IHQU HNIQ 
HLKJ UWEM TCHG EUQI ZUBH RONB BZDN KWJR BQZN IPYD OSLQ HCBQ MIZR DPYV YTLV ZVAD 
SUYR ZXCC ZTYW XWVA NKIL AXOK GJOB MYLO JMUU DTDI DFLY TPCE MWOH PXRM JJBG AWIQ 
QMEG XNEW STED NKPW HJDB QVWQ MDTW JNXX BCFL MJPL XPWR XXXB PFMN FWLC CTCI YRNY 
ZETI KVCG PBAC UJCZ IIWL KQWT WSHW OYPB IVEQ OHOC YCAR MZJM SYGM JNHQ USBZ MLMO 
RIBP VCGN UATJ JLSX XZCR YKVB DQTT JLXW XQKA LFGU TKSI ISXG IECJ TSZN ICDA CNLE 
EOFY MYUJ NFST FKCH RSMU XPUK ZAIO SZVS DJAJ BDMV OFQV LVWP BOJB OZWN UTDB GMLY 
RJGI HBYX ZRLW VMBZ JOKG OZSL JDCM FLJZ JIIR VUMR BIKZ OVSU FRKK CLYQ EQTN QKQR 
AUYT TEHX ESUL ERYX YDYZ HRUZ VNJR GUPH RJBB ECJF NEUE FHZS YWZD DTGG KNHF BBIR 
BNBK FLRO BXNC EOVD BCCK IQNX FZGN PMRY JDCB BMEW CCUF XYZN YKDV VMRL KTOO NZAF 
XMBF VYFS AQUV GUFG PRTD YPRM QRGX JXOG MZYE GAQF ICPQ LCPM XXPI OSDZ VTEP RTMB 
DNHW FNZH DGGZ QICW XZZD WAZV RKVP WUCK SIOJ TWDE XWON PGJM GDRU KRYM YHXA SKTW 
UNCV YWJS KRXO IXFL VZFQ YIKQ RIWZ XJSM AWNU VIRQ YTJR DNGD AIDE GAVB NAYK ACJM 
JXOP GHKW BDGQ ORQD VXLF WQAX JZHM GASX CDBH EANO KYDK SPSL TFHV HFHI AYBX HCTT 
HOYE YWDJ YFDB YFNQ ZGNF DSSV SCUT IPLI NHZL HYED VTGA DGQT NRDA KMFR LIVF HGCH 
ZXKV OZHF ETKX HEQX JUYP UVRJ FSRL EUOY XAER JMJK TQQE SZRD FTFS ZHSH AXGC LLOF 
DQFO ORPW EBTN RXRE WUCC SACF TQQM ECTU VIBW YRHJ YANS QYYC DLFW KFZP QPTA GTYN 
DLSH HEBE BKQR ZKNC ADVM KOBU TJEC RUYU DBPW PPLF QOAE NSZQ YPLN ROMQ ZXDM FZDP 
IAWX XLHC PAKI CQSX QTOE KLLR QYVM EENH SGBP BNCT PJRU ONVH BKTC QBSA NIDU PQYZ 
ESRO HPSN DICT VMUU RXLC BCKG XPHV SJDA BWBB GTBV ZDTW VGEL PMLP QBJJ AFLF FBNX 
LFCU OYBD PXSQ UCLB KOEZ JZHT JDGF XNUP GUVM BJVC ZLFS OYBK YCAP LWDW ZAAP SNIA 
QWLY OAIB CVZH EPWB WPJK NEBO QEQQ DIQS FYYT REAY IHSC HTMF IYBF GNWP AOPO COLA 
HVDT MBCU HEHJ JVZE WDZA GZMD HSNQ WBGQ EDAS JCVP TFHW KCLS FBDZ YADX GWAF UNMF 
YAAB MAYF SEVC JXQR MMHM CLNC JZYG OYXT MGSB TTPW AOEK OKPO DJOU MBOL WNML QNMN 
SNLI EFRM ZTDE NFBW BABV MZYB UQZR NFWU GOTZ VJIB VMSB REON BNLD KIMC RABW WFYS 
GQVX SKEC YGZY XRTP YEEK JIBD HUDU JEPT VEYH JYQK TMKK PAWS JITH PMAO TPTZ WEYZ 
DDUJ IGIN PLJT UVCI WHUD FAOA TMDV YBDF SGHO ULJY HOIY VLJU FRLE GCJJ TSQG CAFT 
HCLB YMKE ASHE YLCO KOLP KVWS EMQL ELDN KWSF KTEG SMOB ZEWF CPQG JRIC CHNN GPTI 
XJBP KUGV SBPS QYIG EJBW UYKD EMYP YVVP CRLR HYFS ODEV LHUB MTEP CXOY QETZ RAQK 
CZBD ZGHU WMEP LBUU OKPM VZMV EDKB ONHB FVBQ VIVC ETEM NCSI ZYEL KTHU HVYM HNFW 
YLEB HOPD GSBH VJVE FFVD SEAX ZRJH DQSF VEUR WUDN BYNW EBET YPVJ SPYS UYQR PUYQ 
CJLU VTHG FBNW TXOE AYSJ IVBU CUDZ QJLL XDQC VAEU RGLA BHFZ WAJW ANBX GBPB NEVF 
PFOX ONQB HXQH DRQH VCSE GIAZ CGAA UZPL LDMW VQDV YSOG LTDO VEOA IKRM GGTX ERKK 
RDFP ZPSU ZUPA FCYI CJLK HWJS UNNF QAWB GPYN FOLP VSZK JDGE YNLI KUUZ LZPP NNQU 
PSLS OUUR BNIZ FAIU CCZB DDIA UZYF CDOS DRJD SJBA UMNM FUWT BBHG UJYC VNVK LKJJ 
KKAE RFLP RRJU UBZL XGLA FAER AKIZ IXIA HIVQ YFOC MUPX ICIU HSFN HJGW GXFU CHEP 
EJYJ LHOR DORN XRSI JKXX PXDS WSHT NYUX SCGQ JFJX WNZM HHIC IREW ZHFF JAEN WIYL 
POZR RKHB XQGY LTDM SLLM ILGO UMQS PKBN VZCA QIGT KPSW BWAR RNBY IHTO HHPU FOSA 
AHCE KEBI WGBP SZIH QZSA PWIC EIUL CXLA LONX DKCC LSSD YITB HHVL MDLR XFNZ QDLR 
AWVK JVVX CQUD JHBL HEBJ HPQY NBFA PZIX FJHM PDPC GYZV YKKY OGAE RQWY FVXG QPBP 
PJHD WASM IXHB XPCW RPEK RGIY MXEC RJXT PFTW EKKR DFVE DRUA QKPT JCQR HJAL PEIW 
KNJE ONFL NBEK ATMK SCRS WBSZ IIBI ZSHG LJBK AKGZ COGL PJDQ ETXH THAT WZVS DUWW 
IVGL CMUY LUNG SGUJ LWBI UCCZ IWVT TMVH CNEV JHKU UGRA BQDH RAEU JHIH QYHY ALNK 
NGZP WXHL HOCH LMWR FGGC TZUE JDOJ JLOA WLTG MSEV JRSU UMIQ ZWNW HEGI AUYO KWGB 
NDHQ LREJ POJZ GZBK RSHE WCEA TPDH CFWD EFGY SCBK HFLB VNON FXLD EQQZ MGCZ ZWJT 
MBWE RFVU DOSP RNKO KRCC DSNU KVCN LVVP OMNX VDQE CGDN DOCU YSDE ZOLE WUVJ IEAS 
EQSA ROBH ELWT GEMQ USKI MSCA IWSW HZVY VCXS TFLW DLED LVKM 

[jira] [Commented] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-26 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559516#comment-14559516
 ] 

Timothy Potter commented on SOLR-7587:
--

I have a little more information about what caused this failure. I had to dig into 
the JavaDoc for ReentrantReadWriteLock a bit and found this little gem:

{quote}
Reentrancy also allows downgrading from the write lock to a read lock, by 
acquiring the write lock, then the read lock and then releasing the write lock. 
However, upgrading from a read lock to the write lock is not possible.
{quote}
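
A tiny standalone demo of that behavior (independent of the Solr code):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockUpgradeDemo {
  public static void main(String[] args) {
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Downgrading works: acquire the write lock, then the read lock,
    // then release the write lock.
    lock.writeLock().lock();
    lock.readLock().lock();
    lock.writeLock().unlock();
    lock.readLock().unlock();

    // Upgrading does not: while holding the read lock, tryLock() on the
    // write lock always fails for this thread, and a blocking lock() call
    // would deadlock against our own read lock.
    lock.readLock().lock();
    System.out.println("upgraded? " + lock.writeLock().tryLock()); // prints false
    lock.readLock().unlock();
  }
}
{code}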

All the test failures caused by this situation occurred during a commit. 
Commits acquire a read-lock on the VersionInfo object (see the 
{{DistributedUpdateProcessor#versionAdd}} method). My code introduced the need 
to acquire the write-lock and, as we learned above, you can't upgrade a 
read-lock to a write-lock. The problem is where I had this code; specifically, I 
hung it off of the code that handles {{firstSearcher}} events, since I need a 
searcher in order to look up the max value from the index to seed the version 
buckets with. But given all this, it seems like the test should fail consistently every 
time, which is not the case. So clearly there's some timing involved in this 
failure. This code only fires when {{currSearcher == null}}, and I don't get how 
that could be the case at the point where the test is sending a commit (see below).

{code}
at org.apache.solr.update.VersionInfo.blockUpdates(VersionInfo.java:118)
at org.apache.solr.update.UpdateLog.onFirstSearcher(UpdateLog.java:1604)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1810)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1505)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:617)
- locked 0xf6f09a10 (a java.lang.Object)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1635)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1612)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:161)
at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2051)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:502)
at 
org.apache.solr.client.solrj.response.TestSpellCheckResponse.testSpellCheckResponse(TestSpellCheckResponse.java:51)
{code}

The searcher gets registered in futures, but it seems unlikely that the test should 
get this far before the searcher opened during core initialization is set as 
the currSearcher. At any rate, the patch I submitted moves the bucket seeding 
code (which needs a write-lock) out of the firstSearcher code path and into the 
SolrCore ctor, which fixes the issue of needing a write-lock when a read-lock 
has already been acquired for a commit operation. It's still a question in my 
mind as to how the test can get to sending a commit when {{currSearcher == 
null}} ... any thoughts on that?

 TestSpellCheckResponse stalled and never timed out -- possible VersionBucket 
 bug? (5.2 branch)
 --

 Key: SOLR-7587
 URL: https://issues.apache.org/jira/browse/SOLR-7587
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 5.2

 Attachments: SOLR-7587.patch, jstack.1.txt, jstack.2.txt, 
 junit4-J0-20150522_181244_599.events, junit4-J0-20150522_181244_599.spill, 
 junit4-J0-20150522_181244_599.suites


 On the 5.2 branch (r1681250), I encountered a solrj test stalled for over 110 
 minutes before I finally killed it...
 {noformat}
[junit4] Suite: org.apache.solr.common.util.TestRetryUtil
[junit4] Completed [55/60] on J1 in 1.04s, 1 test
[junit4] 
[junit4] HEARTBEAT J0 PID(12147@tray): 2015-05-22T18:14:56, stalled for  
 121s at: 

[jira] [Commented] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559530#comment-14559530
 ] 

Greg Solovyev commented on SOLR-7583:
-


Downloading a zipped snapshot:

http://solr-host:solr-port/collection-name/replication?command=download&backupname=snapshotname&wt=filestream

{code:java}
InputStream stream = null;
URL url = new URL(requestURL);
stream = url.openStream();
FastInputStream is = new FastInputStream((InputStream) stream);  
String fileName = "downloaded." + backupName + ".zip";
String savePath = 
createTempDir().resolve(fileName).toAbsolutePath().toString();
File outFile = new File(savePath);
FileOutputStream fos = new FileOutputStream(outFile);
Long totalRead = 0L;
try {
  byte[] longbytes = new byte[8];
  is.readFully(longbytes);
  Long fileSize = readLong(longbytes);
  while(fileSize > totalRead) {
//store bytes representing packet size here
byte[] intbytes = new byte[4]; 
//read packet size
is.readFully(intbytes);
int packetSize = readInt(intbytes);
//read the packet
byte[] buf = new byte[packetSize];
is.readFully(buf, 0, packetSize);
fos.write(buf);
fos.flush();
totalRead+=(long)packetSize;
  }
} finally  {
  //close streams
  IOUtils.closeQuietly(is);
  fos.close();  
}
{code}


 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch, SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
 requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 this will allow us to create and fetch fully contained archives of customer 
 data where each backup archive will contain Solr index as well as other 
 customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7583:

Attachment: SOLR-7583.patch

Updated patch for trunk. Patch includes:
 - API + unit tests for downloading zipped snapshot
 - API + unit tests for restoring a core by uploading snapshot files
 - API + unit tests for restoring a core by uploading a zipped snapshot 
directory


 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch, SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
 requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 this will allow us to create and fetch fully contained archives of customer 
 data where each backup archive will contain Solr index as well as other 
 customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7389) Expose znode version in clusterstatus API

2015-05-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559532#comment-14559532
 ] 

Shalin Shekhar Mangar commented on SOLR-7389:
-

No problem, thank you for your contribution!

 Expose znode version in clusterstatus API
 -

 Key: SOLR-7389
 URL: https://issues.apache.org/jira/browse/SOLR-7389
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
  Labels: difficulty-easy, impact-medium
 Fix For: Trunk, 5.3

 Attachments: SOLR-7389.patch, SOLR-7389.patch, SOLR-7389.patch


 We should expose the znode version of the cluster state for each collection 
 that is returned by the clusterstatus API.
 Apart from giving an idea of when the clusterstatus was executed, this 
 information allows non-Java clients to use the same _stateVer_ 
 mechanism that SolrJ currently uses for routing requests without watching all 
 cluster states.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559539#comment-14559539
 ] 

Greg Solovyev commented on SOLR-7583:
-

Restoring a core by uploading a zipped snapshot:
{code:java}
ContentStreamUpdateRequest restoreReq = new ContentStreamUpdateRequest("/replication");
restoreReq.setParam(ReplicationHandler.COMMAND, ReplicationHandler.CMD_RESTORE);
restoreReq.setParam(ReplicationHandler.FILE_FORMAT, ReplicationHandler.FILE_FORMAT_ZIP);
restoreReq.addFile(zipFileOutput, "application/octet-stream");

HttpSolrClient client = new HttpSolrClient("http://localhost:8983/" + restoredCoreName);
NamedList<Object> result = client.request(restoreReq);
{code}

 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch, SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
 requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 this will allow us to create and fetch fully contained archives of customer 
 data where each backup archive will contain Solr index as well as other 
 customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559544#comment-14559544
 ] 

Greg Solovyev commented on SOLR-7583:
-

Restoring a core by uploading a snapshot folder:
{code:java}
ContentStreamUpdateRequest restoreReq = new ContentStreamUpdateRequest("/replication");
restoreReq.setParam("command", ReplicationHandler.CMD_RESTORE);
files = tmpBackupDir.listFiles(); // folder where we have the snapshot files
haveFiles = false;
if (files != null) {
  for (File f : files) {
    if (f != null && f.getName() != null && f.exists() && f.length() > 0) {
      haveFiles = true;
      restoreReq.addFile(f, "application/octet-stream");
    }
  }
}
{code}
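
As with the zipped-snapshot example in the previous comment, the request built 
above presumably still has to be sent to the core being restored, along these 
lines (URL and core name are illustrative):

{code:java}
// Mirrors the zipped-snapshot example above: send the restore request.
HttpSolrClient client = new HttpSolrClient("http://localhost:8983/" + restoredCoreName);
NamedList<Object> result = client.request(restoreReq);
{code}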

 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch, SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
 requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 this will allow us to create and fetch fully contained archives of customer 
 data where each backup archive will contain Solr index as well as other 
 customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7583:

Attachment: SOLR-7583.patch

cleaned up the patch

 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch, SOLR-7583.patch, SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a Replica that has the snapshot with 
 requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 this will allow us to create and fetch fully contained archives of customer 
 data where each backup archive will contain Solr index as well as other 
 customer data (DB, metadata, files, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5955) Add config templates to SolrCloud.

2015-05-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559571#comment-14559571
 ] 

Tomás Fernández Löbbe commented on SOLR-5955:
-

bq. Any config-set can be a template for creating another config-set. Any 
config-set can be marked as immutable.
+1

 Add config templates to SolrCloud.
 --

 Key: SOLR-5955
 URL: https://issues.apache.org/jira/browse/SOLR-5955
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
 Attachments: SOLR-5955.patch


 You should be able to upload config sets to a templates location and then 
 specify a template as your starting config when creating new collections via 
 REST API. We can have a default template that we ship with.
 This will let you create collections from scratch via REST API, and then you 
 can use things like the schema REST API to customize the template config to 
 your needs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-26 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559570#comment-14559570
 ] 

Gregory Chanan commented on SOLR-7468:
--

bq. Gregory Chanan, do you have any experience testing the Kerberos 
integration of Cloudera's Solr with MiniKDC vs. an external KDC? Do you see any 
problem with the SOLR-7468 plugin/test code? Looking forward to your valuable 
input.

For Solr proper, we use an external KDC. The only thing we use the MiniKDC for is 
SOLR-6915.  I should have time to look at this tomorrow.

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7595) Allow method chaining for all CollectionAdminRequest

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559588#comment-14559588
 ] 

ASF subversion and git services commented on SOLR-7595:
---

Commit 1681811 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1681811 ]

SOLR-7595: Allow method chaining for all CollectionAdminRequests in Solrj

 Allow method chaining for all CollectionAdminRequest
 

 Key: SOLR-7595
 URL: https://issues.apache.org/jira/browse/SOLR-7595
 Project: Solr
  Issue Type: Improvement
  Components: clients - java, SolrJ
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-7595.patch


 Allow methods to be chained for all CollectionAdminRequests so that code like 
 the following can be written:
 {code}
 Create createCollectionRequest = new Create()
   .setCollectionName("testasynccollectioncreation")
   .setNumShards(1)
   .setConfigName("conf1")
   .setAsyncId("1001");
 {code}
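 A hedged usage note: once built, the chained request would be submitted through 
 the normal SolrJ path, e.g. (assuming an existing {{SolrClient}} named {{solrClient}}):
 {code}
 // Illustrative only: execute the chained request built above.
 solrClient.request(createCollectionRequest);
 {code}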



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7583) API to download snapshot files/restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev updated SOLR-7583:

Summary: API to download snapshot files/restore via upload  (was: API to 
download a snapshot by name)

 API to download snapshot files/restore via upload
 -

 Key: SOLR-7583
 URL: https://issues.apache.org/jira/browse/SOLR-7583
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Greg Solovyev
 Attachments: SOLR-7583.patch


 What we are looking for:
 SolrCloud and Solr should have APIs to download a snapshot via HTTP. 
 For single-node Solr, this API will find a snapshot and stream it back over 
 HTTP. For SolrCloud, this API will find a replica that has the snapshot with 
 the requested name and stream the snapshot from that replica. Since there are 
 multiple files inside a snapshot, the API should probably zip the snapshot 
 folder before sending it back to the client.
 Why we need this:
 This will allow us to create and fetch fully contained archives of customer 
 data, where each backup archive will contain the Solr index as well as other 
 customer data (DB, metadata, files, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7567) Replication handler to support restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Solovyev resolved SOLR-7567.
-
Resolution: Duplicate

patch for SOLR-7585 includes this feature

 Replication handler to support restore via upload
 -

 Key: SOLR-7567
 URL: https://issues.apache.org/jira/browse/SOLR-7567
 Project: Solr
  Issue Type: Sub-task
Reporter: Greg Solovyev
 Fix For: Trunk

 Attachments: SOLR-7567.patch, SOLR-7567.patch


 Sometimes the snapshot is not available on a file system that can be accessed 
 by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
 to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7567) Replication handler to support restore via upload

2015-05-26 Thread Greg Solovyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559503#comment-14559503
 ] 

Greg Solovyev edited comment on SOLR-7567 at 5/26/15 6:04 PM:
--

patch for SOLR-7583 includes this feature


was (Author: grishick):
patch for SOLR-7585 includes this feature

 Replication handler to support restore via upload
 -

 Key: SOLR-7567
 URL: https://issues.apache.org/jira/browse/SOLR-7567
 Project: Solr
  Issue Type: Sub-task
Reporter: Greg Solovyev
 Fix For: Trunk

 Attachments: SOLR-7567.patch, SOLR-7567.patch


 Sometimes the snapshot is not available on a file system that can be accessed 
 by Solr or SolrCloud. It would be useful to be able to send snapshot  files 
 to Solr over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Configsets and Config APIs in Solr

2015-05-26 Thread Tomás Fernández Löbbe
I think there are still open questions based on people's comments in this
email thread and in SOLR-5955. It seems the concept of a mutable/immutable
ConfigSet should be supported but not forced. Then, what is a mutable
ConfigSet? One that can be edited via API calls? And should those Config
APIs live at the collection level or be a separate API? Do we also want to
support collection-specific configuration changes?

Accepting that a shared ConfigSet can be edited through collection-specific
operations is, I think, a bad idea.

Tomás

On Mon, May 25, 2015 at 6:58 AM, Noble Paul noble.p...@gmail.com wrote:

  but I think it would be better if the mutable part is placed under
 /collections/x/..., otherwise /configs

 it makes sense.

 The problem we have is managedschema currently writes to the same

 If we could change the managedschema behavior somehow it would have been
 better

 On Fri, May 22, 2015 at 10:21 PM, Tomás Fernández Löbbe
 tomasflo...@gmail.com wrote:
  TLDR: we should think about this as configset base vs per-collection
 diff,
  not as immutable base vs per-collection mutable.
 
  Makes sense, I was mostly thinking of it being immutable from the current
  Config APIs. Editing a configset for multiple collections is a valid and
  useful feature; the problem is doing that from inside one collection's API
  call.
 
  So then the question becomes, do we want an API that can *also* make
  collection-specific changes to a shared config?
 
  If we feel there is no need for collection-specific config changes, I'm
 OK,
  but again, the API should be outside of the collection, like a Configset
  API. The "generate configset based on X" operation should also be a command of this
  API. In addition, this could allow users to edit a configset that's not
  currently being used by any collection.
 
  Tomás
 
 
  On Fri, May 22, 2015 at 7:10 AM, Yonik Seeley ysee...@gmail.com wrote:
 
  Makes sense Greg.
 
  Just looking at it from the ZK perspective (APIs aside), the original
  idea behind referencing a config set by name was so that you could
  change it in one place and everyone relying on it would get the
  changes.
 
  If one wants collections to have separate independent config sets they
  can already do that.
 
  So then the question becomes, do we want an API that can *also* make
  collection-specific changes to a shared config?
 
  An alternative would be a command to make a copy of a config set, and
  a command to switch a specific collection to use that new config set.
  Then any further changes would be collection specific.  That's sort of
  like SOLR-5955 - config templates - but you can template off of any
  other config set, at any point in time.  Actually, that type of
  functionality seems generally useful regardless.
 
  -Yonik
 
 
  On Thu, May 21, 2015 at 8:07 PM, Gregory Chanan gcha...@cloudera.com
  wrote:
   I'm +1 on the general idea, but I'm not convinced about the
   mutable/immutable separation.
  
   Do we not think it is valid to modify a single config(set) that
 affects
   multiple collections?  I can imagine a case where my data with the
 same
   config(set) is partitioned into many different collections, whether by
   date,
   sorted order, etc. that all use the same underlying config(set).
 Let's
   say
   I have collections partitioned by month and I decide I want to add
   another
   field; I don't want to have to modify
   jan/schema
   feb/schema
   mar/schema
   etc.
  
   I just want to modify the single underlying config(set).  You can
   imagine
    having a configset API that lets me do that, so if I wanted to
 modify a
   single collection's config I would call:
   jan/schema
    but if I wanted to modify the underlying config(set) I would call:
   configset/month_partitioned_config
  
   My point is this: if the problem is that it is confusing to have
   configsets
   modified when you make collection-level calls, then we should fix that
   (I'm
   100% in agreement with that, btw).  You can fix that by having a
   configset
   and a per-collection diff; defining the configset as immutable doesn't
   solve
    the problem, only locks us into an implementation that doesn't support
   the
   use case above.  I'm not even saying we should implement a configset
   API,
   only that defining this as an immutable vs mutable implementation
 blocks
   us
   from doing that.
  
   TLDR: we should think about this as configset base vs per-collection
   diff,
   not as immutable base vs per-collection mutable.
  
   Thoughts?
   Greg
  
  
   On Tue, May 19, 2015 at 10:52 AM, Tomás Fernández Löbbe
   tomasflo...@gmail.com wrote:
  
   I created https://issues.apache.org/jira/browse/SOLR-7570
  
   On Fri, May 15, 2015 at 10:31 AM, Alan Woodward a...@flax.co.uk
   wrote:
  
   +1
  
   A nice way of doing it would be to make it part of the
   SolrResourceLoader
   interface.  The ZK resource loader could check in the
   collection-specific
   zknode first, and then under configs/, and we could 

Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Timothy Potter
Thank you everyone! It's an honor to be invited to the PMC and to work
with such a great community!

On Tue, May 26, 2015 at 11:43 AM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
 Congratulations Tim!

 On Tue, May 26, 2015 at 10:25 AM, Mark Miller markrmil...@gmail.com wrote:

 Congrats!

 - Mark


 On Tue, May 26, 2015 at 11:10 AM Steve Rowe sar...@gmail.com wrote:

 I'm pleased to announce that Timothy Potter has accepted the PMC’s
 invitation to join.

 Welcome Tim!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7595) Allow method chaining for all CollectionAdminRequest

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559574#comment-14559574
 ] 

ASF subversion and git services commented on SOLR-7595:
---

Commit 1681808 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1681808 ]

SOLR-7595: Allow method chaining for all CollectionAdminRequests in Solrj

 Allow method chaining for all CollectionAdminRequest
 

 Key: SOLR-7595
 URL: https://issues.apache.org/jira/browse/SOLR-7595
 Project: Solr
  Issue Type: Improvement
  Components: clients - java, SolrJ
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: Trunk, 5.3

 Attachments: SOLR-7595.patch


 Allow methods to be chained for all CollectionAdminRequests so that code like 
 the following can be written:
 {code}
 Create createCollectionRequest = new Create()
   .setCollectionName("testasynccollectioncreation")
   .setNumShards(1)
   .setConfigName("conf1")
   .setAsyncId("1001");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Erick Erickson
Welcome Tim!

On Tue, May 26, 2015 at 8:30 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 Congrats and welcome Tim!

 On Tue, May 26, 2015 at 8:40 PM, Steve Rowe sar...@gmail.com wrote:

 I'm pleased to announce that Timothy Potter has accepted the PMC’s
 invitation to join.

 Welcome Tim!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 Regards,
 Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7468) Kerberos authentication module

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559257#comment-14559257
 ] 

ASF subversion and git services commented on SOLR-7468:
---

Commit 1681778 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1681778 ]

SOLR-7468: Added an alt. test, change for client and server to use different 
principals, and explicit addition of name.rules for test

 Kerberos authentication module
 --

 Key: SOLR-7468
 URL: https://issues.apache.org/jira/browse/SOLR-7468
 Project: Solr
  Issue Type: New Feature
  Components: security
Reporter: Ishan Chattopadhyaya
Assignee: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7468-alt-test.patch, SOLR-7468-alt-test.patch, 
 SOLR-7468-alt-test.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch, 
 SOLR-7468.patch, SOLR-7468.patch, SOLR-7468.patch


 SOLR-7274 introduces a pluggable authentication framework. This issue 
 provides a Kerberos plugin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5955) Add config templates to SolrCloud.

2015-05-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559280#comment-14559280
 ] 

Yonik Seeley commented on SOLR-5955:


bq. we should have the concept of immutable ConfigSets, we don't need a 
separate concept of templates. Thoughts?

Yep.  Depending on how you look at it, the concepts could be rather orthogonal.
Any config-set can be a template for creating another config-set.  Any 
config-set can be marked as immutable.

 Add config templates to SolrCloud.
 --

 Key: SOLR-5955
 URL: https://issues.apache.org/jira/browse/SOLR-5955
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
 Attachments: SOLR-5955.patch


 You should be able to upload config sets to a templates location and then 
 specify a template as your starting config when creating new collections via 
 REST API. We can have a default template that we ship with.
 This will let you create collections from scratch via REST API, and then you 
 can use things like the schema REST API to customize the template config to 
 your needs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Alan Woodward
Congratulations and welcome!

Alan Woodward
www.flax.co.uk


On 26 May 2015, at 16:10, Steve Rowe wrote:

 I'm pleased to announce that Timothy Potter has accepted the PMC’s invitation 
 to join.
 
 Welcome Tim!
 
 Steve
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 



Re: why didn't the test timeout? -- was: Re: [jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-26 Thread Chris Hostetter

: The default timeout seems to be 7200000 millis, this means 7200
: seconds or ~120 minutes. Look for @TimeoutSuite annotation in the

thanks ... my bad -- i did look for TimeoutSuite in the test, but i 
thought the default was 60 minutes. (forgot to double check that 
assumption)

false alarm (on the timeout)


-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7594) TestSolr4Spatial2.testRptWithGeometryField failure

2015-05-26 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-7594:


 Summary: TestSolr4Spatial2.testRptWithGeometryField failure
 Key: SOLR-7594
 URL: https://issues.apache.org/jira/browse/SOLR-7594
 Project: Solr
  Issue Type: Bug
  Components: spatial
Affects Versions: Trunk, 5.2
Reporter: Steve Rowe


The seed fails for me on branch_5x and trunk:

{noformat}
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestSolr4Spatial2 
-Dtests.method=testRptWithGeometryField -Dtests.seed=3073201A99DE8699 
-Dtests.slow=true -Dtests.locale=be_BY -Dtests.timezone=America/Maceio 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.53s | TestSolr4Spatial2.testRptWithGeometryField 
   [junit4] Throwable #1: org.junit.ComparisonFailure: expected:[2] but 
was:[1]
   [junit4]at 
__randomizedtesting.SeedInfo.seed([3073201A99DE8699:166498ECA48FDFFB]:0)
   [junit4]at 
org.apache.solr.search.TestSolr4Spatial2.testRptWithGeometryField(TestSolr4Spatial2.java:140)
   [junit4]at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7592) Fix ant run-example after Jetty 9 upgrade.

2015-05-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559251#comment-14559251
 ] 

Shalin Shekhar Mangar commented on SOLR-7592:
-

Thanks for fixing Mark!

 Fix ant run-example after Jetty 9 upgrade.
 --

 Key: SOLR-7592
 URL: https://issues.apache.org/jira/browse/SOLR-7592
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.3


 Noticed this is not working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Dawid Weiss
Welcome Tim!
Dawid

On Tue, May 26, 2015 at 5:10 PM, Steve Rowe sar...@gmail.com wrote:
 I'm pleased to announce that Timothy Potter has accepted the PMC’s invitation 
 to join.

 Welcome Tim!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms

2015-05-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559294#comment-14559294
 ] 

Michael McCandless commented on LUCENE-6481:


I was curious about the polygon query performance, so I tweaked the "bboxes 
around London, UK" benchmark to just use a polygon query instead (with 5 
points, the first and last the same so the polygon is closed), and surprisingly 
the performance is only a bit slower than the bbox case (19.1 msec vs 17.8 
msec) ... I expected it to be much slower because in the polygon case we cannot 
use the prefix terms, I think?

 Improve GeoPointField type to only visit high precision boundary terms 
 ---

 Key: LUCENE-6481
 URL: https://issues.apache.org/jira/browse/LUCENE-6481
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Nicholas Knize
 Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, 
 LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481_WIP.patch


 Current GeoPointField [LUCENE-6450 | 
 https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges 
 along the space-filling curve that represent a provided bounding box.  This 
 determines which terms to visit in the terms dictionary and which to skip. 
 This is suboptimal for large bounding boxes as we may end up visiting all 
 terms (which could be quite large). 
 This incremental improvement is to improve GeoPointField to only visit high 
 precision terms in boundary ranges and use the postings list for ranges that 
 are completely within the target bounding box.
 A separate improvement is to switch over to auto-prefix and build an 
 Automaton representing the bounding box.  That can be tracked in a separate 
 issue.  
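 A rough sketch of the intended control flow (the names are illustrative, not 
 the patch itself):
 {code}
 // Hypothetical sketch: only boundary ranges are visited at full precision.
 for (Range range : rangesForBoundingBox(bbox)) {
   if (range.isWithin(bbox)) {
     // Range lies entirely inside the box: accept every doc in its postings
     // without decoding and checking individual high-precision terms.
     collectAllDocs(range);
   } else {
     // Boundary range: descend to higher-precision terms and test each
     // point against the box edges.
     visitHighPrecisionTerms(range, bbox);
   }
 }
 {code}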



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Timothy Potter to the PMC

2015-05-26 Thread Tommaso Teofili
Welcome Timothy!

Regards,
Tommaso

2015-05-26 17:10 GMT+02:00 Steve Rowe sar...@gmail.com:

 I'm pleased to announce that Timothy Potter has accepted the PMC’s
 invitation to join.

 Welcome Tim!

 Steve


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: why didn't the test timeout? -- was: Re: [jira] [Updated] (SOLR-7587) TestSpellCheckResponse stalled and never timed out -- possible VersionBucket bug? (5.2 branch)

2015-05-26 Thread Dawid Weiss
No problem at all. I think it might have been 60 minutes initially, but
nightly tests (or some bad combinations of components) didn't complete
within that limit. Perhaps it's time to revisit this, lower the worst
case, and add overrides where really applicable.
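
(For reference, an override at the suite level looks roughly like this, assuming
the randomizedtesting annotation and the test-framework TimeUnits constants; the
two-hour figure is only an example.)

{code}
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.TimeUnits;

// Example only: raise (or lower) the whole-suite timeout for one slow test class.
@TimeoutSuite(millis = 2 * TimeUnits.HOUR)
public class MySlowTest extends LuceneTestCase {
  // ...
}
{code}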

Dawid

On Tue, May 26, 2015 at 6:32 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:

 : The default timeout seems to be 7200000 millis, this means 7200
 : seconds or ~120 minutes. Look for @TimeoutSuite annotation in the

 thanks ... my bad -- i did look for TimeoutSuite in the test, but i
 thought the default was 60 minutes. (forgot to double check that
 assumption)

 false alarm (on the timeout)


 -Hoss
 http://www.lucidworks.com/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7559) Cannot use Faceting Feature while using/mlt handler

2015-05-26 Thread Jeroen Steggink (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeroen Steggink updated SOLR-7559:
--
Attachment: SOLR-7559.patch

Incorrect parameters were chosen

 Cannot use Faceting Feature while using/mlt handler
 ---

 Key: SOLR-7559
 URL: https://issues.apache.org/jira/browse/SOLR-7559
 Project: Solr
  Issue Type: Bug
  Components: MoreLikeThis
Affects Versions: 5.1
 Environment: windows 7 OS, Eclipse JDK 7
Reporter: Tim Hearn
 Attachments: SOLR-7559.patch

   Original Estimate: 10m
  Remaining Estimate: 10m

 When sending a query using the /mlt handler with faceting enabled, Solr 
 returns an NPE.  The exception is as follows:
 {quote}
 at
 org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1555)
 at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:284)
 at
 org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
 at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
 at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
 at
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
 at
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
 at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
 at
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at java.lang.Thread.run(Thread.java:745)
 {quote}
 The issue appears to start in the MoreLikeThisHandler.java class, line 233:
 {quote}
   228 if (params.getBool(FacetParams.FACET, false)) {
   229   if (mltDocs.docSet == null) {
   230 rsp.add("facet_counts", null);
   231   } else {
   232 SimpleFacets f = new SimpleFacets(req, mltDocs.docSet, params);
   233 rsp.add("facet_counts", f.getFacetCounts());
   234   }
   235 }
 {quote}
 When the constructor used above (line 232) is invoked, the SimpleFacets class 
 sets its ResponseBuilder field to null, which is what causes the NPE to be 
 thrown when getHeatmapCounts() is called from getFacetCounts():
 {quote}
   129   public SimpleFacets(SolrQueryRequest req,
   130   DocSet docs,
   131   SolrParams params) {
   132 this(req,docs,params,null);
   133   }
   134 
   135   public SimpleFacets(SolrQueryRequest req,
   136   DocSet docs,
   137   SolrParams params,
   138   ResponseBuilder rb) {
   139 this.req = req;
   140 this.searcher = req.getSearcher();
   141 this.docs = this.docsOrig = docs;
   142 this.params = orig = params;
   143 this.required = new 

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559780#comment-14559780
 ] 

ASF subversion and git services commented on SOLR-6273:
---

Commit 1681839 from [~erickoerickson] in branch 'dev/trunk'
[ https://svn.apache.org/r1681839 ]

SOLR-6273: Cross Data Center Replication: Fix at least one test, un-Ignore tests

 Cross Data Center Replication
 -

 Key: SOLR-6273
 URL: https://issues.apache.org/jira/browse/SOLR-6273
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Erick Erickson
 Attachments: SOLR-6273-trunk-testfix1.patch, SOLR-6273-trunk.patch, 
 SOLR-6273-trunk.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, 
 SOLR-6273.patch


 This is the master issue for Cross Data Center Replication (CDCR)
 described at a high level here: 
 http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559789#comment-14559789
 ] 

Simon Willnauer commented on LUCENE-6499:
-

Uwe learned a new API - I knew that day would come :)

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.3

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFS has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while it is concurrently deleted, 
 which should be prevented by WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close, which fails because we cannot read the file's key from the filesystem 
 once it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.
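 A minimal sketch of the failure mode (the field and method names are 
 hypothetical, not the actual WindowsFS code):
 {code}
 // Hypothetical illustration: handles are tracked by the file's filesystem key.
 private final Map<Object, Integer> openHandles = new HashMap<>();

 void onClose(Path path) throws IOException {
   // Re-reading the file key fails once the file has already been deleted,
   // so the handle recorded at open time is never removed from the map and
   // later operations on the same name keep reporting "access denied".
   Object key = Files.readAttributes(path, BasicFileAttributes.class).fileKey();
   openHandles.remove(key);
 }
 {code}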



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6499) WindowsFS misses to remove open file handle if file is concurrently deleted

2015-05-26 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-6499:

Fix Version/s: (was: 5.3)
   5.2

 WindowsFS misses to remove open file handle if file is concurrently deleted
 ---

 Key: LUCENE-6499
 URL: https://issues.apache.org/jira/browse/LUCENE-6499
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/test-framework
Affects Versions: 5.1
Reporter: Simon Willnauer
 Fix For: Trunk, 5.2

 Attachments: LUCENE-6499.patch, LUCENE-6499.patch


 WindowsFS has some race conditions when files are concurrently opened and 
 deleted. A file might be successfully opened while it is concurrently deleted, 
 which should be prevented by WindowsFS with an IOException / access denied. The 
 problem is that we try to remove the leaked file handle from the internal map 
 on close, which fails because we cannot read the file's key from the filesystem 
 once it has already been deleted. This manifests in subsequent `access denied` 
 exceptions even though all streams on the file are closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-05-26 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559792#comment-14559792
 ] 

Erick Erickson commented on SOLR-6273:
--

Apologies in advance if re-enabling all the tests generates noise. I couldn't 
get a failure on my box in 150 tries or so, so I'll have to pull logs from 
Jenkins if/when additional issues spring up.



 Cross Data Center Replication
 -

 Key: SOLR-6273
 URL: https://issues.apache.org/jira/browse/SOLR-6273
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Erick Erickson
 Attachments: SOLR-6273-trunk-testfix1.patch, SOLR-6273-trunk.patch, 
 SOLR-6273-trunk.patch, SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, 
 SOLR-6273.patch


 This is the master issue for Cross Data Center Replication (CDCR)
 described at a high level here: 
 http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


