[jira] [Commented] (LUCENE-6006) Replace FieldInfo.normsType with FieldInfo.hasNorms boolean

2014-10-13 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170580#comment-14170580
 ] 

Robert Muir commented on LUCENE-6006:
-

{quote}
 but we have it for the exceptions
case where every document in a segment hit an exception and never
added norms. I think this is the only reason it exists?
{quote}

Don't think so. The main trouble is actually sparse fields: add a doc with field 
A, then go delete that doc...
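
A minimal sketch of that sparse-field scenario (hypothetical field/id names; 
assumes the trunk-style single-argument {{IndexWriterConfig}} constructor):

{code}
// Sketch: "body" is the only field that indexes norms, and the only
// document carrying it gets deleted. FieldInfos still records "body",
// but no live document in the segment has norms for it.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.store.RAMDirectory;

public class SparseNormsSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    Document a = new Document();
    a.add(new StringField("id", "1", Field.Store.NO));
    a.add(new TextField("body", "only doc that indexes norms", Field.Store.NO));
    w.addDocument(a);

    Document b = new Document();
    b.add(new StringField("id", "2", Field.Store.NO)); // no "body" field at all
    w.addDocument(b);

    w.deleteDocuments(new Term("id", "1")); // the only doc with norms is gone
    w.commit();
    w.close();
    dir.close();
  }
}
{code}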

> Replace FieldInfo.normsType with FieldInfo.hasNorms boolean
> ---
>
> Key: LUCENE-6006
> URL: https://issues.apache.org/jira/browse/LUCENE-6006
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6006.patch
>
>
> I came across this precursor while working on LUCENE-6005:
> I think FieldInfo.normsType can only be null (field did not index
> norms) or DocValuesType.NUMERIC (it did).  I'd like to simplify to
> just boolean hasNorms.
> This is a strange boolean, though: in theory it should be derived from
> {{indexed && omitNorms == false}}, but we have it for the exceptions
> case where every document in a segment hit an exception and never
> added norms.  I think this is the only reason it exists?  (In theory,
> such cases should result in 100% deleted segments, which IW should
> then drop ... but seems dangerous to "rely" on that).
> So I changed the indexing chain to just fill in the default (0) norms
> for all documents in such exceptional cases; this way going forward
> (starting with 5.0 indices) we really don't need this hasNorms.  But
> we still need it for pre-5.0 indices...






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2170 - Still Failing

2014-10-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2170/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:24888/rx_ycd/p, http://127.0.0.1:30124/rx_ycd/p, 
http://127.0.0.1:17092/rx_ycd/p]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:24888/rx_ycd/p, 
http://127.0.0.1:30124/rx_ycd/p, http://127.0.0.1:17092/rx_ycd/p]
at 
__randomizedtesting.SeedInfo.seed([5B88729A8D4C135B:DA6EFC82FA137367]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:332)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:939)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:717)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:660)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_20) - Build # 11291 - Still Failing!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11291/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:54134, 
http://127.0.0.1:35893, http://127.0.0.1:49488]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:54134, http://127.0.0.1:35893, 
http://127.0.0.1:49488]
at 
__randomizedtesting.SeedInfo.seed([810927DD03EFE35D:EFA9C574B08361]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:332)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:939)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:717)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:660)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.u

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 656 - Still Failing

2014-10-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/656/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:13373/w/d/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:13373/w/d/collection1
at 
__randomizedtesting.SeedInfo.seed([7E0577D042F3955E:FFE3F9C835ACF562]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:583)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:144)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.rand

Re: Next Solr release (5.0)

2014-10-13 Thread Jack Krupansky
I suspect that what will happen is that the initial 5.0 of Solr will not 
have a lot of draw based on features per se. Solr 4.10.x will become like 
Solr 3.6.x - the stable, safe choice for another year or so. Then, 
incrementally, the 5.x releases will gradually accumulate new features and 
stability, while 4.10.x sits relatively stagnant, so that in a year or so 
some 5.x will reach a critical mass of enhancements and stability and become 
the default for new apps.


IOW, go ahead and release Solr 5.0 as soon as some committer has the time to 
be RM. No real harm even if it has any serious issues since anybody with an 
absolute requirement for stability will stay with 4.10.x anyway. Put another 
way, release 5.0 whenever we would have released 4.11.


Then... we can sit back and watch to see what Elasticsearch and Heliosearch 
do - will they leap immediately to 5.0, or stay with 4.10 for while, or 
support both?


In my own bailiwick, DataStax Enterprise, we're waiting for Solr 4.10.x to 
bake for another couple of months before we integrate it in our product, and 
will probably want to see Solr 5.x bake for a good six to nine months before 
we switch over to it. Those are just my own personal, tentative thoughts, no 
commitment from myself or my client intended.


In any case, I enthusiastically look forward to seeing Solr 5.0 come out the 
door before we know it!


-- Jack Krupansky

-Original Message- 
From: Erick Erickson

Sent: Monday, October 13, 2014 5:39 PM
To: dev@lucene.apache.org
Subject: Next Solr release (5.0)

Hmmm, I'm thinking we need to start laying the groundwork on the
user's list for this maybe? It's a somewhat unusual "major" release in
that there's nothing scary-new compared to 4.10.

In the past, e.g. the 3.x->4.0 release was something I'd have been
reluctant to put into production without a good long lead time to
test; there was a _lot_ of new ground there! And there are
"interesting" company policies about whether they allow major
releases. Completely arbitrary restrictions, but they exist.

Do note that I'm _not_ advocating that we change the release process
or anything like that. What I _am_ wondering is if we should start a
discussion on the user's list to give people some time to adjust. In
which case having a good summary would be useful.

Or maybe it's just as well to leave it and expect a bunch of "what
does it all mean" questions when we announce the next release.

FWIW,
Erick




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40-ea-b09) - Build # 4369 - Failure!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4369/
Java: 32bit/jdk1.8.0_40-ea-b09 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([5354EE42EFE9CDAE:D2B2605A98B6AD92]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:153)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakContro

[jira] [Commented] (SOLR-6549) bin/solr script should support a -s option to set the -Dsolr.solr.home property

2014-10-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170368#comment-14170368
 ] 

Shawn Heisey commented on SOLR-6549:


Paraphrasing some thoughts that I expressed on IRC:

{quote}
If the -s option is used, the argument could be tested.  If it's not a 
directory, a startup warning can be logged, or the script could even abort.  If 
solr.xml does not exist in the directory indicated, a warning could be printed. 
 A lack of solr.xml should not abort, though, because the user might have solr.xml 
in zookeeper, or they might be intentionally trying to start in single-core mode.  
(Related: single-core mode probably should be nuked from 5.0, if it's not 
already)

Perhaps a -q option could be implemented to suppress startup warnings.  If the 
script sources /etc/default/solr (or some other filename), options could be 
picked up from that file.
{quote}

I see that after I said these things and started transferring my thoughts here, 
[~thelabdude] replied on IRC, saying that the script already does check for the 
existence of both the solr home and solr.xml, then disappeared.  I admit that I 
did not look at the script before speaking.  Have to stop doing that!  I think 
the existing checks can be improved, will think about it.


> bin/solr script should support a -s option to set the -Dsolr.solr.home 
> property
> ---
>
> Key: SOLR-6549
> URL: https://issues.apache.org/jira/browse/SOLR-6549
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
>
> The bin/solr script supports a -d parameter for specifying the directory 
> containing the webapp, resources, etc, lib ... In most cases, these binaries 
> are reusable (and will eventually be in a server directory SOLR-3619) even if 
> you want to have multiple solr.solr.home directories on the same server. In 
> other words, it is more common/better to do:
> {code}
> bin/solr start -d server -s home1
> bin/solr start -d server -s home2
> {code}
> than to do:
> {code}
> bin/solr start -d server1
> bin/solr start -d server2
> {code}
> Basically, the start script needs to support a -s option that allows you to 
> share binaries but have different Solr home directories for running multiple 
> Solr instances on the same host.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11290 - Failure!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11290/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:56743, 
https://127.0.0.1:51951, https://127.0.0.1:52240]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:56743, https://127.0.0.1:51951, 
https://127.0.0.1:52240]
at 
__randomizedtesting.SeedInfo.seed([26ACECF63858763D:A74A62EE4F071601]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:332)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:939)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:717)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:660)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(Te

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 648 - Still Failing

2014-10-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/648/

All tests passed

Build Log:
[...truncated 55265 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:531:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:79:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build.xml:568:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1905:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1939:
 Compile failed; see the compiler error output for details.

Total time: 245 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-NightlyTests-5.x #646
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 25 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1843 - Failure!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1843/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 54530 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:524: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:79: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build.xml:568: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:1905: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:1939: 
Compile failed; see the compiler error output for details.

Total time: 226 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Shawn Heisey
On 10/13/2014 5:17 PM, Alexandre Rafalovitch wrote:
> Oh, and I am not looking for the upgrade reasons for myself. I am
> looking for the reasons we can give others when they ask these
> questions. A popularizer rather than pure developer's approach here (I
> am both), but one I feel also contributes to Solr community. Which
> fact, I appreciate, does not make these questions any less annoying. I
> just feel that discussing them earlier once is more efficient than
> reinventing the reason/terminology every time a newbie asks the
> question later (such as with Core vs. Collection vs. Index debate).

Something that I think might be very useful is a "Why Should I Upgrade?"
wiki page.  I don't know if that could make it into the reference guide,
it might just be something for the MoinMoin wiki.  Each time a new minor
release comes out, a new paragraph for the specific release can be added
to that wiki page, and for some changes, the overview section for the
entire major version can be updated.

Each major version could have a separate wiki page (Why Should I Upgrade
to 5.x?), or we could attempt to put all the info on one page.

We'd have to be careful to write this info for executives, not the tech
guys.  If the CEO of ABC UnElectric Widgets can't understand the gist of
it, I think we've missed our target audience.

http://www.portal2sounds.com/164

Thanks,
Shawn





Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Alexandre Rafalovitch
On 13 October 2014 15:27, Shai Erera  wrote:
> Point is (at least on my part) - *our* (this community's) major releases are
> mostly about index backwards compatibility support.

This is really interesting. I've read a lot of material on Solr and
some on Lucene and I never saw the releases explained this way.

> If you're looking for a justification to upgrade to Solr 5.0, then I will try 
> this:
> "because the Lucene/Solr community has decided to release version 5.0, which 
> improves code stability, gets rid of
> back-compat baggage which resulted in bugs, index corruptions and prevented 
> code improvements and enhancements; and
> because Solr and Lucene are released together - this is mainly a release that 
> will get your Solr on par w/ the latest Lucene
> binaries, and will guarantee that your indexes will get supported by next 
> Lucene/Solr major version - 6.0; As usual, Lucene
> and Solr will continue to release enhancements and cool features throughout 
> the dot releases".

Ok, this might be a great explanation for the Lucene community. They
are all developers. Solr community has a significant proportion of
less technical people (or technical in completely different fields),
so let me see if I can rephrase this into more impact-oriented
language.

Would it be fair to say:
"While Lucene/Solr version 4 releases were able to introduce many new
features (such as X, Y, and Z), they were at the same time being held
back by the legacy limitations of the Lucene 3 support. Starting from
Lucene/Solr 5, that compatibility has been removed, which opens the door
to easier (faster?) development of new features (such as Promise
A, JIRA B, etc)".

Oh, and I am not looking for the upgrade reasons for myself. I am
looking for the reasons we can give others when they ask these
questions. A popularizer rather than pure developer's approach here (I
am both), but one I feel also contributes to Solr community. Which
fact, I appreciate, does not make these questions any less annoying. I
just feel that discussing them earlier once is more efficient than
reinventing the reason/terminology every time a newbie asks the
question later (such as with Core vs. Collection vs. Index debate).

Regards,
   Alex.

Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853




[jira] [Resolved] (SOLR-6540) strdist() causes NPE if doc is missing field

2014-10-13 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6540.

   Resolution: Fixed
Fix Version/s: Trunk
   5.0

> strdist() causes NPE if doc is missing field
> 
>
> Key: SOLR-6540
> URL: https://issues.apache.org/jira/browse/SOLR-6540
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6540.patch
>
>
> If you try to use the strdist function on a field which is missing in some 
> docs, you'll get a NullPointerException
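
The shape of the fix, sketched under assumptions ({{FunctionValues}} and 
{{StringDistance}} are the real Lucene types, but the fallback value is a guess; 
see SOLR-6540.patch for the committed change):

{code}
// Hedged sketch only. strVal() returns null for a doc with no value in the
// field, and handing that null to the StringDistance is what throws; guard
// both lookups before computing the distance.
float strdist(int doc, FunctionValues a, FunctionValues b, StringDistance dist) {
  String s1 = a.strVal(doc);
  String s2 = b.strVal(doc);
  if (s1 == null || s2 == null) {
    return 0.0f; // assumed fallback for a missing value; the patch may differ
  }
  return dist.getDistance(s1, s2);
}
{code}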






[jira] [Commented] (SOLR-6540) strdist() causes NPE if doc is missing field

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170136#comment-14170136
 ] 

ASF subversion and git services commented on SOLR-6540:
---

Commit 1631592 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631592 ]

SOLR-6540 Fix NPE from strdist() func when doc value source does not exist in a 
doc (merge r1631555)

> strdist() causes NPE if doc is missing field
> 
>
> Key: SOLR-6540
> URL: https://issues.apache.org/jira/browse/SOLR-6540
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-6540.patch
>
>
> If you try to use the strdist function on a field which is missing in some 
> docs, you'll get a NullPointerException






[jira] [Commented] (SOLR-6621) SolrZkClient does not guarantee that a watch object will only be triggered once for a given notification

2014-10-13 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170130#comment-14170130
 ] 

Ramkumar Aiyengar commented on SOLR-6621:
-

Most places do not need this ordering and I guess the current approach is 
simpler for that case. One alternative could be to expose wrapWatcher for use 
before exists or getData is called, and use a marker interface to ensure that 
when a wrapped watcher is passed as an argument, it is not re-wrapped.
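
Roughly, that marker-interface idea could look like this (all names hypothetical; 
{{Watcher}}/{{WatchedEvent}} are the org.apache.zookeeper types and 
{{zkCallbackExecutor}} is the executor SolrZkClient already submits callbacks to):

{code}
// Hypothetical sketch, not a patch: a watcher that has already been
// wrapped is tagged with a marker interface, so passing it back into
// exists()/getData() does not wrap it a second time, preserving the
// once-per-notification guarantee.
interface PreWrappedWatcher extends Watcher {}

public Watcher wrapWatcher(final Watcher watcher) {
  if (watcher == null || watcher instanceof PreWrappedWatcher) {
    return watcher; // nothing to wrap, or already wrapped: reuse as-is
  }
  return new PreWrappedWatcher() {
    @Override
    public void process(final WatchedEvent event) {
      zkCallbackExecutor.submit(new Runnable() {
        @Override
        public void run() {
          watcher.process(event); // deliver on the callback executor
        }
      });
    }
  };
}
{code}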

> SolrZkClient does not guarantee that a watch object will only be triggered 
> once for a given notification
> 
>
> Key: SOLR-6621
> URL: https://issues.apache.org/jira/browse/SOLR-6621
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: Trunk
>Reporter: Renaud Delbru
>
> The SolrZkClient provides methods such as getData or exists. The problem is 
> that the client automatically wraps the provided watcher with a new watcher 
> (see 
> [here|https://github.com/apache/lucene-solr/blob/6ead83a6fafbdd6c444e2a837b09eccf34a255ef/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java#L255])
>  which breaks the guarantee that "a watch object, or function/context pair, 
> will only be triggered once for a given notification". This creates 
> undesirable effects when we are registering the same watch is the Watcher 
> callback method.
> A possible solution would be to introduce a SolrZkWatcher class, that will 
> take care of submitting the job to the zkCallbackExecutor. Components in 
> SolrCloud will extend this class and implement their own callback method. 
> This will ensure that the watcher object that zookeeper receives remains the 
> same.
> See SOLR-6462 for background information.






[jira] [Updated] (LUCENE-6006) Replace FieldInfo.normsType with FieldInfo.hasNorms boolean

2014-10-13 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6006:
---
Attachment: LUCENE-6006.patch

Patch, I think it's ready.  I think it means that once I commit to 5.x, then 
for trunk I can remove this hasNorms since 5.x indices will never have 
hasNorms=false when indexed=true and omitNorms=false.

> Replace FieldInfo.normsType with FieldInfo.hasNorms boolean
> ---
>
> Key: LUCENE-6006
> URL: https://issues.apache.org/jira/browse/LUCENE-6006
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6006.patch
>
>
> I came across this precursor while working on LUCENE-6005:
> I think FieldInfo.normsType can only be null (field did not index
> norms) or DocValuesType.NUMERIC (it did).  I'd like to simplify to
> just boolean hasNorms.
> This is a strange boolean, though: in theory it should be derived from
> {{indexed && omitNorms == false}}, but we have it for the exceptions
> case where every document in a segment hit an exception and never
> added norms.  I think this is the only reason it exists?  (In theory,
> such cases should result in 100% deleted segments, which IW should
> then drop ... but seems dangerous to "rely" on that).
> So I changed the indexing chain to just fill in the default (0) norms
> for all documents in such exceptional cases; this way going forward
> (starting with 5.0 indices) we really don't need this hasNorms.  But
> we still need it for pre-5.0 indices...






[jira] [Created] (LUCENE-6006) Replace FieldInfo.normsType with FieldInfo.hasNorms boolean

2014-10-13 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6006:
--

 Summary: Replace FieldInfo.normsType with FieldInfo.hasNorms 
boolean
 Key: LUCENE-6006
 URL: https://issues.apache.org/jira/browse/LUCENE-6006
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk


I came across this precursor while working on LUCENE-6005:

I think FieldInfo.normsType can only be null (field did not index
norms) or DocValuesType.NUMERIC (it did).  I'd like to simplify to
just boolean hasNorms.

This is a strange boolean, though: in theory it should be derived from
{{indexed && omitNorms == false}}, but we have it for the exceptions
case where every document in a segment hit an exception and never
added norms.  I think this is the only reason it exists?  (In theory,
such cases should result in 100% deleted segments, which IW should
then drop ... but seems dangerous to "rely" on that).

So I changed the indexing chain to just fill in the default (0) norms
for all documents in such exceptional cases; this way going forward
(starting with 5.0 indices) we really don't need this hasNorms.  But
we still need it for pre-5.0 indices...
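
Once the indexing chain always fills in default norms, the flag becomes 
derivable for 5.0+ segments; a hedged sketch of the invariant (accessor names 
as in the current {{FieldInfo}} API, not the actual patch):

{code}
// Sketch of the simplification, not the committed change: for 5.0+ segments
// this derivation holds, because default (0) norms are now written even when
// every document hit an exception. Pre-5.0 segments still need the stored
// flag, since a fully-failed segment may be indexed && !omitNorms yet have
// written no norms at all.
boolean hasNorms(FieldInfo fi) {
  return fi.isIndexed() && fi.omitsNorms() == false;
}
{code}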







[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-10-13 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170066#comment-14170066
 ] 

Shawn Heisey commented on SOLR-3619:


I am resistant (but not completely opposed) to always running in cloud mode, 
even for a simple single-server install.  This is my position regardless of 
whether ZK is embedded or external.  I realize that it is probably where Solr 
is headed and that I shouldn't fight it, but I already see a lot of confusion 
from users who want to edit their config on disk and restart, just like they do 
for any other program they might download and use.

For that simple single-server install, an embedded ZK makes a lot of sense, but 
we do need to have documentation that guides a user towards a fully external 3 
node ensemble once they get beyond a proof of concept.  I also think we should 
recommend a ZK chroot, but that may be a discussion much further down the road.

It's a good idea to have a single "create collection" command, whether the 
actual thing that will be created is a collection or a core.  A prominent FAQ 
item (or perhaps an entire dedicated wiki page) needs to discuss the finer 
points of our terminology -- collection, shard, replica, core ... but I think 
the user needs to be initially directed to always think about a "collection" 
... understanding the makeup below that is important, but shouldn't muddy the 
user's first contact with Solr.

Has anyone given any thought to deprecating the CoreAdmin API and extending the 
Collections API to handle everything it currently does, by using a parameter to 
tell it that the operation is at the core level?


> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>







Re: How to set proxy to build lucene-solr behind company firewall

2014-10-13 Thread Chris Hostetter

: But, when building solr, it failed since dependencies could not be resolved:
: cd solr
: ant dist
: ...BUILD FAILED
: /scratch/sxzhu/git_storage/lucene-solr/solr/common-build.xml:398: The 
following error occurred while executing this line:
: /scratch/sxzhu/git_storage/lucene-solr/lucene/module-build.xml:284: The 
following error occurred while executing this line:
: /scratch/sxzhu/git_storage/lucene-solr/lucene/common-build.xml:434: 
impossible to resolve dependencies:
:     resolve failed - see output for details
: I believe this is due to the proxy setting, which causes downloading external 
dependencies to fail.

... you pruned too much from the output you emailed us -- what does your 
console log say just before the "BUILD FAILED" part? ... that info would 
be helpful to verify what exactly the problem is.

In general, when having trouble with ant, the "-verbose" and "-debug" 
options are helpful for getting more details.

: I did try something like:
: set ANT_OPTS="-Dproxy.specified=true -Dproxy.host= 
-Dproxy.port="
: but still not effective.

where did you see any docs suggesting that those sysprops would be 
helpful?

I don't use a proxy, but the ivy/ant docs I've seen indicate that the 
correct sysprops to use are "http.proxyHost" and "http.proxyPort" ...

http://ant.apache.org/ivy/faq.html

-Hoss
http://www.lucidworks.com/


Next Solr release (5.0)

2014-10-13 Thread Erick Erickson
Hmmm, I'm thinking we need to start laying the groundwork on the
user's list for this maybe? It's a somewhat unusual "major" release in
that there's nothing scary-new compared to 4.10.

In the past, e.g. the 3.x->4.0 release was something I'd have been
reluctant to put into production without a good long lead time to
test; there was a _lot_ of new ground there! And there are
"interesting" company policies about whether they allow major
releases. Completely arbitrary restrictions, but they exist.

Do note that I'm _not_ advocating that we change the release process
or anything like that. What I _am_ wondering is if we should start a
discussion on the user's list to give people some time to adjust. In
which case having a good summary would be useful.

Or maybe it's just as well to leave it and expect a bunch of "what
does it all mean" questions when we announce the next release.

FWIW,
Erick




[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170017#comment-14170017
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1631559 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631559 ]

SOLR-6485: remove useless javadocs that are broken and violate precommit (merge 
r1631554)

> ReplicationHandler should have an option to throttle the speed of replication
> -
>
> Key: SOLR-6485
> URL: https://issues.apache.org/jira/browse/SOLR-6485
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
> SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch
>
>
> The ReplicationHandler should have an option to throttle the speed of 
> replication.
> It is useful for people who want bring up nodes in their SolrCloud cluster or 
> when have a backup-restore API and not eat up all their network bandwidth 
> while replicating.
> I am writing a test case and will attach a patch shortly.
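
The core mechanic can be sketched with Lucene's existing 
{{org.apache.lucene.store.RateLimiter}} (a sketch under assumptions, not the 
attached patch; {{maxWriteMBPerSec}}, {{in}}, and {{out}} are hypothetical names 
for the configured limit and the streams of the file being replicated):

{code}
// Sketch only -- the real option lives in the attached patches. Capping
// replication bandwidth amounts to pausing the copy loop whenever it runs
// ahead of the configured MB/sec budget.
RateLimiter limiter = new RateLimiter.SimpleRateLimiter(maxWriteMBPerSec);
byte[] buf = new byte[64 * 1024];
int numRead;
while ((numRead = in.read(buf)) != -1) {
  out.write(buf, 0, numRead); // copy a chunk of the replicated file
  limiter.pause(numRead);     // sleep as needed to stay under the cap
}
{code}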






[jira] [Commented] (SOLR-6540) strdist() causes NPE if doc is missing field

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1416#comment-1416
 ] 

ASF subversion and git services commented on SOLR-6540:
---

Commit 1631555 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1631555 ]

SOLR-6540 Fix NPE from strdist() func when doc value source does not exist in a 
doc

> strdist() causes NPE if doc is missing field
> 
>
> Key: SOLR-6540
> URL: https://issues.apache.org/jira/browse/SOLR-6540
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-6540.patch
>
>
> If you try to use the strdist function on a field which is missing in some 
> docs, you'll get a NullPointerException






[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169992#comment-14169992
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1631554 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1631554 ]

SOLR-6485: remove useless javadocs that are broken and violate precommit

> ReplicationHandler should have an option to throttle the speed of replication
> -
>
> Key: SOLR-6485
> URL: https://issues.apache.org/jira/browse/SOLR-6485
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
> SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch
>
>
> The ReplicationHandler should have an option to throttle the speed of 
> replication.
> It is useful for people who want bring up nodes in their SolrCloud cluster or 
> when have a backup-restore API and not eat up all their network bandwidth 
> while replicating.
> I am writing a test case and will attach a patch shortly.






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-10-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169986#comment-14169986
 ] 

Yonik Seeley commented on SOLR-3619:


bq. chatting with Hoss and Steve about this they suggested just using 
create_collection and if Solr is running in standalone mode, the Collections 
API just does the right thing and creates the core (vs. erroring out). Anyone 
have any thoughts about this?

That seems reasonable.

bq.  creating a core under the hood when running in standalone mode would 
confuse users as none of the other ‘Collection’ API calls would work though the 
collection creation would succeed

That's why all of the applicable APIs should be made to work.
The other path is to either scale down better by using embedded ZK, or by 
emulating ZK using the filesystem.

I have reservations about launching external ZK processes by default... it 
feels like it could really complicate the new user experience.  (so many 
gotchas in there... another moving part, what happens if ZK comes up but solr 
doesn't, killing Solr is no longer so simple, getting rid of unwanted state is 
no longer simple, etc).



> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1884 - Failure!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1884/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.AliasIntegrationTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([58135ECD3C6ADA99:D9F5D0D54B35BAA5]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:795)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:660)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
at 
org.apache.solr.cloud.AliasIntegrationTest.doTest(AliasIntegrationTest.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lu

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-10-13 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169895#comment-14169895
 ] 

Timothy Potter commented on SOLR-3619:
--

Interesting discussion related to this ticket on the #solr IRC channel today. 
In a nutshell, we're thinking the bin/solr scripts would default to starting 
the Solr server in cloud mode, so the next step after a new user starts the 
server is: bin/solr create_collection -name foo -shards 4 -replicas 2 
-configset schemaless (as an example) ... this would be for 5.x. If a user 
doesn't want cloud mode, they can start a standalone server but will be on 
their own for creating cores using the core admin API or the Web UI.
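
For the curious, a create_collection command like the one above boils down to a Collections API CREATE call; a minimal sketch, assuming only the standard CREATE parameters (the class and method names here are illustrative, not part of any patch):

{code}
// Hypothetical sketch: the Collections API CREATE request behind a
// "bin/solr create_collection -name foo -shards 4 -replicas 2" style command.
public class CreateCollectionUrl {
    public static String build(String baseUrl, String name, int numShards,
                               int replicationFactor, String configSet) {
        return baseUrl + "/admin/collections?action=CREATE"
                + "&name=" + name
                + "&numShards=" + numShards
                + "&replicationFactor=" + replicationFactor
                + "&collection.configName=" + configSet;
    }

    public static void main(String[] args) {
        // http://localhost:8983/solr/admin/collections?action=CREATE&name=foo...
        System.out.println(build("http://localhost:8983/solr", "foo", 4, 2, "schemaless"));
    }
}
{code}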

We also discussed starting ZooKeeper in a separate process instead of embedded 
but that can be tackled by another ticket.

Here's a record of that discussion:

[11:28am] timpotter: Hi - so as part of SOLR-3619, I’d like to add a command to 
the bin/solr scripts to create a new core or collection (depending on which 
mode Solr is running in cloud or standalone). I think it over complicates 
things to have two commands (create_core AND create_collection) so my first 
thought was to have some overloaded command like create_index but that 
introduces yet another term in our nomenclature that will only confuse new 
users even more … chatting with Hoss and Steve about this they suggested just 
using create_collection and if Solr is running in standalone mode, the 
Collections API just does the right thing and creates the core (vs. erroring 
out). Anyone have any thoughts about this?
[11:28am] timpotter: the idea here is that you’d fire up solr using bin/solr 
start and then either A) go to the Web UI or  can just create a new collection 
right there on the command-line
[11:28am] timpotter: (emoticon unintentional, should be B).
[11:30am] erikhatcher: timpotter: maybe it could default (or vice versa) to 
falling back to creating a core in standalone mode, and have a command-line 
switch that just errors after the collection creation fails 
(-force-solrcloud-mode ?)
[11:32am] timpotter: erikhatcher: It could do that … I think Hoss’ point is 
that the Collections API should just work in standalone mode but maybe that’s 
opening a big can of worms …
[11:32am] anshum: I’d say it’d be better to error out and only support 
collection creation in cloud mode… creating a core under the hood when running 
in standalone mode would confuse users as none of the other ‘Collection’ API 
calls would work though the collection creation would succeed
[11:38am] timpotter: anshum - that’s a good point … so how to tell users to 
create a core in standalone mode … most important in my mind is easy to get 
started
[11:39am] anshum: we should default to solrcloud mode.. i.e. a zk comes up 
behind the scenes…, have defaults…  and bring things up
[11:39am] anshum: leave the core creation to advanced users only
[11:39am] anshum: we really wouldn’t want someone who’s getting started to know 
about ‘cores’ .. but that’s my thought..
[11:40am] timpotter: correct - which is why I liked create_collection but 
there’s no requirement to start Solr in cloud mode
[11:40am] timpotter: so if I start in standalone mode, what do I do next?
[11:43am] anshum: timpotter: that’s the thing. cloud = zk enabled. If the start 
scripts starts solr with embedded zk by default, there’s no standalone mode. If 
by standalone mode you mean 1 node, that could still be a solrcloud setup with 
1 node, embedded zk..
[11:45am] sarowe: anshum: i don't think we should ever use embedded zk, it's a 
bad practice
[11:45am] hoss: anshum: i would go even farther: no embedded zk, the start 
script with --example launches a distinct zk process  but i think 
ultimately tim is worried about incremental progress
[11:45am] timpotter: anshum - yes the script can do that (ie. make cloud mode 
the default)
[11:46am] anshum: sarowe/hoss: sure… that’s even better.. I don’t think the 
user needs to familiarize with standalone vs cloud .. and I’m always +1 on 
non-embedded zk..
[11:46am] timpotter: is that what we want to do in 5.x? ie. download -> start 
solr (in cloud mode) -> create_collection? If so, that’s doable
[11:47am] anshum: timpotter: I would like that.
[11:48am] sarowe: timpotter: but that leaves your original question unanswered: 
what to do in (non-default) standalone mode?  should create_collection create a 
core?
[11:48am] timpotter: sarowe - sounds like just error out and the user is left 
to their own devices (curl to core API or Web UI)
[11:49am] anshum: sarowe: there is no standalone mode (non-cloud)… is what I’m 
saying… start script starts solrcloud in 5x ?
[11:49am] timpotter: which seems fine if non-cloud mode is not the default 
anymore
[11:49am] timpotter: anshum - correct - but people still do master/slave all 
the time so there has to be a way to not do cloud
[11:50am] sarowe: 

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4920 - Failure

2014-10-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4920/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=3571, name=coreLoadExecutor-1290-thread-1, state=RUNNABLE, 
group=TGRP-TestReplicationHandler], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=3571, 
name=coreLoadExecutor-1290-thread-1, state=RUNNABLE, 
group=TGRP-TestReplicationHandler], registration stack trace below.
at __randomizedtesting.SeedInfo.seed([EC9A93101F75C9C3]:0)
at java.lang.Thread.getStackTrace(Thread.java:1589)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:166)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:727)
at 
org.apache.lucene.util.LuceneTestCase.wrapDirectory(LuceneTestCase.java:1313)
at 
org.apache.lucene.util.LuceneTestCase.newDirectory(LuceneTestCase.java:1204)
at 
org.apache.lucene.util.LuceneTestCase.newDirectory(LuceneTestCase.java:1196)
at 
org.apache.solr.core.MockDirectoryFactory.create(MockDirectoryFactory.java:47)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:350)
at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:276)
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:488)
at org.apache.solr.core.SolrCore.(SolrCore.java:794)
at org.apache.solr.core.SolrCore.(SolrCore.java:652)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:509)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:273)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:267)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError: Directory not closed: 
MockDirectoryWrapper(NIOFSDirectory@/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler-EC9A93101F75C9C3-001/index-NIOFSDirectory-030
 
lockFactory=NativeFSLockFactory@/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler-EC9A93101F75C9C3-001/index-NIOFSDirectory-030)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.util.CloseableDirectory.close(CloseableDirectory.java:47)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:699)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2$1.apply(RandomizedRunner.java:696)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeResources(RandomizedContext.java:183)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.afterAlways(RandomizedRunner.java:712)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
... 1 more


REGRESSION:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete

Error Message:
searcher529 wasn't soon enough after soft529: 1413227279582 !< 1413227279375 + 
150 (fudge)

Stack Trace:
java.lang.AssertionError: searcher529 wasn't soon enough after soft529: 
1413227279582 !< 1413227279375 + 150 (fudge)
at 
__randomizedtesting.SeedInfo.seed([EC9A93101F75C9C3:2BD62B8D04DD0473]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeDelete(SoftAutoCommitTest.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesR

[jira] [Resolved] (SOLR-6157) ReplicationFactorTest hangs

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6157.
--
   Resolution: Fixed
Fix Version/s: (was: 4.10)
   5.0

looks to be resolved in branch_5x and trunk

> ReplicationFactorTest hangs
> ---
>
> Key: SOLR-6157
> URL: https://issues.apache.org/jira/browse/SOLR-6157
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Uwe Schindler
>Assignee: Timothy Potter
> Fix For: 5.0
>
>
> See: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10517/
> You can download all logs from there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6529) Stop command in the start scripts should only stop the instance that it had started

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6529:
-
Fix Version/s: 4.10.2

> Stop command in the start scripts should only stop the instance that it had 
> started
> ---
>
> Key: SOLR-6529
> URL: https://issues.apache.org/jira/browse/SOLR-6529
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
>
> Currently the stop command looks for all running Solr instances and stops 
> those. I feel this is a bit dangerous.
> When starting a process with the start command we could write out a pid file 
> of the solr process that it started. Then the stop script should stop that 
> process.
> It could error out if the pid file is not present.
> We could still keep the feature of stopping all solr nodes by passing 
> -all ?
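
As a rough illustration of the pid-file mechanism described above (the real change lives in the bin/solr shell script; this Java rendering, with a hypothetical file name, only shows the shape of it):

{code}
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: start records its own pid; stop signals only that pid.
public class PidFile {
    public static void writePid(Path pidFile) throws IOException {
        // On HotSpot the runtime name is conventionally "pid@hostname".
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        Files.write(pidFile, pid.getBytes(StandardCharsets.UTF_8));
    }

    public static long readPid(Path pidFile) throws IOException {
        return Long.parseLong(
                new String(Files.readAllBytes(pidFile), StandardCharsets.UTF_8).trim());
    }

    public static void main(String[] args) throws IOException {
        Path p = Paths.get("solr-8983.pid"); // hypothetical per-port pid file
        writePid(p);
        System.out.println("stop would signal only pid " + readPid(p));
    }
}
{code}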



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6592) Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit test when leader is gone

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6592.
--
   Resolution: Fixed
Fix Version/s: 5.0
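
The quoted description below spells out the failure mode; as a sketch of the bounded retry it calls for - give up as soon as the leader is known to be gone rather than burning through every timeout - with hypothetical interfaces standing in for the real ZkController collaborators:

{code}
// Hypothetical sketch (not the SOLR-6592 patch): a retry loop that exits
// early when the leader is gone instead of taking up to 12 minutes to fail.
public class BoundedRetry {
    interface LeaderCheck { boolean leaderIsLive(); }
    interface DownStateCheck { boolean leaderSeesDownState() throws Exception; }

    static boolean waitForLeaderToSeeDownState(LeaderCheck leader, DownStateCheck check,
                                               int maxRetries, long retrySleepMs)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (!leader.leaderIsLive()) {
                return false; // leader gone: retrying can never succeed
            }
            try {
                if (check.leaderSeesDownState()) {
                    return true;
                }
            } catch (Exception e) {
                // transient failure; a real implementation would log and fall through
            }
            Thread.sleep(retrySleepMs);
        }
        return false;
    }
}
{code}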

> Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit 
> test when leader is gone
> --
>
> Key: SOLR-6592
> URL: https://issues.apache.org/jira/browse/SOLR-6592
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6592.patch
>
>
> HttpPartitionTest is failing due to a ThreadLeakError, which I believe is 
> because the re-try loop in ZkController.waitForLeaderToSeeDownState is coded 
> to take upwards of 12 minutes to fail (2 minutes socket timeout, 6 max 
> retries). The code should be improved to stop trying if the leader is gone, 
> which seems to be the case here (maybe). At the very least, need to figure 
> out how to avoid this ThreadLeakError.
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
> Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
> Error Message:
> 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:  
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest] at 
> java.net.SocketInputStream.socketRead0(Native Method) at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
> java.net.SocketInputStream.read(SocketInputStream.java:170) at 
> java.net.SocketInputStream.read(SocketInputStream.java:141) at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
>  at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
>  at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>  at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
>  at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>  at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>  at 
> org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
>  at 
> org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
>  at 
> org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
> at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
>  at 
> org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest]
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> at java.net.SocketInputStream.read(Sock

[jira] [Updated] (SOLR-6549) bin/solr script should support a -s option to set the -Dsolr.solr.home property

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6549:
-
Fix Version/s: 4.10.2

> bin/solr script should support a -s option to set the -Dsolr.solr.home 
> property
> ---
>
> Key: SOLR-6549
> URL: https://issues.apache.org/jira/browse/SOLR-6549
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
>
> The bin/solr script supports a -d parameter for specifying the directory 
> containing the webapp, resources, etc, lib ... In most cases, these binaries 
> are reusable (and will eventually be in a server directory SOLR-3619) even if 
> you want to have multiple solr.solr.home directories on the same server. In 
> other words, it is more common/better to do:
> {code}
> bin/solr start -d server -s home1
> bin/solr start -d server -s home2
> {code}
> than to do:
> {code}
> bin/solr start -d server1
> bin/solr start -d server2
> {code}
> Basically, the start script needs to support a -s option that allows you to 
> share binaries but have different Solr home directories for running multiple 
> Solr instances on the same host.
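
As an illustration of what -s ultimately controls: a minimal sketch of resolving the Solr home from the solr.solr.home system property. The fallback location below is an assumption made for the sketch, not the script's documented default.

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: how a launcher might resolve the home directory that
// "bin/solr start -s home1" sets via -Dsolr.solr.home.
public class SolrHomeResolver {
    public static Path resolveSolrHome(Path serverDir) {
        String prop = System.getProperty("solr.solr.home");
        if (prop != null && !prop.isEmpty()) {
            return Paths.get(prop).toAbsolutePath();
        }
        return serverDir.resolve("solr").toAbsolutePath(); // assumed fallback
    }

    public static void main(String[] args) {
        System.out.println(resolveSolrHome(Paths.get("server")));
    }
}
{code}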



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6486) solr start script can have a debug flag option

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6486:
-
Fix Version/s: 4.10.2

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6486.patch
>
>
> normally I would add this line to my java -jar start.jar
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> the script should take the whole string or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true) or if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983) 
> use it verbatim
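
A small sketch of the solrPort+1 convention proposed above (class and method names are illustrative):

{code}
// Hypothetical sketch: derive the JDWP agent argument from the Solr port,
// defaulting the debug port to solrPort + 1 as the issue suggests.
public class DebugArgBuilder {
    public static String jdwpArg(int solrPort) {
        int debugPort = solrPort + 1;
        return "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=" + debugPort;
    }

    public static void main(String[] args) {
        // -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8984
        System.out.println(jdwpArg(8983));
    }
}
{code}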



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6509:
-
Fix Version/s: 4.10.2

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}
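
A minimal sketch of the intended fix: thread the user-supplied -z through to each generated node start command instead of dropping it. The real fix is in the shell script; this Java rendering, with hypothetical names, only shows the shape.

{code}
// Hypothetical sketch: the node start command should carry the -z argument
// through when the user supplied one, e.g. "-z localhost:2181".
public class StartCommandBuilder {
    public static String nodeStartCommand(String dir, int port, String zkHost) {
        StringBuilder cmd = new StringBuilder("solr start -cloud -d ").append(dir)
                .append(" -p ").append(port);
        if (zkHost != null && !zkHost.isEmpty()) {
            cmd.append(" -z ").append(zkHost); // previously lost in -e cloud mode
        }
        return cmd.toString();
    }

    public static void main(String[] args) {
        // solr start -cloud -d node1 -p 8983 -z localhost:2181
        System.out.println(nodeStartCommand("node1", 8983, "localhost:2181"));
    }
}
{code}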



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169864#comment-14169864
 ] 

ASF subversion and git services commented on SOLR-6509:
---

Commit 1631528 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1631528 ]

SOLR-6509, SOLR-6486, SOLR-6549, SOLR-6529: backport recent fixes / 
improvements to the bin/solr scripts for inclusion in 4.10.2 release.

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6529) Stop command in the start scripts should only stop the instance that it had started

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169867#comment-14169867
 ] 

ASF subversion and git services commented on SOLR-6529:
---

Commit 1631528 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1631528 ]

SOLR-6509, SOLR-6486, SOLR-6549, SOLR-6529: backport recent fixes / 
improvements to the bin/solr scripts for inclusion in 4.10.2 release.

> Stop command in the start scripts should only stop the instance that it had 
> started
> ---
>
> Key: SOLR-6529
> URL: https://issues.apache.org/jira/browse/SOLR-6529
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Timothy Potter
> Fix For: 5.0
>
>
> Currently the stop command looks for all running Solr instances and stops 
> those. I feel this is a bit dangerous.
> When starting a process with the start command we could write out a pid file 
> of the solr process that it started. Then the stop script should stop that 
> process.
> It could error out if the pid file is not present.
> We could still keep the feature of stopping all solr nodes by passing 
> -all ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6549) bin/solr script should support a -s option to set the -Dsolr.solr.home property

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169866#comment-14169866
 ] 

ASF subversion and git services commented on SOLR-6549:
---

Commit 1631528 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1631528 ]

SOLR-6509, SOLR-6486, SOLR-6549, SOLR-6529: backport recent fixes / 
improvements to the bin/solr scripts for inclusion in 4.10.2 release.

> bin/solr script should support a -s option to set the -Dsolr.solr.home 
> property
> ---
>
> Key: SOLR-6549
> URL: https://issues.apache.org/jira/browse/SOLR-6549
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Fix For: 5.0
>
>
> The bin/solr script supports a -d parameter for specifying the directory 
> containing the webapp, resources, etc, lib ... In most cases, these binaries 
> are reusable (and will eventually be in a server directory SOLR-3619) even if 
> you want to have multiple solr.solr.home directories on the same server. In 
> other words, it is more common/better to do:
> {code}
> bin/solr start -d server -s home1
> bin/solr start -d server -s home2
> {code}
> than to do:
> {code}
> bin/solr start -d server1
> bin/solr start -d server2
> {code}
> Basically, the start script needs to support a -s option that allows you to 
> share binaries but have different Solr home directories for running multiple 
> Solr instances on the same host.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6486) solr start script can have a debug flag option

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169865#comment-14169865
 ] 

ASF subversion and git services commented on SOLR-6486:
---

Commit 1631528 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1631528 ]

SOLR-6509, SOLR-6486, SOLR-6549, SOLR-6529: backport recent fixes / 
improvements to the bin/solr scripts for inclusion in 4.10.2 release.

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6486.patch
>
>
> normally I would add this line to my java -jar start.jar
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> the script should take the whole string or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true) or if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983) 
> use it verbatim



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Shai Erera
The problem I think is that you tie the Lucene and Solr releases together,
and expect that whenever a new Lucene release is out, with features that
are considered great for the *Lucene* users, the matching Solr release will
have a number of cool new features too.

We've decided a long time ago to release Lucene and Solr together. But I
don't remember that we decided that each Lucene/Solr release needs to
include great/cool features on both of the releases.

Yes, if Lucene's code base is hardened, enough to warrant a dot release,
and our policy says this means a dot Solr release as well, then Solr users
are advised to upgrade (just like ElasticSearch users when they issue
their dot(.dot) release, or whatever). Lucene is the base of Solr, and I see
a justification for e.g. a Solr 4.10 even if the only lines of code that
were changed are in the Lucene binaries that Solr 4.10 will package.

That, to me, is completely orthogonal to how often Solr releases cool and
great new features. If you feel this hasn't been the case since Solr 4.1
(and I don't think this is the case), then this is something serious we
should discuss in the context of "how come Solr didn't release something
major in last 2 years?". But as I said, this certainly isn't the case.

If you're looking for a justification to upgrade to Solr 5.0, then I will
try this:

"because the Lucene/Solr community has decided to release version 5.0,
which improves code stability, gets rid of back-compat baggage which
resulted in bugs, index corruptions and prevented code improvements and
enhancements; and because Solr and Lucene are released together - this is
mainly a release that will get your Solr on par w/ the latest Lucene
binaries, and will guarantee that your indexes will get supported by next
Lucene/Solr major version - 6.0; As usual, Lucene and Solr will continue to
release enhancements and cool features throughout the dot releases".

Point is (at least on my part) - *our* (this community's) major releases
are mostly about index backwards compatibility support. We also note big
API improvements/changes, though that's not always the case (see 4.2 w/
the DocValues changes, 4.1 and then again 4.7, or 4.8 with the facet API
overhaul) - depends if we marked a feature experimental or not. But to
the user who e.g. used DocValues in 4.1, and had to migrate to 4.2, I don't
think it would matter to him much if we said 4.2 is actually 5.0 -- it's
the same amount of work to him. So that's why I conclude that our major
releases are mostly about index back-compat support.

If it happens to be that they also come w/ great and cool new features,
that's a bonus IMO. For instance, and I'm only speculating, I think it's
just luck (that's my speculation) that both Lucene *and* Solr 4.0 had major
features (Lucene: Codecs, Solr: SolrCloud). The two are completely
unrelated and it could very well be that we would release Lucene 4.0 (and
Solr to accompany it) before SolrCloud is ready. And when that's ready, it
would follow with either a 4.1 or 5.0 release (both would be fine by me).

But I think that attempting to look at Lucene/Solr major releases as if
they carry some sort of gospel, is wrong. A 5.1 release can easily be
bigger than e.g. the previous 4.0 (at least in # features and lines of code
changes). We don't need to wait for 6.0 with this.

Our users should upgrade to the next major release, I would say, ALWAYS.
That's the only way they can guarantee their indexes continue to be
supported. Even if that major release has 0 (note, ZERO) # changes, they
should still upgrade. They will need to anyway, when the next-next major
release is out (or they reindex).

Shai

On Mon, Oct 13, 2014 at 7:11 PM, Jack Krupansky 
wrote:

> (And then there are people like me and my "Solr 4.x Deep Dive" book - I
> guess I'll finally have to get back to catching up, and have a great excuse
> for a new/revised title.)
>
> -- Jack Krupansky
>
> -Original Message- From: Alexandre Rafalovitch
> Sent: Monday, October 13, 2014 10:49 AM
> To: dev@lucene.apache.org
> Subject: Re: Is there going to be Lucene/Solr 4.11?
>
> On 13 October 2014 10:37, Jack Krupansky  wrote:
>
>> Now... I'll have to admit that maybe there might be clarity among the
>> Lucene
>> dev/user crowd, but mostly I'm referring to the Solr user crowd, who
>> aren't
>> up on Lucene internals.
>>
> I do admit having troubles envisioning the Solr 5 introduction book
> start with information I see so far:
> "Solr 5 is a great new release. _Users_ will benefit from using the
> new amazing features X, Y, and Z that take Solr beyond already great
> capabilities in version 4 and now allow , ,
> and ".
>
> I am not saying there is nothing to slot into that phrase above. But
> in all the discussions, it seems unclear what the actual user benefits
will be. The index-corruption work is an internal - if very important -
issue, but not something we can phrase user benefits around. "Increased
> stability" is a user-level term I

[jira] [Commented] (SOLR-6529) Stop command in the start scripts should only stop the instance that it had started

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169798#comment-14169798
 ] 

ASF subversion and git services commented on SOLR-6529:
---

Commit 1631515 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631515 ]

SOLR-6529: fix regression in error handling introduced by using pid files for 
finding Solr processes on the localhost

> Stop command in the start scripts should only stop the instance that it had 
> started
> ---
>
> Key: SOLR-6529
> URL: https://issues.apache.org/jira/browse/SOLR-6529
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Timothy Potter
> Fix For: 5.0
>
>
> Currently the stop command looks for all running Solr instances and stops 
> those. I feel this is a bit dangerous.
> When starting a process with the start command we could write out a pid file 
> of the solr process that it started. Then the stop script should stop that 
> process.
> It could error out if the pid file is not present.
> We could still keep the feature of stopping all solr nodes by passing 
> -all ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6529) Stop command in the start scripts should only stop the instance that it had started

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169797#comment-14169797
 ] 

ASF subversion and git services commented on SOLR-6529:
---

Commit 1631514 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1631514 ]

SOLR-6529: fix regression in error handling introduced by using pid files for 
finding Solr processes on the localhost

> Stop command in the start scripts should only stop the instance that it had 
> started
> ---
>
> Key: SOLR-6529
> URL: https://issues.apache.org/jira/browse/SOLR-6529
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Timothy Potter
> Fix For: 5.0
>
>
> Currently the stop command looks for all running Solr instances and stops 
> those. I feel this is a bit dangerous.
> When starting a process with the start command we could write out a pid file 
> of the solr process that it started. Then the stop script should stop that 
> process.
> It could error out if the pid file is not present.
> We could still keep the feature of stopping all solr nodes by passing 
> -all ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4715) CloudSolrServer does not provide support for setting underlying server properties

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4715:

Attachment: SOLR-4715-httpclient.patch

Merged some javadoc improvements in LBHttpSolrServer that Shawn had in the 
other patches.

> CloudSolrServer does not provide support for setting underlying server 
> properties
> -
>
> Key: SOLR-4715
> URL: https://issues.apache.org/jira/browse/SOLR-4715
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Hardik Upadhyay
>Assignee: Shawn Heisey
>  Labels: solr, solrj
> Attachments: SOLR-4715-httpclient.patch, SOLR-4715-httpclient.patch, 
> SOLR-4715-incomplete.patch, SOLR-4715-incomplete.patch
>
>
> CloudSolrServer (and LBHttpSolrServer) do not allow the user to set 
> underlying HttpSolrServer and HttpClient settings without arcane java 
> enchantments.
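
For context, a sketch of the kind of workaround the description alludes to, using the 4.x-era SolrJ classes: build a tuned HttpClient via HttpClientUtil, hand it to an LBHttpSolrServer, and wrap that in a CloudSolrServer. Treat the exact constants and constructors as assumptions recalled from that API, not verified signatures.

{code}
import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.LBHttpSolrServer;
import org.apache.solr.common.params.ModifiableSolrParams;

// Sketch of the multi-step workaround ("arcane java enchantments"); the
// HttpClientUtil constants and constructors are recalled from 4.x SolrJ
// and may differ slightly in your version.
public class TunedCloudServerFactory {
    public static CloudSolrServer create(String zkHost) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
        params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
        params.set(HttpClientUtil.PROP_CONNECTION_TIMEOUT, 5000); // ms
        params.set(HttpClientUtil.PROP_SO_TIMEOUT, 30000);        // ms
        HttpClient client = HttpClientUtil.createClient(params);
        // Build the load balancer on the tuned client, then hand it to the
        // cloud-aware server.
        LBHttpSolrServer lb = new LBHttpSolrServer(client);
        return new CloudSolrServer(zkHost, lb);
    }
}
{code}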



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #731: POMs out of sync

2014-10-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/731/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
shard1 is not consistent.  Got 35 from 
http://127.0.0.1:19887/collection1lastClient and got 11 from 
http://127.0.0.1:32370/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 35 from 
http://127.0.0.1:19887/collection1lastClient and got 11 from 
http://127.0.0.1:32370/collection1
at 
__randomizedtesting.SeedInfo.seed([882E075B35E84A4D:9C8894342B72A71]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)


REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([A07AB56BAB8B253A:219C3B73DCD44506]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)


REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:62624/zh/oh, https://127.0.0.1:59855/zh/oh, 
https://127.0.0.1:59852/zh/oh, https://127.0.0.1:59858/zh/oh, 
https://127.0.0.1:12555/zh/oh]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:62624/zh/oh, 
https://127.0.0.1:59855/zh/oh, https://127.0.0.1:59852/zh/oh, 
https://127.0.0.1:59858/zh/oh, https://127.0.0.1:12555/zh/oh]
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.doRequest(LBHttpSolrServer.java:353)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:312)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)




Build Log:
[...truncated 53231 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:547: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:199: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 280 minutes 16 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6617) /update/json/docs handler needs to do a better job with tweet like JSON structures

2014-10-13 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169749#comment-14169749
 ] 

Timothy Potter commented on SOLR-6617:
--

Patch looks good [~noble.paul]. I applied this to my test scenario:

{code}
curl "http://localhost:8983/solr/tutorial/update/json/docs"; -H 
'Content-type:application/json' -d @sample_tweet.json
{code}

Resulted in:

{code}
{
"user.name": [
  "Stewart Townsend"
],
"user.url": [
  "http://www.stewarttownsend.com";
],
"user.description": [
  "Developer Relations at Datasift (www.datasift.com)  - Car racing 
petrol head, all things social lover, co-founder of www.flowerytweetup.com"
],
"user.location": [
  "iPhone: 53.852402,-2.220047"
],
"user.statuses_count": [
  28247
],
"user.followers_count": [
  3094
],
"user.friends_count": [
  510
],
"user.screen_name": [
  "stewarttownsend"
],
"user.lang": [
  "en"
],
"user.time_zone": [
  "London"
],
"user.listed_count": [
  221
],
"user.id": [
  14065694
],
"user.id_str": [
  14065694
],
"user.geo_enabled": [
  true
],
"id": "136447843652214784",
"text": [
  "Morning San Francisco - 36 hours and counting.. #datasift"
],
"source": [
  "http://www.tweetdeck.com\"; rel=\"nofollow\">TweetDeck"
],
"created_at": [
  "Tue, 15 Nov 2011 14:17:55 +"
],
"_version_": 1481875073806631000
  }
{code}

Which I'd say is very reasonable behavior on Solr's part.  +1 for commit
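
Not the patch itself, but a minimal standalone sketch of the flattening the output above suggests: nested objects become dotted field paths. The multi-valued wrapping Solr applies is elided, and the names are illustrative.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: recursively flatten nested JSON-style maps so that
// {"user": {"name": ...}} becomes a "user.name" field, as in the output above.
public class JsonFlattener {
    public static Map<String, Object> flatten(Map<String, Object> nested) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", nested, flat);
        return flat;
    }

    @SuppressWarnings("unchecked")
    private static void flatten(String prefix, Map<String, Object> node,
                                Map<String, Object> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                flatten(key, (Map<String, Object>) e.getValue(), out);
            } else {
                out.put(key, e.getValue()); // leaf: keep the dotted path
            }
        }
    }
}
{code}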

> /update/json/docs handler needs to do a better job with tweet like JSON 
> structures
> --
>
> Key: SOLR-6617
> URL: https://issues.apache.org/jira/browse/SOLR-6617
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Noble Paul
> Attachments: SOLR-6617.patch
>
>
> SOLR-6304 allows me to send in arbitrary JSON document and have Solr do 
> something reasonable with it. I tried this with a simple tweet and got a 
> weird error:
> {code}
> curl "http://localhost:8983/solr/tutorial/update/json/docs"; -H 
> 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":400,"QTime":11},"error":{"msg":"Document contains 
> multiple values for uniqueKey field: id=[14065694, 
> 136447843652214784]","code":400}}
> {code}
> Here's the tweet I'm trying to index:
> {code}
> {
> "user": {
> "name": "John Doe",
> "screen_name": "example",
> "lang": "en",
> "time_zone": "London",
> "listed_count": 221,
> "id": 14065694,
> "geo_enabled": true
> },
> "id": "136447843652214784",
> "text": "Morning San Francisco - 36 hours and counting.. #datasift",
> "created_at": "Tue, 15 Nov 2011 14:17:55 +"
> }
> {code}
> The error is because the nested user object within the tweet also has an "id" 
> field. So then I tried to map /user/id to user_id_s via:
> {code}
> curl 
> "http://localhost:8983/solr/tutorial/update/json/docs?f=user_id_s:/user/id"; 
> -H 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":400,"QTime":0},"error":{"msg":"Document is 
> missing mandatory uniqueKey field: id","code":400}}
> {code}
> So then I added the mapping for id explicitly and it worked:
> curl 
> "http://localhost:8983/solr/tutorial/update/json/docs?f=id:/id&f=user_id_s:/user/id";
>  -H 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":0,"QTime":25}}
> Working through this wasn't terrible but our goal with features like this is 
> to have Solr make good decisions when possible to ease the new user's burden 
> of getting to know Solr.
> I'm just wondering if the reasonable thing to do wouldn't be to map the user 
> fields with user_ prefix? ie /user/id becomes user_id automatically.
> Lastly, I wanted to use field guessing with this so my JSON document gets 
> indexed in a reasonable way and the only data that got indexed is:
> {code}
> {
> "user_id_s": "14065694",
> "id": "136447843652214784",
> "_version_": 1481614081193410600
> }
> {code}
> So I explicitly defined the /update/json/docs request handler in my 
> solrconfig.xml as:
> {code}
> <requestHandler name="/update/json/docs" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">add-unknown-fields-to-the-schema</str>
>     <str name="update.contentType">application/json</str>
>   </lst>
> </requestHandler>
> {code}
> Same result - no field guessing! (this is using the schemaless example config)



--
This message was sent by

[jira] [Updated] (SOLR-6326) ExternalFileFieldReloader and commits

2014-10-13 Thread Peter Keegan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Keegan updated SOLR-6326:
---
Attachment: SOLR-6326.patch

Patch attached

> ExternalFileFieldReloader and commits
> -
>
> Key: SOLR-6326
> URL: https://issues.apache.org/jira/browse/SOLR-6326
> Project: Solr
>  Issue Type: Bug
>Reporter: Peter Keegan
>  Labels: difficulty-medium, externalfilefield, impact-medium
> Attachments: SOLR-6326.patch
>
>
> When there are multiple 'external file field' files available, Solr will 
> reload the last one (lexicographically) with a commit, but only if changes 
> were made to the index. Otherwise, it skips the reload and logs: "No 
> uncommitted changes. Skipping IW.commit." 
> IndexWriter.hasUncommittedChanges() returns false, but new external files 
> should be reloaded with commits.
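
Not Solr's ExternalFileFieldReloader, but a minimal standalone sketch of the behavior the issue asks for: track the lexicographically last external file and report a reload whenever it changes, independent of whether the index itself had uncommitted changes. Names are hypothetical.

{code}
import java.io.File;
import java.io.FilenameFilter;
import java.util.Arrays;

// Hypothetical sketch: detect a new "last" external file by name/mtime so a
// reload can happen even when IndexWriter reports no uncommitted changes.
public class ExternalFileWatcher {
    private String lastLoadedName;
    private long lastLoadedMtime;

    public synchronized boolean needsReload(File dataDir, final String prefix) {
        File[] candidates = dataDir.listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.startsWith(prefix);
            }
        });
        if (candidates == null || candidates.length == 0) return false;
        Arrays.sort(candidates); // lexicographic order: the last file wins
        File newest = candidates[candidates.length - 1];
        boolean changed = !newest.getName().equals(lastLoadedName)
                || newest.lastModified() != lastLoadedMtime;
        if (changed) {
            lastLoadedName = newest.getName();
            lastLoadedMtime = newest.lastModified();
        }
        return changed;
    }
}
{code}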



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_67) - Build # 11288 - Failure!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11288/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 54658 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:524: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:79: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:568: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1905: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1939: 
Compile failed; see the compiler error output for details.

Total time: 117 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_67 
-XX:-UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40-ea-b09) - Build # 4268 - Still Failing!

2014-10-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4268/
Java: 32bit/jdk1.8.0_40-ea-b09 -client -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 11361 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.ChaosMonkeySafeLeaderTest-93F4C42173ACF1E-001\init-core-data-001
   [junit4]   2> 3472022 T9063 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (true)
   [junit4]   2> 3472023 T9063 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2> 3472030 T9063 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 3472034 T9063 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3472035 T9064 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 3472162 T9063 oasc.ZkTestServer.run start zk server on 
port:61729
   [junit4]   2> 3472162 T9063 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 3472166 T9063 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 3472171 T9070 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1ba8b48 name:ZooKeeperConnection 
Watcher:127.0.0.1:61729 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 3472172 T9063 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 3472172 T9063 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 3472172 T9063 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 3472180 T9065 oazs.NIOServerCnxn.doIO WARN caught end of 
stream exception EndOfStreamException: Unable to read additional data from 
client sessionid 0x1490a572d76, likely client has closed socket
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 
   [junit4]   2> 3472180 T9063 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 3472184 T9063 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 3472185 T9072 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@c01b63 name:ZooKeeperConnection 
Watcher:127.0.0.1:61729/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 3472187 T9063 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 3472187 T9063 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 3472188 T9063 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 3472194 T9063 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 3472198 T9063 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 3472200 T9063 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 3472205 T9063 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 3472205 T9063 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 3472216 T9063 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 3472216 T9063 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 3472221 T9063 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 3472221 T9063 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 3472226 T9063 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 3472226 T9063 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 3472232 T9063 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\co

[jira] [Commented] (SOLR-6592) Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit test when leader is gone

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169515#comment-14169515
 ] 

ASF subversion and git services commented on SOLR-6592:
---

Commit 1631467 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1631467 ]

SOLR-6592: add mention in solr/CHANGES.txt

> Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit 
> test when leader is gone
> --
>
> Key: SOLR-6592
> URL: https://issues.apache.org/jira/browse/SOLR-6592
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6592.patch
>
>
> HttpPartitionTest is failing due to a ThreadLeakError, which I believe is 
> because the re-try loop in ZkController.waitForLeaderToSeeDownState is coded 
> to take upwards of 12 minutes to fail (2 minutes socket timeout, 6 max 
> retries). The code should be improved to stop trying if the leader is gone, 
> which seems to be the case here (maybe). At the very least, we need to figure 
> out how to avoid this ThreadLeakError.
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
> Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
> Error Message:
> 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:  
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest] at 
> java.net.SocketInputStream.socketRead0(Native Method) at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
> java.net.SocketInputStream.read(SocketInputStream.java:170) at 
> java.net.SocketInputStream.read(SocketInputStream.java:141) at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
>  at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
>  at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>  at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
>  at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>  at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>  at 
> org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
>  at 
> org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
>  at 
> org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
> at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
>  at 
> org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest]
> at java.net.SocketInputStream.socket

[jira] [Commented] (SOLR-6533) Support editing common solrconfig.xml values

2014-10-13 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169510#comment-14169510
 ] 

Noble Paul commented on SOLR-6533:
--

Second cut: ZK support added.

> Support editing common solrconfig.xml values
> 
>
> Key: SOLR-6533
> URL: https://issues.apache.org/jira/browse/SOLR-6533
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
> Attachments: SOLR-6533.patch, SOLR-6533.patch
>
>
> There are a bunch of properties in solrconfig.xml which users want to edit. 
> We will attack them first.
> These properties will be persisted to a separate file called config.json (or 
> whatever the file ends up being called). Instead of saving in the same 
> format, we will have well-known properties which users can directly edit:
> {code}
> cores.transientCacheSize
> indexConfig.mergeFactor
> {code}   
> The api will be modeled around the bulk schema API
> {code:javascript}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "set-property" : {"index.mergeFactor:5},
> "unset-property":{"cores.transientCacheSize"}
> }'
> {code}
> {code:javascript}
> //or use this to set ${mypropname} values
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "set-user-property" : {"mypropname":"my_prop_val"},
> "unset-user-property":{"mypropname"}
> }'
> {code}
> The values stored in config.json will always take precedence and will be 
> applied after loading solrconfig.xml. 
> An HTTP GET on the /config path will return the effective config that is applied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6592) Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit test when leader is gone

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169503#comment-14169503
 ] 

ASF subversion and git services commented on SOLR-6592:
---

Commit 1631464 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1631464 ]

SOLR-6592: add mention in solr/CHANGES.txt

> Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit 
> test when leader is gone
> --
>
> Key: SOLR-6592
> URL: https://issues.apache.org/jira/browse/SOLR-6592
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6592.patch
>
>
> HttpPartitionTest is failing due to a ThreadLeakError, which I believe is 
> because the re-try loop in ZkController.waitForLeaderToSeeDownState is coded 
> to take upwards of 12 minutes to fail (2 minutes socket timeout, 6 max 
> retries). The code should be improved to stop trying if the leader is gone, 
> which seems to be the case here (maybe). At the very least, we need to figure 
> out how to avoid this ThreadLeakError.
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
> Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
> Error Message:
> 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:  
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest] at 
> java.net.SocketInputStream.socketRead0(Native Method) at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
> java.net.SocketInputStream.read(SocketInputStream.java:170) at 
> java.net.SocketInputStream.read(SocketInputStream.java:141) at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
>  at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
>  at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>  at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
>  at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>  at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>  at 
> org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
>  at 
> org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
>  at 
> org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
> at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
>  at 
> org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest]
> at java.net.SocketInputStream.socket

[jira] [Commented] (SOLR-6592) Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit test when leader is gone

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169498#comment-14169498
 ] 

ASF subversion and git services commented on SOLR-6592:
---

Commit 1631462 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1631462 ]

SOLR-6592: add mention in solr/CHANGES.txt

> Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit 
> test when leader is gone
> --
>
> Key: SOLR-6592
> URL: https://issues.apache.org/jira/browse/SOLR-6592
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6592.patch
>
>
> HttpPartitionTest is failing due to a ThreadLeakError, which I believe is 
> because the re-try loop in ZkController.waitForLeaderToSeeDownState is coded 
> to take upwards of 12 minutes to fail (2 minutes socket timeout, 6 max 
> retries). The code should be improved to stop trying if the leader is gone, 
> which seems to be the case here (maybe). At the very least, we need to figure 
> out how to avoid this ThreadLeakError.
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
> Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
> Error Message:
> 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:  
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest] at 
> java.net.SocketInputStream.socketRead0(Native Method) at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
> java.net.SocketInputStream.read(SocketInputStream.java:170) at 
> java.net.SocketInputStream.read(SocketInputStream.java:141) at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
>  at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
>  at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>  at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
>  at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>  at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>  at 
> org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
>  at 
> org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
>  at 
> org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
> at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
>  at 
> org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest]
> at java.net.SocketInputStream.socket

[jira] [Resolved] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6511.
--
Resolution: Fixed

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.
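The off-by-one is easy to see in isolation. The snippet below is illustrative only, not the actual LeaderInitiatedRecoveryThread code:

{code}
// With maxTries == 1, the strict comparison never enters the loop,
// while the inclusive comparison runs the body exactly once.
public class FencepostDemo {
  public static void main(String[] args) {
    final int maxTries = 1;

    int tries = 0;
    while (++tries < maxTries) {       // 1 < 1 is false: zero attempts
      System.out.println("strict <  : attempt " + tries);
    }

    tries = 0;
    while (++tries <= maxTries) {      // 1 <= 1 is true: exactly one attempt
      System.out.println("inclusive <=: attempt " + tries);
    }
  }
}
{code}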



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Jack Krupansky
(And then there are people like me and my "Solr 4.x Deep Dive" book - I 
guess I'll finally have to get back to catching up, and have a great excuse 
for a new/revised title.)


-- Jack Krupansky

-Original Message- 
From: Alexandre Rafalovitch

Sent: Monday, October 13, 2014 10:49 AM
To: dev@lucene.apache.org
Subject: Re: Is there going to be Lucene/Solr 4.11?

On 13 October 2014 10:37, Jack Krupansky wrote:
> Now... I'll have to admit that maybe there might be clarity among the Lucene
> dev/user crowd, but mostly I'm referring to the Solr user crowd, who aren't
> up on Lucene internals.

I do admit having trouble envisioning the Solr 5 introduction book
starting with the information I have seen so far:
"Solr 5 is a great new release. _Users_ will benefit from using the
new amazing features X, Y, and Z that take Solr beyond the already great
capabilities in version 4 and now allow <...>, <...>,
and <...>".

I am not saying there is nothing to slot into that phrase above. But
in all the discussions, it seems unclear what the actual user benefits
will be. The index-corruption work is an internal - if very important -
issue, but not something we can phrase user benefits around. "Increased
stability" is a user-level term I guess, but it is semi-negative as it
automatically implies previous _general_ instability.

> This is an ongoing problem with Solr - changes that occur strictly in Lucene
> with no source changes in Solr, never get reported as improvements to Solr -
> and it is nigh impossible for a Solr user to examine a list of Lucene
> changes and determine how they may or may not impact the use of Solr
> features.


I confirm this perception. I remember looking at one of the recent
Solr release notes and thinking: "hmm, not much here, why release", then
clicking through to the Lucene notes and seeing the significant
number of important items. Most people do not click through,
though, and those who do, do not necessarily understand which of the
issues have an impact on Solr.

I guess what I am saying is that it would be nice for somebody to
figure out the Solr 5 user story at some point _before_ we actually
ship it. And the earlier, the better.

Regards,
  Alex.
P.s. If somebody wants to discuss this user-level perception at Solr
Revolution, I am happy to shout the first round of drinks. :-) Or,
hash it out virtually in the popularizers community I setup:
https://www.linkedin.com/groups?gid=6713853


Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169433#comment-14169433
 ] 

Timothy Potter commented on SOLR-6511:
--

Good catch, Shalin, thanks. That's my bad for changing functionality as part of 
this ticket.

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169432#comment-14169432
 ] 

ASF subversion and git services commented on SOLR-6511:
---

Commit 1631447 from [~thelabdude] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1631447 ]

SOLR-6511: fix back compat issue when reading existing data from ZK

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169428#comment-14169428
 ] 

ASF subversion and git services commented on SOLR-6511:
---

Commit 1631444 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631444 ]

SOLR-6511: fix back compat issue when reading existing data from ZK

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6592) Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit test when leader is gone

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169425#comment-14169425
 ] 

ASF subversion and git services commented on SOLR-6592:
---

Commit 1631442 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631442 ]

SOLR-6592: Avoid waiting for the leader to see the down state if that leader is 
not live.
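In rough terms, the guard amounts to the sketch below; isNodeLive() and requestLeaderToSeeDownState() are hypothetical stand-ins for the real ZkController internals, so read this as the shape of the fix rather than the committed code:

{code}
// Sketch only: stop retrying as soon as the leader drops out of live_nodes,
// instead of burning through maxTries * socketTimeout (up to ~12 minutes)
// against a node that is already gone.
int tries = 0;
while (tries < maxTries) {
  if (!isNodeLive(leaderNodeName)) {   // hypothetical liveness check
    log.warn("Leader " + leaderNodeName + " is no longer live; giving up");
    break;
  }
  try {
    requestLeaderToSeeDownState();     // hypothetical HTTP round-trip to the leader
    break;                             // success: leader has seen the down state
  } catch (Exception e) {
    tries++;
  }
}
{code}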

> Re-try loop in the ZkController.waitForLeaderToSeeDownState method hangs unit 
> test when leader is gone
> --
>
> Key: SOLR-6592
> URL: https://issues.apache.org/jira/browse/SOLR-6592
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6592.patch
>
>
> HttpPartitionTest is failing due to a ThreadLeakError, which I believe is 
> because the re-try loop in ZkController.waitForLeaderToSeeDownState is coded 
> to take upwards of 12 minutes to fail (2 minutes socket timeout, 6 max 
> retries). The code should be improved to stop trying if the leader is gone, 
> which seems to be the case here (maybe). At the very least, we need to figure 
> out how to avoid this ThreadLeakError.
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11234/
> Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseG1GC
> 2 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
> Error Message:
> 1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:  
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-HttpPartitionTest] at 
> java.net.SocketInputStream.socketRead0(Native Method) at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
> java.net.SocketInputStream.read(SocketInputStream.java:170) at 
> java.net.SocketInputStream.read(SocketInputStream.java:141) at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
>  at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
>  at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>  at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>  at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
>  at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>  at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>  at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
>  at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
>  at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
>  at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:466)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
>  at 
> org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1623)
>  at 
> org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:422)
>  at 
> org.apache.solr.cloud.ZkController.access$100(ZkController.java:93) 
> at org.apache.solr.cloud.ZkController$1.command(ZkController.java:261)
>  at 
> org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)
> Stack Trace:
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
>1) Thread[id=8655, name=Thread-2764, state=RUNNABLE, 
> group=TGRP-Http

Re: Solr: very slow custom Query/Weight/Scorer, "post filtering" vs sorting

2014-10-13 Thread Shawn Heisey
On 10/13/2014 9:04 AM, Patrick Schemitz wrote:
> I'm trying to implement an image similarity search using Solr 4.6.1.

You're reinventing a wheel that has already been built a couple of times. 
This does not necessarily mean that you need to give up on your
implementation, but it does mean you need to have the right reasons for
continuing.  This might even be good news for you, because you know that
it *is* possible.

There's one commercial solution that I know of (supporting up to Solr
4.9), and one that's open source:

http://www.pixolution.de/
https://bitbucket.org/dermotte/liresolr

The company where I work has pixolution integrated into Solr.  Test
queries look really good, but we haven't yet integrated it into anything
that faces our users.  I haven't tried the open source solution.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169402#comment-14169402
 ] 

ASF subversion and git services commented on SOLR-6511:
---

Commit 1631439 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1631439 ]

SOLR-6511: fix back compat issue when reading existing data from ZK

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4715) CloudSolrServer does not provide support for setting underlying server properties

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4715:

Attachment: SOLR-4715-httpclient.patch

Here's a patch which adds constructors for HttpClient. Setting other properties 
such as request writers and response parsers has already been done with 
SOLR-5223.

> CloudSolrServer does not provide support for setting underlying server 
> properties
> -
>
> Key: SOLR-4715
> URL: https://issues.apache.org/jira/browse/SOLR-4715
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Hardik Upadhyay
>Assignee: Shawn Heisey
>  Labels: solr, solrj
> Attachments: SOLR-4715-httpclient.patch, SOLR-4715-incomplete.patch, 
> SOLR-4715-incomplete.patch
>
>
> CloudSolrServer (and LBHttpSolrServer) do not allow the user to set 
> underlying HttpSolrServer and HttpClient settings without arcane java 
> enchantments.
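For context, the kind of call the patch enables might look like the sketch below. HttpClientUtil and its properties are existing SolrJ API, but the two-argument CloudSolrServer constructor is the one proposed in the attached patch, so treat that signature as tentative:

{code}
// Build a tuned HttpClient with existing SolrJ helpers...
ModifiableSolrParams params = new ModifiableSolrParams();
params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 128);
params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 32);
HttpClient httpClient = HttpClientUtil.createClient(params);

// ...and hand it to CloudSolrServer via the constructor added by the patch
// (signature tentative until the patch is committed).
CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181", httpClient);
{code}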



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr: very slow custom Query/Weight/Scorer, "post filtering" vs sorting

2014-10-13 Thread Patrick Schemitz
On 10/13/2014 05:09 PM, Shalin Shekhar Mangar wrote:
> That sounds like
> re-ranking?
> https://cwiki.apache.org/confluence/display/solr/Query+Re-Ranking

Interesting approach. So at least I don't have to implement an
additional custom sort function. However,

"If a document matches the original query, but does not match the
re-ranking query, the document's original score will remain."

I need the ImageSimilarityQuery to drop docs with dissimilar images from
the result set. As this apparently cannot be done in the re-rank query,
I'd have to run the ImageSimilarityQuery twice: once as a post filter
to trim the recall, and then as a re-rank query for correct sorting.

As computing the image similarities is quite expensive, I'd rather not
do this twice.

Any idea how I can resolve this?

Cheers, Patrick

> On Mon, Oct 13, 2014 at 8:34 PM, Patrick Schemitz wrote:
> 
> Hi all,
> 
> I'm trying to implement an image similarity search using Solr 4.6.1.
> 
> I store an image descriptor in each document, and compare these with the
> descriptor given in the query, resulting in an image similarity score.
> This score is then used to filter documents (via a threshold), and to
> sort the results.
> 
> I've written the boilerplate QParser and QParserPlugin around a custom
> Query and its accompanying Weight and Scorer classes. The
> Scorer.nextDoc() is where the actual image similarity computation (and
> skipping of docs below the threshold) takes place.
> 
> This Query/Weight/Scorer construct is obviously very costly, so I don't
> want it to leapfrog with the other - much faster - filters in the query
> (especially when using a high threshold).
> 
> I've tried the "post filtering" mechanism described by Yonik here:
> http://java.dzone.com/articles/advanced-filter-caching-solr
> 
> This speeds up things, but now the results are not sorted by image
> similarity any more.
> 
> I guess what I actually need is a "post query", as opposed to a "post
> filter".
> 
> How can I bring together post filtering and sorting?
> 
> Do I have to write and use a custom sort function, effectively computing
> image similarities twice?
> 
> Any help appreciated!
> 
> Cheers, Patrick
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr: very slow custom Query/Weight/Scorer, "post filtering" vs sorting

2014-10-13 Thread Shalin Shekhar Mangar
That sounds like re-ranking?
https://cwiki.apache.org/confluence/display/solr/Query+Re-Ranking

On Mon, Oct 13, 2014 at 8:34 PM, Patrick Schemitz  wrote:

> Hi all,
>
> I'm trying to implement an image similarity search using Solr 4.6.1.
>
> I store an image descriptor in each document, and compare these with the
> descriptor given in the query, resulting in an image similarity score.
> This score is then used to filter documents (via a threshold), and to
> sort the results.
>
> I've written the boilerplate QParser and QParserPlugin around a custom
> Query and its accompanying Weight and Scorer classes. The
> Scorer.nextDoc() is where the actual image similarity computation (and
> skipping of docs below the threshold) takes place.
>
> This Query/Weight/Scorer construct is obviously very costly, so I don't
> want it to leapfrog with the other - much faster - filters in the query
> (especially when using a high threshold).
>
> I've tried the "post filtering" mechanism described by Yonik here:
> http://java.dzone.com/articles/advanced-filter-caching-solr
>
> This speeds up things, but now the results are not sorted by image
> similarity any more.
>
> I guess what I actually need is a "post query", as opposed to a "post
> filter".
>
> How can I bring together post filtering and sorting?
>
> Do I have to write and use a custom sort function, effectively computing
> image similarities twice?
>
> Any help appreciated!
>
> Cheers, Patrick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.


Solr: very slow custom Query/Weight/Scorer, "post filtering" vs sorting

2014-10-13 Thread Patrick Schemitz
Hi all,

I'm trying to implement an image similarity search using Solr 4.6.1.

I store an image descriptor in each document, and compare these with the
descriptor given in the query, resulting in an image similarity score.
This score is then used to filter documents (via a threshold), and to
sort the results.

I've written the boilerplate QParser and QParserPlugin around a custom
Query and its accompanying Weight and Scorer classes. The
Scorer.nextDoc() is where the actual image similarity computation (and
skipping of docs below the threshold) takes place.

This Query/Weight/Scorer construct is obviously very costly, so I don't
want it to leapfrog with the other - much faster - filters in the query
(especially when using a high threshold).

I've tried the "post filtering" mechanism described by Yonik here:
http://java.dzone.com/articles/advanced-filter-caching-solr

This speeds up things, but now the results are not sorted by image
similarity any more.

I guess what I actually need is a "post query", as opposed to a "post
filter".

How can I bring together post filtering and sorting?

Do I have to write and use a custom sort function, effectively computing
image similarities twice?
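For reference, my post filter follows the usual DelegatingCollector shape. Below is a simplified sketch rather than my actual code; similarity() stands in for the real descriptor comparison:

{code}
import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.ExtendedQueryBase;
import org.apache.solr.search.PostFilter;

public class ImageSimilarityFilter extends ExtendedQueryBase implements PostFilter {

  private final float threshold;

  public ImageSimilarityFilter(float threshold) {
    this.threshold = threshold;
  }

  @Override
  public boolean getCache() {
    return false;                            // post filters must not be cached
  }

  @Override
  public int getCost() {
    return Math.max(super.getCost(), 100);   // cost >= 100: run after cheaper filters
  }

  @Override
  public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
    return new DelegatingCollector() {
      @Override
      public void collect(int doc) throws IOException {
        // Compute the expensive score once; forward only similar-enough docs.
        if (similarity(docBase + doc) >= threshold) {
          super.collect(doc);
        }
      }
    };
  }

  float similarity(int globalDoc) {
    return 1.0f;                             // placeholder for the descriptor comparison
  }
}
{code}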

Any help appreciated!

Cheers, Patrick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6620) 【SOLR】suggestComponent

2014-10-13 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-6620.
--
Resolution: Not a Problem

Please raise this kind of question on the user's list; lots more people will 
see it and you'll get much faster help. We try to reserve the JIRAs for code 
changes.

See here: http://lucene.apache.org/solr/discussion.html for how to get on the 
user's list.

> 【SOLR】suggestComponent
> --
>
> Key: SOLR-6620
> URL: https://issues.apache.org/jira/browse/SOLR-6620
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7.2
>Reporter: Chang
>
> I want to use the suggester; everything is OK, but restarting Solr is very 
> slow. Any suggestions?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-13 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169367#comment-14169367
 ] 

Anurag Sharma commented on SOLR-6307:
-

Yes. Sure, go ahead! 
Should I send the review using RBT, or will the attached patch file work?

> Atomic update remove does not work for int array or date array
> --
>
> Key: SOLR-6307
> URL: https://issues.apache.org/jira/browse/SOLR-6307
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.9
>Reporter: Kun Xi
>Assignee: Noble Paul
>  Labels: atomic, difficulty-medium, impact-medium
> Attachments: SOLR-6307.patch
>
>
> Try to remove an element in the string array with curl:
> {code}
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
> [1960]},  "id": 1098}]'
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
> ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
> "2014-02-21T12:00:00Z"]}, "id": 1098}]'
> {code}
> Neither of them works.
> The set and add operations for int arrays work. 
> The set, remove, and add operations for string arrays work.
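For anyone reproducing this from SolrJ rather than curl, the equivalent atomic-update request is roughly the sketch below (field names taken from the report above; the server URL is illustrative):

{code}
// SolrJ equivalent of the failing int-array remove.
HttpSolrServer server = new HttpSolrServer("http://localhost:8080/solr");

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "1098");

Map<String, Object> removeOp = new HashMap<>();
removeOp.put("remove", Arrays.asList(1960));   // the remove that has no effect
doc.addField("attr_birth_year_is", removeOp);

server.add(doc);
server.commit();
{code}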



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6421) ADDREPLICA doesn't respect :port_solr designation

2014-10-13 Thread Ralph Tice (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169359#comment-14169359
 ] 

Ralph Tice edited comment on SOLR-6421 at 10/13/14 2:51 PM:


Never mind, I see the docs don't even mention createNodeSet anymore. :)


was (Author: ralph.tice):
Could we at least address the usability issue here, and if port_solr is passed 
into createNodeSet throw a validation error that suggests node parameter 
instead?

> ADDREPLICA doesn't respect :port_solr designation
> -
>
> Key: SOLR-6421
> URL: https://issues.apache.org/jira/browse/SOLR-6421
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: Ubuntu 14.04
>Reporter: Ralph Tice
>  Labels: SolrCloud, difficulty-easy, impact-medium
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I issue an ADDREPLICA call like so:
>
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&createNodeSet=solr18.mycorp.com:8983_solr
> SolrCloud does not seem to respect the 8983_solr designation in the 
> createNodeSet parameter and instead places the shard on any JVM on the 
> machine instance. On the first attempt I got a replica on 8994_solr, and a second 
> attempt to place a replica on 8983 got a replica on 8992_solr instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Alexandre Rafalovitch
On 13 October 2014 10:37, Jack Krupansky  wrote:
> Now... I'll have to admit that maybe there might be clarity among the Lucene
> dev/user crowd, but mostly I'm referring to the Solr user crowd, who aren't
> up on Lucene internals.
I do admit having trouble envisioning the Solr 5 introduction book
starting with the information I have seen so far:
"Solr 5 is a great new release. _Users_ will benefit from using the
new amazing features X, Y, and Z that take Solr beyond the already great
capabilities in version 4 and now allow <...>, <...>,
and <...>".

I am not saying there is nothing to slot into that phrase above. But
in all the discussions, it seems unclear what the actual user benefits
will be. The index-corruption work is an internal - if very important -
issue, but not something we can phrase user benefits around. "Increased
stability" is a user-level term I guess, but it is semi-negative as it
automatically implies previous _general_ instability.

> This is an ongoing problem with Solr - changes that occur strictly in Lucene
> with no source changes in Solr, never get reported as improvements to Solr -
> and it is nigh impossible for a Solr user to examine a list of Lucene
> changes and determine how they may or may not impact the use of Solr
> features.

I confirm this perception. I remember looking at one of the recent
Solr release notes and thinking: "hmm, not much here, why release", then
clicking through to the Lucene notes and seeing the significant
number of important items. Most people do not click through,
though, and those who do, do not necessarily understand which of the
issues have an impact on Solr.

I guess what I am saying is that it would be nice for somebody to
figure out the Solr 5 user story at some point _before_ we actually
ship it. And the earlier, the better.

Regards,
   Alex.
P.s. If somebody wants to discuss this user-level perception at Solr
Revolution, I am happy to shout the first round of drinks. :-) Or,
hash it out virtually in the popularizers community I setup:
https://www.linkedin.com/groups?gid=6713853


Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6421) ADDREPLICA doesn't respect :port_solr designation

2014-10-13 Thread Ralph Tice (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169359#comment-14169359
 ] 

Ralph Tice commented on SOLR-6421:
--

Could we at least address the usability issue here, and if port_solr is passed 
into createNodeSet throw a validation error that suggests node parameter 
instead?
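Something along the lines of this sketch (hypothetical code, not an actual patch):

{code}
// Hypothetical validation: reject host:port_context entries in createNodeSet
// for ADDREPLICA and point users at the 'node' parameter instead.
String createNodeSet = params.get("createNodeSet");
if (createNodeSet != null) {
  for (String entry : createNodeSet.split(",")) {
    if (entry.contains("_")) {             // looks like a node name, e.g. host:8983_solr
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "createNodeSet entry '" + entry + "' looks like a node name; "
          + "use the 'node' parameter to target a specific Solr instance");
    }
  }
}
{code}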

> ADDREPLICA doesn't respect :port_solr designation
> -
>
> Key: SOLR-6421
> URL: https://issues.apache.org/jira/browse/SOLR-6421
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: Ubuntu 14.04
>Reporter: Ralph Tice
>  Labels: SolrCloud, difficulty-easy, impact-medium
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I issue an ADDREPLICA call like so:
>
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&createNodeSet=solr18.mycorp.com:8983_solr
> SolrCloud does not seem to respect the 8983_solr designation in the 
> createNodeSet parameter and instead places the shard on any JVM on the 
> machine instance. On the first attempt I got a replica on 8994_solr, and a second 
> attempt to place a replica on 8983 got a replica on 8992_solr instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169355#comment-14169355
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1631429 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631429 ]

SOLR-6485 added javadocs

> ReplicationHandler should have an option to throttle the speed of replication
> -
>
> Key: SOLR-6485
> URL: https://issues.apache.org/jira/browse/SOLR-6485
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
> SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch
>
>
> The ReplicationHandler should have an option to throttle the speed of 
> replication.
> It is useful for people who want to bring up nodes in their SolrCloud cluster, or 
> who have a backup-restore API, and do not want to eat up all their network 
> bandwidth while replicating.
> I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169354#comment-14169354
 ] 

ASF subversion and git services commented on SOLR-6485:
---

Commit 1631426 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1631426 ]

SOLR-6485 added javadocs

> ReplicationHandler should have an option to throttle the speed of replication
> -
>
> Key: SOLR-6485
> URL: https://issues.apache.org/jira/browse/SOLR-6485
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Noble Paul
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
> SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch
>
>
> The ReplicationHandler should have an option to throttle the speed of 
> replication.
> It is useful for people who want to bring up nodes in their SolrCloud cluster, or 
> who have a backup-restore API, and do not want to eat up all their network 
> bandwidth while replicating.
> I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169350#comment-14169350
 ] 

ASF subversion and git services commented on SOLR-6476:
---

Commit 1631421 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631421 ]

SOLR-6476 change the error message

> Create a bulk mode for schema API
> -
>
> Key: SOLR-6476
> URL: https://issues.apache.org/jira/browse/SOLR-6476
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: managedResource
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch
>
>
> The current schema API does one operation at a time and the normal use case is 
> that users add multiple fields/field types/copyFields etc. in one shot.
> example 
> {code:javascript}
> curl http://localhost:8983/solr/collection1/schema -H 
> 'Content-type:application/json'  -d '{
> "add-field": {
> "name":"sell-by",
> "type":"tdate",
> "stored":true
> },
> "add-field":{
> "name":"catchall",
> "type":"text_general",
> "stored":false
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169349#comment-14169349
 ] 

ASF subversion and git services commented on SOLR-6476:
---

Commit 1631420 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1631420 ]

SOLR-6476 change the error message

> Create a bulk mode for schema API
> -
>
> Key: SOLR-6476
> URL: https://issues.apache.org/jira/browse/SOLR-6476
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: managedResource
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch
>
>
> The current schema API does one operation at a time and the normal use case is 
> that users add multiple fields/field types/copyFields etc. in one shot.
> example 
> {code:javascript}
> curl http://localhost:8983/solr/collection1/schema -H 
> 'Content-type:application/json'  -d '{
> "add-field": {
> "name":"sell-by",
> "type":"tdate",
> "stored":true
> },
> "add-field":{
> "name":"catchall",
> "type":"text_general",
> "stored":false
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Jack Krupansky
Most of my questions were indeed answered - and I acknowledged that. But 
this claim of "get benefits" continues to be quite vague and murky. In fact, 
until you mentioned it, I was unaware that there had been any significant, 
noticeable benefits from any index changes from 4.1 to date - certainly 
nobody has stepped forward to highlight such benefits. So, as it stands, 
there is no clear, comprehensible user-oriented summary of the net benefits 
to an average Solr user of upgrading their index from 4.x to 5.0 - other 
than that if they fail to do so then they won't be able to upgrade their 
index when 6.0 comes out.


As another example, if a user were to upgrade a 4.4 index to 4.10, what net 
benefits should they expect to see? Does ANYBODY know? Ditto, say, for 4.4 
to 5.0.


Now... I'll have to admit that maybe there might be clarity among the Lucene 
dev/user crowd, but mostly I'm referring to the Solr user crowd, who aren't 
up on Lucene internals.


This is an ongoing problem with Solr - changes that occur strictly in Lucene 
with no source changes in Solr, never get reported as improvements to Solr - 
and it is nigh impossible for a Solr user to examine a list of Lucene 
changes and determine how they may or may not impact the use of Solr 
features.


-- Jack Krupansky

-Original Message- 
From: Adrien Grand

Sent: Monday, October 13, 2014 8:56 AM
To: dev@lucene.apache.org
Subject: Re: Is there going to be Lucene/Solr 4.11?

Hi Jack,

Ryan and Robert already answered all these questions in the following
thread: http://search-lucene.com/m/WwzTb21Dbf3, what are you missing?
Most 4.x releases (7 out of 11!) came with significant changes to the
index format in order to improve performance, compression or
robustness and this will continue to be the case with the 5.x
releases, including 5.0. So the average Lucene and Solr users will
keep on getting benefits from upgrading, like they already did when
upgrading from 3.x to 4.0 or from 4.x to 4.(x+1).

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5852) Add CloudSolrServer helper method to connect to a ZK ensemble

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5852.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0
 Assignee: Shalin Shekhar Mangar

Thanks everyone!

> Add CloudSolrServer helper method to connect to a ZK ensemble
> -
>
> Key: SOLR-5852
> URL: https://issues.apache.org/jira/browse/SOLR-5852
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5852-SH.patch, SOLR-5852-SH.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852_FK.patch, SOLR-5852_FK.patch
>
>
> We should have a CloudSolrServer constructor which takes a list of ZK servers 
> to connect to.
> Something like: 
> {noformat}
> public CloudSolrServer(String... zkHost);
> {noformat}
> - Document the current constructor better to mention that to connect to a ZK 
> ensemble you can pass a comma-delimited list of ZK servers like 
> zk1:2181,zk2:2181,zk3:2181
> - Thirdly, should getLbServer() and getZkStateReader() be public?
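As a usage illustration only (the varargs overload is the proposal above, not a committed API):

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class ZkEnsembleExample {
  public static void main(String[] args) throws Exception {
    // Existing constructor: one comma-delimited ensemble string.
    CloudSolrServer client = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
    client.setDefaultCollection("collection1");

    // Proposed overload (hypothetical until committed): one ZK host per argument.
    // CloudSolrServer proposed = new CloudSolrServer("zk1:2181", "zk2:2181", "zk3:2181");

    client.shutdown();
  }
}
{code}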



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5852) Add CloudSolrServer helper method to connect to a ZK ensemble

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169334#comment-14169334
 ] 

ASF subversion and git services commented on SOLR-5852:
---

Commit 1631411 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631411 ]

SOLR-5852: Add CloudSolrServer helper method to connect to a ZK ensemble

> Add CloudSolrServer helper method to connect to a ZK ensemble
> -
>
> Key: SOLR-5852
> URL: https://issues.apache.org/jira/browse/SOLR-5852
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Attachments: SOLR-5852-SH.patch, SOLR-5852-SH.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852_FK.patch, SOLR-5852_FK.patch
>
>
> We should have a CloudSolrServer constructor which takes a list of ZK servers 
> to connect to.
> Something like: 
> {noformat}
> public CloudSolrServer(String... zkHost);
> {noformat}
> - Document the current constructor better to mention that to connect to a ZK 
> ensemble you can pass a comma-delimited list of ZK servers like 
> zk1:2181,zk2:2181,zk3:2181
> - Thirdly, should getLbServer() and getZkStateReader() be public?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5852) Add CloudSolrServer helper method to connect to a ZK ensemble

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169332#comment-14169332
 ] 

ASF subversion and git services commented on SOLR-5852:
---

Commit 1631409 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1631409 ]

SOLR-5852: Add CloudSolrServer helper method to connect to a ZK ensemble

> Add CloudSolrServer helper method to connect to a ZK ensemble
> -
>
> Key: SOLR-5852
> URL: https://issues.apache.org/jira/browse/SOLR-5852
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Attachments: SOLR-5852-SH.patch, SOLR-5852-SH.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, SOLR-5852.patch, 
> SOLR-5852.patch, SOLR-5852_FK.patch, SOLR-5852_FK.patch
>
>
> We should have a CloudSolrServer constructor which takes a list of ZK servers 
> to connect to.
> Something like: 
> {noformat}
> public CloudSolrServer(String... zkHost);
> {noformat}
> - Document the current constructor better to mention that to connect to a ZK 
> ensemble you can pass a comma-delimited list of ZK servers like 
> zk1:2181,zk2:2181,zk3:2181
> - Thirdly, should getLbServer() and getZkStateReader() be public?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6579) SnapPuller Replication blocks clean shutdown of tomcat

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-6579:
---

Assignee: Shalin Shekhar Mangar

> SnapPuller Replication blocks clean shutdown of tomcat
> --
>
> Key: SOLR-6579
> URL: https://issues.apache.org/jira/browse/SOLR-6579
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.1
>Reporter: Philip Black-Knight
>Assignee: Shalin Shekhar Mangar
> Attachments: cleanupSnapPullerFinally.patch
>
>
> The main issue was described in the mailing list here:
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201409.mbox/browser
> and 
> here:
> but also including the quotes:
> original message from Nick
> {quote}
> Hello,
> I have solr 4.10 running on tomcat 7. I'm doing replication from one master
> to about 10 slaves, with standard configuration:
> {code}
>   <requestHandler name="/replication" class="solr.ReplicationHandler">
>     <lst name="master">
>       <str name="enable">${enable.master:false}</str>
>       <str name="replicateAfter">commit</str>
>       <str name="replicateAfter">startup</str>
>       <str name="confFiles">schema.xml,stopwords.txt</str>
>     </lst>
>     <lst name="slave">
>       <str name="enable">${enable.slave:false}</str>
>       <str name="masterUrl">http://master:8080/solr/mycore</str>
>       <str name="pollInterval">00:00:60</str>
>     </lst>
>   </requestHandler>
> {code}
> It appears that if tomcat gets shutdown while solr is replicating, it
> prevents tomcat from shutting down fully. Immediately after receiving the
> shutdown command, a thread dump is logged into catalina.out (this may have
> been turned on by some configuration someone else on my team made). I
> removed some threads that didn't look related, mostly about tomcat session
> replication, or with names like "http-bio-8080-exec-10".
> {code}
> 62252 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  –
> [mycore] webapp=/solr path=/replication
> params={command=details&_=1412014928648&wt=json} status=0 QTime=6
> 63310 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  –
> [mycore] webapp=/solr path=/replication
> params={command=details&_=1412014929699&wt=json} status=0 QTime=6
> 2014-09-29 14:22:10
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed
> mode):
> "fsyncService-12-thread-1" prio=10 tid=0x7f3bd4002000 nid=0x203d
> waiting on condition [0x7f3c271f]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007e1ff4458> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> "explicit-fetchindex-cmd" daemon prio=10 tid=0x7f3c0413e800
> nid=0x203c runnable [0x7f3c272f1000]
>java.lang.Thread.State: RUNNABLE
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:152)
> at java.net.SocketInputStream.read(SocketInputStream.java:122)
> at
> org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:198)
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:174)
> at
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
> at
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:80)
> at
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:114)
> at
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:152)
> at
> org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchPackets(SnapPuller.java:1239)
> at
> org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1187)
> at
> org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:774)
> at
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:424)
> at
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:337)
> at
> org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:227)
> "process reaper" daemon prio=10 tid=0x7f3c0409c000 nid=0x203b
> wait

[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169321#comment-14169321
 ] 

ASF subversion and git services commented on SOLR-6476:
---

Commit 1631400 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1631400 ]

SOLR-6476 refactored to get the Operation class outside

> Create a bulk mode for schema API
> -
>
> Key: SOLR-6476
> URL: https://issues.apache.org/jira/browse/SOLR-6476
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: managedResource
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch
>
>
> The current schema API does one operation at a time, while the normal use case is 
> that users add multiple fields/fieldtypes/copyFields etc. in one shot.
> Example: 
> {code:javascript}
> curl http://localhost:8983/solr/collection1/schema -H 
> 'Content-type:application/json'  -d '{
> "add-field": {
> "name":"sell-by",
> "type":"tdate",
> "stored":true
> },
> "add-field":{
> "name":"catchall",
> "type":"text_general",
> "stored":false
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-10-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169317#comment-14169317
 ] 

ASF subversion and git services commented on SOLR-6476:
---

Commit 1631397 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1631397 ]

SOLR-6476 refactored to get the Operation class outside

> Create a bulk mode for schema API
> -
>
> Key: SOLR-6476
> URL: https://issues.apache.org/jira/browse/SOLR-6476
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: managedResource
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
> SOLR-6476.patch
>
>
> The current schema API does one operation at a time, while the normal use case is 
> that users add multiple fields/fieldtypes/copyFields etc. in one shot.
> Example: 
> {code:javascript}
> curl http://localhost:8983/solr/collection1/schema -H 
> 'Content-type:application/json'  -d '{
> "add-field": {
> "name":"sell-by",
> "type":"tdate",
> "stored":true
> },
> "add-field":{
> "name":"catchall",
> "type":"text_general",
> "stored":false
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6462) forward updates asynchronously to peer clusters/leaders

2014-10-13 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169296#comment-14169296
 ] 

Renaud Delbru commented on SOLR-6462:
-

I have started to implement the CDCR request handler that will handle CDCR 
life-cycle actions and forward updates to the peer clusters.
While trying to implement the synchronisation of the life-cycle status amongst 
all the nodes of a cluster by using zookeeper, I have encountered a limitation 
of the SolrZkClient. The SolrZkClient provides methods such as getData or 
exists. The problem is that the client automatically wraps the provided watcher 
with a new watcher (see 
[here|https://github.com/apache/lucene-solr/blob/6ead83a6fafbdd6c444e2a837b09eccf34a255ef/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java#L255])
 which breaks the guarantee that "a watch object, or function/context pair, 
will only be triggered once for a given notification". This creates undesirable 
effects when we re-register the same watch in the Watcher callback method.

I have created the issue SOLR-6621 to notify about the problem.

> forward updates asynchronously to peer clusters/leaders
> ---
>
> Key: SOLR-6462
> URL: https://issues.apache.org/jira/browse/SOLR-6462
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Yonik Seeley
>
> http://heliosearch.org/solr-cross-data-center-replication/#UpdateFlow
> - An update will be received by the shard leader and versioned
> - Update will be sent from the leader to it’s replicas
> - Concurrently, update will be sent (synchronously or asynchronously) to the 
> shard leader in other clusters
> - Shard leader in the other cluster will receive already versioned update 
> (and not re-version it), and forward the update to it’s replicas



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6621) SolrZkClient does not guarantee that a watch object will only be triggered once for a given notification

2014-10-13 Thread Renaud Delbru (JIRA)
Renaud Delbru created SOLR-6621:
---

 Summary: SolrZkClient does not guarantee that a watch object will 
only be triggered once for a given notification
 Key: SOLR-6621
 URL: https://issues.apache.org/jira/browse/SOLR-6621
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: Trunk
Reporter: Renaud Delbru


The SolrZkClient provides methods such as getData or exists. The problem is 
that the client automatically wraps the provided watcher with a new watcher 
(see 
[here|https://github.com/apache/lucene-solr/blob/6ead83a6fafbdd6c444e2a837b09eccf34a255ef/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java#L255])
 which breaks the guarantee that "a watch object, or function/context pair, 
will only be triggered once for a given notification". This creates undesirable 
effects when we re-register the same watch in the Watcher callback method.

A possible solution would be to introduce a SolrZkWatcher class that will take 
care of submitting the job to the zkCallbackExecutor. Components in SolrCloud 
will extend this class and implement their own callback method. This will 
ensure that the watcher object that zookeeper receives remains the same.

See SOLR-6462 for background information.
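To make the proposal concrete, a minimal sketch of such a SolrZkWatcher (class and member names are illustrative, assuming a shared callback executor):

{code:java}
import java.util.concurrent.ExecutorService;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Hypothetical sketch, not committed code: the SAME Watcher instance is
// registered with ZooKeeper every time, preserving the "triggered once per
// notification" contract; only the callback work is offloaded.
public abstract class SolrZkWatcher implements Watcher {
  private final ExecutorService zkCallbackExecutor;

  protected SolrZkWatcher(ExecutorService zkCallbackExecutor) {
    this.zkCallbackExecutor = zkCallbackExecutor;
  }

  @Override
  public final void process(final WatchedEvent event) {
    // Never block the ZooKeeper event thread; hand the work to the executor.
    zkCallbackExecutor.submit(new Runnable() {
      @Override
      public void run() {
        onEvent(event);
      }
    });
  }

  // Components implement their callback here and may re-register "this"
  // without risking double notifications.
  protected abstract void onEvent(WatchedEvent event);
}
{code}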



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6421) ADDREPLICA doesn't respect :port_solr designation

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6421.
-
Resolution: Invalid

Not a bug. Use the "node" parameter to add a replica to a specific node. Thanks 
for investigating, Ishan.

> ADDREPLICA doesn't respect :port_solr designation
> -
>
> Key: SOLR-6421
> URL: https://issues.apache.org/jira/browse/SOLR-6421
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: Ubuntu 14.04
>Reporter: Ralph Tice
>  Labels: SolrCloud, difficulty-easy, impact-medium
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I issue an ADDREPLICA call like so:
>
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&createNodeSet=solr18.mycorp.com:8983_solr
> SolrCloud does not seem to respect the 8983_solr designation in the 
> createNodeSet parameter and instead places the shard on any JVM on the 
> machine instance.  First attempt I got a replica on 8994_solr and second 
> attempt to place a replica on 8983 got a replica on 8992_solr instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6594) deprecate the one action only APIs for schema editing

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169268#comment-14169268
 ] 

Shalin Shekhar Mangar commented on SOLR-6594:
-

+1

We are close to a 5.0 release so we should start deprecating the other APIs.

> deprecate the one action only APIs for schema editing
> -
>
> Key: SOLR-6594
> URL: https://issues.apache.org/jira/browse/SOLR-6594
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>
> with SOLR-6476 done and committed , we have more than one way of writing to 
> schema . Having two different ways of doing the same thing is counter 
> productive . 
> I would like to mark them as deprecated and the calls to those APIs will 
> succeed but will give a deprecation message in the output.  The read APIs 
> would continue to be the same , though .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6617) /update/json/docs handler needs to do a better job with tweet like JSON structures

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169267#comment-14169267
 ] 

Shalin Shekhar Mangar commented on SOLR-6617:
-

I can see why you did not choose a simple parameter to toggle FQN vs NAME. This 
makes mapping even more powerful because we can now choose how to map certain 
nested sections individually. We'll need to document that we use FQN by 
default, because it breaks backward compatibility with the previous release.

> /update/json/docs handler needs to do a better job with tweet like JSON 
> structures
> --
>
> Key: SOLR-6617
> URL: https://issues.apache.org/jira/browse/SOLR-6617
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Noble Paul
> Attachments: SOLR-6617.patch
>
>
> SOLR-6304 allows me to send in arbitrary JSON document and have Solr do 
> something reasonable with it. I tried this with a simple tweet and got a 
> weird error:
> {code}
> curl "http://localhost:8983/solr/tutorial/update/json/docs"; -H 
> 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":400,"QTime":11},"error":{"msg":"Document contains 
> multiple values for uniqueKey field: id=[14065694, 
> 136447843652214784]","code":400}}
> {code}
> Here's the tweet I'm trying to index:
> {code}
> {
> "user": {
> "name": "John Doe",
> "screen_name": "example",
> "lang": "en",
> "time_zone": "London",
> "listed_count": 221,
> "id": 14065694,
> "geo_enabled": true
> },
> "id": "136447843652214784",
> "text": "Morning San Francisco - 36 hours and counting.. #datasift",
> "created_at": "Tue, 15 Nov 2011 14:17:55 +"
> }
> {code}
> The error is because the nested user object within the tweet also has an "id" 
> field. So then I tried to map /user/id to user_id_s via:
> {code}
> curl 
> "http://localhost:8983/solr/tutorial/update/json/docs?f=user_id_s:/user/id"; 
> -H 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":400,"QTime":0},"error":{"msg":"Document is 
> missing mandatory uniqueKey field: id","code":400}}
> {code}
> So then I added the mapping for id explicitly and it worked:
> curl 
> "http://localhost:8983/solr/tutorial/update/json/docs?f=id:/id&f=user_id_s:/user/id";
>  -H 'Content-type:application/json' -d @sample_tweet.json
> {"responseHeader":{"status":0,"QTime":25}}
> Working through this wasn't terrible but our goal with features like this is 
> to have Solr make good decisions when possible to ease the new user's burden 
> of getting to know Solr.
> I'm just wondering if the reasonable thing to do wouldn't be to map the user 
> fields with user_ prefix? ie /user/id becomes user_id automatically.
> Lastly, I wanted to use field guessing with this so my JSON document gets 
> indexed in a reasonable way and the only data that got indexed is:
> {code}
> {
> "user_id_s": "14065694",
> "id": "136447843652214784",
> "_version_": 1481614081193410600
> }
> {code}
> So I explicitly defined the /update/json/docs request handler in my 
> solrconfig.xml as:
> {code}
>   <requestHandler name="/update/json/docs" class="solr.UpdateRequestHandler">
>     <lst name="defaults">
>       <str name="update.chain">add-unknown-fields-to-the-schema</str>
>       <str name="update.contentType">application/json</str>
>     </lst>
>   </requestHandler>
> {code}
> Same result - no field guessing! (this is using the schemaless example config)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Adrien Grand
Hi Jack,

Ryan and Robert already answered all these questions in the following
thread: http://search-lucene.com/m/WwzTb21Dbf3, what are you missing?
Most 4.x releases (7 out of 11!) came with significant changes to the
index format in order to improve performance, compression or
robustness and this will continue to be the case with the 5.x
releases, including 5.0. So the average Lucene and Solr users will
keep on getting benefits from upgrading, like they already did when
upgrading from 3.x to 4.0 or from 4.x to 4.(x+1).

-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-13 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169226#comment-14169226
 ] 

Noble Paul commented on SOLR-6307:
--

Have you provided the final patch? I can review and commit it.

> Atomic update remove does not work for int array or date array
> --
>
> Key: SOLR-6307
> URL: https://issues.apache.org/jira/browse/SOLR-6307
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.9
>Reporter: Kun Xi
>Assignee: Noble Paul
>  Labels: atomic, difficulty-medium, impact-medium
> Attachments: SOLR-6307.patch
>
>
> Try to remove an element from an int array or a date array with curl:
> {code}
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
> [1960]},  "id": 1098}]'
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
> ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
> "2014-02-21T12:00:00Z"]}, "id": 1098}]'
> {code}
> Neither of them works.
> The set and add operations for int arrays work. 
> The set, remove, and add operations for string arrays work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-13 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-6307:


Assignee: Noble Paul

> Atomic update remove does not work for int array or date array
> --
>
> Key: SOLR-6307
> URL: https://issues.apache.org/jira/browse/SOLR-6307
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.9
>Reporter: Kun Xi
>Assignee: Noble Paul
>  Labels: atomic, difficulty-medium, impact-medium
> Attachments: SOLR-6307.patch
>
>
> Try to remove an element from an int array or a date array with curl:
> {code}
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
> [1960]},  "id": 1098}]'
> curl http://localhost:8080/update\?commit\=true -H 
> 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
> ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
> "2014-02-21T12:00:00Z"]}, "id": 1098}]'
> {code}
> Neither of them works.
> The set and add operations for int arrays work. 
> The set, remove, and add operations for string arrays work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6421) ADDREPLICA doesn't respect :port_solr designation

2014-10-13 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169220#comment-14169220
 ] 

Ishan Chattopadhyaya commented on SOLR-6421:


Instead of "createNodeSet", try "node" parameter. Just tested that it works as 
desired.
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api9
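For example, the call from the report below becomes (reporter's hostname kept for illustration):

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&node=solr18.mycorp.com:8983_solr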

> ADDREPLICA doesn't respect :port_solr designation
> -
>
> Key: SOLR-6421
> URL: https://issues.apache.org/jira/browse/SOLR-6421
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: Ubuntu 14.04
>Reporter: Ralph Tice
>  Labels: SolrCloud, difficulty-easy, impact-medium
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I issue an ADDREPLICA call like so:
>
> http://localhost:8983/solr/admin/collections?action=ADDREPLICA&shard=myshard&collection=mycollection&createNodeSet=solr18.mycorp.com:8983_solr
> SolrCloud does not seem to respect the 8983_solr designation in the 
> createNodeSet parameter and instead places the shard on any JVM on the 
> machine instance.  First attempt I got a replica on 8994_solr and second 
> attempt to place a replica on 8983 got a replica on 8992_solr instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Is there going to be Lucene/Solr 4.11?

2014-10-13 Thread Jack Krupansky
Will there be ongoing confusion as to whether there will be a 4.11 release? 
The answer is clearly a resounding YES!


That said, it is now clear that 5.0 will be the next non-patch release.

But that does not eliminate the possibility that in the future somebody 
might come up with a rationale for another non-patch 4.x release. 
Unlikely, but still possible. Some users might be desperate for some 
non-patch feature and unable to migrate production Solr servers for 
some reason.


And there will certainly be plenty of confusion about 5.0 vs. 4.x until the 
committers start doing a much better job of explaining the transition to 
non-committers. I mean, I THINK I understand, but how many average Solr 
users really understand? For example, the question of whether average users 
will get any significant benefit from migrating their indexes from 4.x to 
5.0 is still quite murky. Will they be faster? Smaller? What features will 
or won't work with a non-migrated 4.x index? No clear answers have been 
provided, to date.


-- Jack Krupansky

-Original Message- 
From: Alexandre Rafalovitch

Sent: Sunday, October 12, 2014 9:20 PM
To: dev@lucene.apache.org
Subject: Is there going to be Lucene/Solr 4.11?

Hello,

I found two JIRAs I was interested in marked as merged in 4.x branch
(SOLR-5097 and SOLR-5098). Got excited and then discovered they were
merged after 4.10 and the info added to the 4.11 README section.

Does it mean there _will_ be a Solr 4.11? I thought the discussion was
that 4.10.x was the end of the line. Or is this just a general
practice with no consequence, and I will only see those JIRAs in 5.0
(whenever that shows up)?

Regards,
  Alex.

Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3955) Return only matched multiValued field

2014-10-13 Thread Jacob Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169192#comment-14169192
 ] 

Jacob Xu commented on SOLR-3955:


This is an urgent issue that should be addressed. Especially when the 
multiValued field has a large number of values (>1K), returning all of them 
costs significant memory and I/O.

> Return only matched multiValued field
> -
>
> Key: SOLR-3955
> URL: https://issues.apache.org/jira/browse/SOLR-3955
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 4.0
>Reporter: Dotan Cohen
>  Labels: features
>
> Assuming a multivalued, stored and indexed field named "comment". When 
> performing a search, it would be very helpful if there were a way to return 
> only the values of "comment" which contain the match. For example:
> When searching for "gold" instead of getting this result:
> <doc>
>   <arr name="comment">
>     <str>Theres a lady whos sure</str>
>     <str>all that glitters is gold</str>
>     <str>and shes buying a stairway to heaven</str>
>   </arr>
> </doc>
> I would prefer to get this result:
> <doc>
>   <arr name="comment">
>     <str>all that glitters is gold</str>
>   </arr>
> </doc>
> (pseudo-XML from memory, may not be accurate but illustrates the point)
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6591) Cluster state updates can be lost on exception in main queue loop

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-6591:
---

Assignee: Shalin Shekhar Mangar  (was: Noble Paul)

> Cluster state updates can be lost on exception in main queue loop
> -
>
> Key: SOLR-6591
> URL: https://issues.apache.org/jira/browse/SOLR-6591
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: Trunk
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk
>
>
> I found this bug while going through the failure on jenkins:
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/648/
> {code}
> 2 tests failed.
> REGRESSION:  
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch
> Error Message:
> Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
> core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
> core: halfcollection_shard1_replica1
> Stack Trace:
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
> CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
> [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
> halfcollection_shard1_replica1
> at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
> at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6606) In cloud mode the leader should distribute autoCommits to it's replicas

2014-10-13 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169142#comment-14169142
 ] 

Ferenczi Jim commented on SOLR-6606:


Is it intended for partial recovery, or only to distribute the downloads on a 
full recovery (when the replica downloads a full index from the master)? I am 
saying this because for partial recovery you need to handle the deletes as 
well. For instance, if one replica missed the last commit, it could download the 
segment from the master but it would lose all the deletes related to this 
update. Keeping the list of every delete per commit seems mandatory but also 
very expensive, unless we can garbage-collect them at some point. 

> In cloud mode the leader should distribute autoCommits to it's replicas
> ---
>
> Key: SOLR-6606
> URL: https://issues.apache.org/jira/browse/SOLR-6606
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6606.patch, SOLR-6606.patch
>
>
> Today in SolrCloud, different replicas of a shard can trigger auto (hard) 
> commits at different times. Although the documents which get added to the 
> system remain consistent, the way the segments get formed can differ 
> because of this.
> The downside of segments not getting formed in an identical fashion across 
> replicas is that when a replica goes into recovery, chances are it has to 
> do a full index replication from the leader. This is time consuming and we 
> can possibly avoid it if the leader forwards auto (hard) commit commands to 
> its replicas and the replicas never explicitly trigger an auto (hard) commit.
> I am working on a patch. Should have it up shortly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1672) RFE: facet reverse sort count

2014-10-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169061#comment-14169061
 ] 

Jan Høydahl commented on SOLR-1672:
---

Any interest in reviving this issue and adding support for ascending facet sort? 
I favor the {{index \[asc|desc\]}} / {{count \[asc|desc\]}} format over the 
{{facet.sortorder}} one.

> RFE: facet reverse sort count
> -
>
> Key: SOLR-1672
> URL: https://issues.apache.org/jira/browse/SOLR-1672
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 1.4
> Environment: Java, Solrj, http
>Reporter: Peter Sturge
>Priority: Minor
> Attachments: SOLR-1672.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> As suggested by Chris Hostetter, I have added an optional Comparator to the 
> BoundedTreeSet in the UnInvertedField class.
> This optional comparator is used when a new (and also optional) field facet 
> parameter called 'facet.sortorder' is set to the string 'dsc' 
> (e.g. &f.<fieldname>.facet.sortorder=dsc for a specific field, or 
> &facet.sortorder=dsc for all facets).
> Note that this parameter has no effect if facet.method=enum.
> Any value other than 'dsc' (including no value) reverts the BoundedTreeSet to 
> its default behaviour.
>  
> This change affects 2 source files:
> > UnInvertedField.java
> [line 438] The getCounts() method signature is modified to add the 
> 'facetSortOrder' parameter value to the end of the argument list.
>  
> DIFF UnInvertedField.java:
> - public NamedList getCounts(SolrIndexSearcher searcher, DocSet baseDocs, int 
> offset, int limit, Integer mincount, boolean missing, String sort, String 
> prefix) throws IOException {
> + public NamedList getCounts(SolrIndexSearcher searcher, DocSet baseDocs, int 
> offset, int limit, Integer mincount, boolean missing, String sort, String 
> prefix, String facetSortOrder) throws IOException {
> [line 556] The getCounts() method is modified to create an overridden 
> BoundedTreeSet(int, Comparator) if the 'facetSortOrder' parameter 
> equals 'dsc'.
> DIFF UnInvertedField.java:
> - final BoundedTreeSet<Long> queue = new BoundedTreeSet<Long>(maxsize);
> + final BoundedTreeSet<Long> queue = (sort.equals("count") || sort.equals("true"))
>     ? (facetSortOrder.equals("dsc")
>         ? new BoundedTreeSet<Long>(maxsize, new Comparator() {
>             @Override
>             public int compare(Object o1, Object o2) {
>               if (o1 == null || o2 == null)
>                 return 0;
>               int result = ((Long) o1).compareTo((Long) o2);
>               return (result != 0 ? (result > 0 ? -1 : 1) : 0); // lowest number first sort
>             }
>           })
>         : new BoundedTreeSet<Long>(maxsize))
>     : null;
> > SimpleFacets.java
> [line 221] A getFieldParam(field, "facet.sortorder", "asc"); is added to 
> retrieve the new parameter, if present. 'asc' used as a default value.
> DIFF SimpleFacets.java:
> + String facetSortOrder = params.getFieldParam(field, "facet.sortorder", 
> "asc");
>  
> [line 253] The call to uif.getCounts() in the getTermCounts() method is 
> modified to pass the 'facetSortOrder' value string.
> DIFF SimpleFacets.java:
> - counts = uif.getCounts(searcher, base, offset, limit, 
> mincount,missing,sort,prefix);
> + counts = uif.getCounts(searcher, base, offset, limit, 
> mincount,missing,sort,prefix, facetSortOrder);
> Implementation Notes:
> I have noted in testing that I was not able to retrieve any '0' counts as I 
> had expected.
> I believe this could be because there appear to be some optimizations in 
> SimpleFacets/count caching such that zero counts are not iterated (at least 
> not by default)
> as a performance enhancement.
> I could be wrong about this, and zero counts may appear under some other as 
> yet untested circumstances. Perhaps an expert familiar with this part of the 
> code can clarify.
> In fact, this is not such a bad thing (at least for my requirements), as a 
> whole bunch of zero counts is not necessarily useful (for my requirements, 
> starting at '1' is just right).
>  
> There may, however, be instances where someone *will* want zero counts - e.g. 
> searching for zero product stock counts (e.g. 'what have we run out of'). I 
> was envisioning the facet.mincount field
> being the preferred place to set where the 'lowest value' begins (e.g. 0 or 1 
> or possibly higher), but because of the caching/optimization, the behaviour 
> is somewhat different than expected.
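As a concrete request, combining the per-field form defined by the patch above with a hypothetical field name: append &facet=true&facet.field=category&f.category.facet.sortorder=dsc to a normal select query.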



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5076) Make it possible to get list of collections with CollectionsHandler

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5076.
-
Resolution: Duplicate

This was added by SOLR-5466. See 
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-List


> Make it possible to get list of collections with CollectionsHandler
> ---
>
> Key: SOLR-5076
> URL: https://issues.apache.org/jira/browse/SOLR-5076
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Priority: Minor
>
> It would be very useful to have /admin/collections (CollectionsHandler) send 
> a response similar to /admin/cores.  This should probably be the default 
> action, but requiring ?action=STATUS wouldn't be the end of the world.
> It would be very useful if CloudSolrServer were to implement a getCollections 
> method, but that probably should be a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4715) CloudSolrServer does not provide support for setting underlying server properties

2014-10-13 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169048#comment-14169048
 ] 

Noble Paul commented on SOLR-4715:
--

Just add a constructor which accepts an HttpClient as a param. It'll be 
consistent with the other implementations.
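For illustration, the shape such a constructor might take (field names and wiring are hypothetical, not CloudSolrServer's actual internals):

{code:java}
import java.net.MalformedURLException;

import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.LBHttpSolrServer;

// Hypothetical sketch of the suggested constructor shape, not committed code.
public class CloudSolrServerSketch {
  private final String zkHost;
  private final LBHttpSolrServer lbServer;

  public CloudSolrServerSketch(String zkHost, HttpClient httpClient)
      throws MalformedURLException {
    this.zkHost = zkHost;
    // Reusing the caller's HttpClient lets timeouts, connection pooling, etc.
    // configured upstream carry through to the wrapped LBHttpSolrServer.
    this.lbServer = new LBHttpSolrServer(httpClient);
  }
}
{code}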

> CloudSolrServer does not provide support for setting underlying server 
> properties
> -
>
> Key: SOLR-4715
> URL: https://issues.apache.org/jira/browse/SOLR-4715
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Hardik Upadhyay
>Assignee: Shawn Heisey
>  Labels: solr, solrj
> Attachments: SOLR-4715-incomplete.patch, SOLR-4715-incomplete.patch
>
>
> CloudSolrServer (and LBHttpSolrServer) do not allow the user to set 
> underlying HttpSolrServer and HttpClient settings without arcane java 
> enchantments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread

2014-10-13 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-6511:
-

Tim, the change to keep the LIR state as a map is not back-compatible. For 
example, I saw the following error upon upgrading a cluster to trunk which 
already had some LIR state written in ZK. But looking at the code, the same 
exception can happen when upgrading to the latest lucene_solr_4_10 branch too.

{code}
41228 [RecoveryThread] ERROR org.apache.solr.cloud.RecoveryStrategy   Error 
while trying to recover. 
core=coll_5x3_shard5_replica3:java.lang.ClassCastException: java.lang.String 
cannot be cast to java.util.Map
at 
org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryStateObject(ZkController.java:1993)
at 
org.apache.solr.cloud.ZkController.getLeaderInitiatedRecoveryState(ZkController.java:1958)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1105)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1075)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1071)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:355)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)

{code}
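A back-compat guard along these lines would presumably be needed wherever the znode is read (sketch only; names and the map key are hypothetical):

{code:java}
import java.util.Map;

// Hypothetical sketch, not the committed fix: tolerate pre-upgrade LIR znodes
// that stored the state as a bare String instead of the newer Map form.
final class LirStateCompat {
  static String readState(Object parsedZnodeJson) {
    if (parsedZnodeJson instanceof Map) {
      // New format: a map such as {"state":"down", ...} (key name assumed).
      return (String) ((Map<?, ?>) parsedZnodeJson).get("state");
    }
    // Old format: the znode held the state as a plain String.
    return (String) parsedZnodeJson;
  }
}
{code}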

> Fencepost error in LeaderInitiatedRecoveryThread
> 
>
> Key: SOLR-6511
> URL: https://issues.apache.org/jira/browse/SOLR-6511
> Project: Solr
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Timothy Potter
> Fix For: 4.10.2, 5.0
>
> Attachments: SOLR-6511.patch, SOLR-6511.patch
>
>
> At line 106:
> {code}
> while (continueTrying && ++tries < maxTries) {
> {code}
> should be
> {code}
> while (continueTrying && ++tries <= maxTries) {
> {code}
> This is only a problem when called from DistributedUpdateProcessor, as it can 
> have maxTries set to 1, which means the loop is never actually run.
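(Worked through: with maxTries = 1, the first check evaluates ++tries < maxTries as 1 < 1, which is false, so the loop body never runs; with <= it evaluates 1 <= 1 and the loop runs exactly once.)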



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5076) Make it possible to get list of collections with CollectionsHandler

2014-10-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169027#comment-14169027
 ] 

Jan Høydahl commented on SOLR-5076:
---

+1 Would like to see this as well!

> Make it possible to get list of collections with CollectionsHandler
> ---
>
> Key: SOLR-5076
> URL: https://issues.apache.org/jira/browse/SOLR-5076
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shawn Heisey
>Priority: Minor
>
> It would be very useful to have /admin/collections (CollectionsHandler) send 
> a response similar to /admin/cores.  This should probably be the default 
> action, but requiring ?action=STATUS wouldn't be the end of the world.
> It would be very useful if CloudSolrServer were to implement a getCollections 
> method, but that probably should be a separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org